
Principle of Relevant Information for Graph Sparsification

May 20, 2022

Many complex structures and phenomena are naturally described as graphs, such as social networks, image-based brain functional connectivity networks, and climate causal-effect networks; graphs are also essential ingredients of graph neural networks in deep learning.

Given a large graph with hundreds or thousands of edges, how can we remove the redundant or less informative edges so that the resulting subgraph maintains the important structural properties, for better interpretability and better predictions? In this paper, we propose the first general and effective information-theoretic formulation of graph sparsification to find such a subgraph, taking inspiration from the so-called Principle of Relevant Information (PRI). We provide both theoretical and empirical justifications for the validity of our Graph-PRI. We also apply Graph-PRI to problems including medical imaging-derived brain network classification, providing an entirely new way to highlight the important connections in the brain network for better interpretability.
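For context, the classical PRI can be read as a trade-off objective: given the original data X, find a reduced representation Y that is as compressed as possible (low entropy) while remaining close to X (low divergence). A minimal sketch of this general form follows, where β is the trade-off hyperparameter; the specific entropy and divergence measures are those chosen in the PRI literature and the paper itself, not fixed by this sketch:

```latex
% General PRI trade-off (sketch): compress Y while keeping it relevant to X.
\min_{Y} \; H(Y) \;+\; \beta \, D\!\left(Y \,\|\, X\right)
```

Graph-PRI extends this idea from scalar random variables to graphs by expressing both terms through the graph Laplacian, as described in the abstract below.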

The contributing functional connectivity links for MCI patients. We visualize edges with a probability of more than 50% of being selected among our generated subgraphs. The neural systems are color-coded as follows: sensorimotor network (SMN), occipital network (ON), fronto-parietal network (FPN), default mode network (DMN), cingulo-opercular network (CON), and cerebellum network (CN).

This is joint work between Visual Intelligence, NEC Labs Europe, the University of Amsterdam, and the University of Florida. The full-text publication will be available shortly.

Publication

Principle of Relevant Information for Graph Sparsification

May 20, 2022

Shujian Yu, Francesco Alesiani, Wenzhe Yin, Robert Jenssen and Jose C. Principe

Paper abstract

Graph sparsification aims to reduce the number of edges of a graph while maintaining its structural properties. In this paper, we propose the first general and effective information-theoretic formulation of graph sparsification, by taking inspiration from the Principle of Relevant Information (PRI). To this end, we extend the PRI from a standard scalar random variable setting to structured data (i.e., graphs). Our Graph-PRI objective is achieved by operating on the graph Laplacian, made possible by expressing the graph Laplacian of a subgraph in terms of a sparse edge selection vector w. We provide both theoretical and empirical justifications for the validity of our Graph-PRI. We also analyze its analytical solutions in a few special cases. We finally present three representative real-world applications, namely graph sparsification, graph regularized multi-task learning, and medical imaging-derived brain network classification, to demonstrate the effectiveness, the versatility and the enhanced interpretability of our approach over prevalent sparsification techniques.
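To make the edge-selection idea concrete, below is a minimal sketch (not the paper's optimization code) of how the Laplacian of a subgraph can be written in terms of a sparse edge-selection vector w, using the standard incidence-matrix identity L(w) = B diag(w) Bᵀ. The function name and the NumPy-based setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def subgraph_laplacian(edges, w, n):
    """Laplacian of the subgraph selected by the edge weights/indicators w.

    edges : list of (i, j) pairs for the full graph's m edges
    w     : length-m vector; w[e] in [0, 1] selects (or down-weights) edge e
    n     : number of nodes

    Uses the identity L(w) = sum_e w[e] * (e_i - e_j)(e_i - e_j)^T,
    i.e. L(w) = B diag(w) B^T with B the signed incidence matrix.
    """
    L = np.zeros((n, n))
    for e, (i, j) in enumerate(edges):
        b = np.zeros(n)
        b[i], b[j] = 1.0, -1.0
        L += w[e] * np.outer(b, b)
    return L

# Toy example: a triangle graph, keeping only two of its three edges.
edges = [(0, 1), (1, 2), (0, 2)]
w = np.array([1.0, 1.0, 0.0])        # drop edge (0, 2)
L_w = subgraph_laplacian(edges, w, n=3)
print(L_w)  # Laplacian of the path 0-1-2
```

In Graph-PRI, a vector like w is what gets optimized under the PRI trade-off applied to the corresponding Laplacians; the exact graph entropy and divergence measures are defined in the paper.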