The F-transform on graphs: A novel interpretable layer for graph convolutional networks with application to brain connectome analysis
This paper extends the F-transform to weighted graphs, creating a new interpretable “F-Conv” layer for GCNs. We apply it to brain connectome analysis, achieving high accuracy with full model transparency, unlike traditional black-box GCNs.
F-transform, Graph convolutional networks, Explainable AI, Brain connectome, Fuzzy partition, Interpretable models, F-Conv
Graph Convolutional Networks (GCNs) have become a powerful tool for analyzing graph-structured data like brain connectomes. However, their internal workings remain opaque, hindering clinical translation. The F-transform, a technique from fuzzy systems theory, is known for its ability to create interpretable models. In this paper, we extend the F-transform from its traditional domain to weighted graphs. We define the direct and inverse F-transforms on graphs, establish the conditions for reconstruction, and propose a new, fully interpretable GCN layer based on this transform. We demonstrate the utility of our “F-Conv” layer by applying it to the problem of classifying autism spectrum disorder from functional connectomes, showing performance comparable to standard GCNs but with the significant advantage of model transparency.
Introduction
The brain connectome, modeled as a weighted graph, provides a powerful representation of neural circuitry. Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art method for learning from such data, achieving high accuracy in disease classification tasks. However, the convolutional filters learned by GCNs are difficult to interpret, limiting our understanding of the neurobiological patterns they discover.
The F-transform, or fuzzy transform, is a technique for function approximation that has recently been used to design interpretable deep learning models for images. Its strength lies in decomposing a signal with respect to a “fuzzy partition” of the domain, yielding components that have a clear semantic meaning.
To date, the F-transform has not been extended to graph-structured data. This paper bridges that gap. Our contributions are:
- A formal definition of a fuzzy partition on a weighted graph.
- The formulation of the direct and inverse F-transforms for functions defined on the vertices of a graph, along with a reconstruction error theorem.
- The design of an “F-Conv” layer for GCNs, where the convolution is performed via the F-transform, making the learned filters interpretable as fuzzy subgraphs.
- An application to brain connectome analysis, showing our model’s ability to identify clinically relevant network patterns.
Methodology and expected theoretical results
The research will proceed in two main stages: theoretical development and experimental validation.
Theoretical development: F-transform on graphs
Let G = (V, E, w) be a weighted graph, and let f: V \to \mathbb{R} be a signal on the vertices (e.g., activation levels).

1. Fuzzy partition of a graph: We first define a fuzzy partition of the vertex set V, that is, a set of fuzzy sets \{A_1, \ldots, A_n\} on V that are “localized” on the graph. A natural approach is to derive them from graph clustering or community detection, so that each fuzzy set represents a brain network or functional module. We will establish the formal conditions these fuzzy sets must satisfy to constitute a valid fuzzy partition (e.g., a Ruspini-type condition \sum_{k=1}^{n} A_k(v) = 1 for every v \in V).
2. Direct and inverse transforms: The direct F-transform maps f to a vector of components [F_1, \ldots, F_n], where each component is the weighted average of f over the fuzzy region A_k: F_k = \sum_{v \in V} A_k(v) f(v) \,/\, \sum_{v \in V} A_k(v). The inverse F-transform reconstructs an approximation \hat{f}(v) = \sum_{k=1}^{n} F_k A_k(v) of the original function from these components.
3. Main theoretical result: The key result will be a theorem bounding the reconstruction error \|f - \hat{f}\| in terms of properties of f (e.g., its smoothness over the graph, measured via the graph Laplacian) and of the fuzzy partition (e.g., the “size” of the fuzzy regions).
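The direct and inverse transforms above reduce to simple matrix operations once the partition is stored as a membership matrix. The following is a minimal NumPy sketch under that assumption; the toy partition A and signal f are hypothetical, stand-ins for memberships that would in practice come from graph clustering or community detection.

```python
import numpy as np

def direct_f_transform(f, A):
    """Direct F-transform: F_k = sum_v A_k(v) f(v) / sum_v A_k(v),
    i.e., the weighted average of the vertex signal f over each fuzzy
    region A_k. A has shape (n_regions, n_vertices)."""
    return (A @ f) / A.sum(axis=1)

def inverse_f_transform(F, A):
    """Inverse F-transform: f_hat(v) = sum_k F_k A_k(v), valid when
    the memberships satisfy the Ruspini condition sum_k A_k(v) = 1."""
    return F @ A

# Toy example: 6 vertices covered by two overlapping fuzzy regions;
# each column of A sums to 1 (Ruspini condition).
A = np.array([
    [1.0, 1.0, 0.7, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.7, 1.0, 1.0],
])
f = np.array([1.0, 1.2, 1.1, 3.0, 3.2, 2.9])  # vertex signal

F = direct_f_transform(f, A)       # one component per fuzzy region
f_hat = inverse_f_transform(F, A)  # smoothed reconstruction of f
```

Because each reconstructed value is a convex combination of the components, f_hat is a smoothed version of f that stays within the range of the components, which is exactly the behavior the reconstruction error theorem will quantify.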
Methodology: The F-Conv layer
The F-Conv layer will replace the standard spectral or spatial convolution in a GCN. The “filters” of this layer will be the basis functions of the fuzzy partition. The output of the layer will be the F-transform components, which can be passed to subsequent layers. Because the basis functions are predefined and interpretable (e.g., as “the visual network” or “the default mode network”), the weights learned by the network will directly indicate the importance of these specific brain networks for the classification task.
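The design described above can be sketched as follows. This is a hypothetical NumPy prototype, not the proposed PyTorch implementation: the membership matrix A is fixed and interpretable (one row per named brain network), and only the per-component weights W are learned, so each weight directly scores the contribution of one network to each output feature.

```python
import numpy as np

class FConvLayer:
    """Sketch of an F-Conv layer: a fixed, interpretable F-transform
    followed by a learnable linear read-out over its components."""

    def __init__(self, A, n_out, seed=0):
        self.A = A                           # (n_regions, n_vertices), fixed fuzzy partition
        self.region_mass = A.sum(axis=1)     # normalizers for the regional averages
        rng = np.random.default_rng(seed)
        # learnable weights: one per (fuzzy region, output feature) pair
        self.W = rng.standard_normal((A.shape[0], n_out)) * 0.1

    def forward(self, f):
        # direct F-transform: average the vertex signal over each fuzzy region
        F = (self.A @ f) / self.region_mass  # (n_regions,)
        # linear combination of interpretable components
        return F @ self.W                    # (n_out,)

# usage on a toy two-region partition of a 6-vertex graph
A = np.array([[1.0, 1.0, 0.7, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.7, 1.0, 1.0]])
f = np.array([1.0, 1.2, 1.1, 3.0, 3.2, 2.9])
layer = FConvLayer(A, n_out=4)
out = layer.forward(f)  # each feature is a weighted mix of the 2 regional averages
```

Inspecting `layer.W` after training would then directly reveal which fuzzy regions (brain networks) drive each output feature, which is the transparency claim of the proposal.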
Work plan
- Months 1-3: Develop the complete theory of the F-transform on graphs. Prove the main reconstruction theorem. This will be done in close collaboration with Prof. Perfilieva’s group.
- Months 4-5: Implement the F-Conv layer in a deep learning framework like PyTorch or TensorFlow.
- Months 6-9: Apply the F-Conv GCN model to the ABIDE dataset for autism classification. Compare its performance (accuracy, AUC) and interpretability against standard GCN models.
- Months 10-12: Write the manuscript, focusing on both the novel theoretical contribution and the significant application in computational neuroscience.
Potential target journals
- IEEE Transactions on Fuzzy Systems (Q1): The premier journal for novel theoretical developments in fuzzy logic and their applications. Perfect for the core theoretical contribution.
- IEEE Transactions on Neural Networks and Learning Systems (Q1): A top venue for new neural network architectures and learning algorithms.
- Medical Image Analysis (MedIA) (Q1): An excellent choice if the results on the connectome analysis are particularly strong, as it is a leading journal in the field.
Minimum viable article (MVA) strategy
This project splits cleanly into a core theoretical contribution and a machine learning application.
- Paper 1 (The MVA - theoretical foundation):
- Scope: Introduce the complete theory of the F-transform on weighted graphs. This includes the formal definition of a fuzzy partition on a graph, the definition of the direct and inverse transforms, and the full proof of the reconstruction error theorem.
- Goal: To establish this new mathematical tool in the fuzzy systems and graph theory literature.
- Target venue: IEEE Transactions on Fuzzy Systems.
- Paper 2 (The application - the F-Conv layer):
- Scope: This paper would briefly cite Paper 1 for the theory and then focus entirely on the application. It would introduce the “F-Conv” layer as a novel, interpretable architecture for GCNs. The core of the paper would be the extensive experimental validation on the brain connectome (e.g., ABIDE) dataset, comparing it to standard GCNs and demonstrating its superior interpretability.
- Goal: To introduce a new, white-box GCN model for the XAI and computational neuroscience communities.
- Target venue: IEEE Transactions on Neural Networks and Learning Systems or Medical Image Analysis.