Résumé : |
(auteur) Although theorised about fifteen years ago, the scientific community's interest in graph neural networks has only really taken off recently. These models aim to transpose the representation learning capacity of deep neural networks onto graph data by learning hidden states associated with the graph nodes. These hidden states are computed and updated according to the information contained in the neighborhood of each node.

This recent interest in graph neural networks (GNNs) has led to a "jungle" of models and frameworks, making the field sometimes confusing. Historically, two main strategies have been explored: spatial GNNs on one side and spectral GNNs on the other. Spatial GNNs, also called Message Passing Neural Networks, are based on the computation of a message that aggregates the information contained in the neighborhood of each node. Spectral GNNs, on the other hand, are based on spectral graph theory and thus on the graph Laplacian. The eigendecomposition of the graph Laplacian makes it possible to define a graph Fourier transform and its inverse. From these transforms, different filters can be applied to the graph, producing results analogous to filtering images or signals.

In this thesis, we begin by introducing a third category, called spectral-rooted spatial convolution. Indeed, some recent methods are rooted in the spectral domain while avoiding the computation of the eigendecomposition of the graph Laplacian. This third category raises the question of the fundamental difference between spectral and spatial GNNs. We answer it by proposing a general model unifying both strategies, showing notably that spectral GNNs are a particular case of spatial GNNs. This unified model also allows us to propose a spectral analysis of some GNNs popular in the scientific community, namely GCN, GIN, GAT, ChebNet and CayleyNet. This analysis shows that spatial models are limited to low-pass and high-pass filtering, while spectral models can produce any kind of filter. These results are then confirmed on a toy problem, which illustrates both the inability of spatial models to define band-pass filters and the importance of designing such filters.

These results led us to propose a method allowing any kind of filter while limiting the network's number of parameters. Indeed, even though spectral models can design any kind of filter, each new filter requires adding a new weight matrix to the neural network. In order to reduce the number of parameters, we propose to adapt Depthwise Separable Convolution to graphs through a method called Depthwise Separable Graph Convolution Network. This method is evaluated on both transductive and inductive learning, outperforming state-of-the-art results.

Finally, we propose a method defined in the spatial domain in order to take edge attributes into account. This issue has received little attention from the scientific community, and very few methods allow edge attributes to be included. Our proposal, called Edge Embedding Graph Neural Network, consists in embedding edge attributes into a new space through a first neural network, before using the extracted features in a GNN. This method is evaluated on a particular problem of symbol detection in graphs.
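The contrast between the two historical families can be illustrated with a few lines of linear algebra. The following is a minimal NumPy sketch on a toy 4-node graph, not the thesis code: the spatial view averages each node's neighborhood, while the spectral view filters the node signal through the graph Fourier transform obtained from the Laplacian eigendecomposition. The chosen graph, features and frequency response are arbitrary.

```python
import numpy as np

A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])            # toy undirected graph (a 4-cycle)
X = np.random.randn(4, 3)                   # node features: 4 nodes, 3 channels

# Spatial view: each node aggregates the hidden states of its neighborhood.
deg = A.sum(axis=1, keepdims=True)
H_spatial = (A @ X) / deg                   # mean over neighbors

# Spectral view: filtering through the graph Fourier transform.
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt # normalized graph Laplacian
lam, U = np.linalg.eigh(L)                  # eigenvalues (frequencies) and Fourier basis
h = np.exp(-2.0 * lam)                      # an arbitrary low-pass frequency response
H_spectral = U @ (h[:, None] * (U.T @ X))   # transform, filter, inverse transform
```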
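The depthwise separable idea can be sketched as follows. This is a hypothetical PyTorch layer that combines several fixed graph supports with per-channel (depthwise) scalar weights followed by a single pointwise channel-mixing matrix, which is where the parameter saving comes from; the exact DSGCN layer, its supports and its training setup are defined in the thesis, and the class name and signature here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableGraphConv(nn.Module):
    """Illustrative depthwise separable graph convolution (not the thesis implementation)."""
    def __init__(self, in_channels, out_channels, num_supports):
        super().__init__()
        # Depthwise part: one scalar per (support, channel) instead of one full
        # weight matrix per support.
        self.depthwise = nn.Parameter(torch.ones(num_supports, in_channels))
        # Pointwise part: a single shared channel-mixing matrix.
        self.pointwise = nn.Linear(in_channels, out_channels)

    def forward(self, supports, x):
        # supports: list of (N, N) graph operators (e.g. fixed filters); x: (N, in_channels)
        out = torch.zeros_like(x)
        for s, C in enumerate(supports):
            out = out + (C @ x) * self.depthwise[s]  # per-channel reweighting of each filtered signal
        return self.pointwise(out)
```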
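The edge-embedding strategy can likewise be sketched. Below is a hypothetical PyTorch layer in which a small neural network first embeds each edge attribute vector, and the resulting features weight the messages exchanged between nodes; the actual EEGNN architecture, layer sizes and aggregation scheme are those of the thesis, not of this sketch.

```python
import torch
import torch.nn as nn

class EdgeEmbeddingLayer(nn.Module):
    """Illustrative message passing modulated by embedded edge attributes (not the thesis code)."""
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        # First network: embeds raw edge attributes into a scalar weight per edge.
        self.edge_mlp = nn.Sequential(nn.Linear(edge_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, 1))
        self.node_lin = nn.Linear(node_dim, node_dim)

    def forward(self, x, edge_index, edge_attr):
        # x: (N, node_dim), edge_index: (2, E) long tensor, edge_attr: (E, edge_dim)
        src, dst = edge_index
        w = torch.sigmoid(self.edge_mlp(edge_attr)).squeeze(-1)  # learned weight per edge
        msg = self.node_lin(x)[src] * w.unsqueeze(-1)            # messages scaled by edge embedding
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)                              # sum incoming messages per node
        return torch.relu(out)
```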