Author detail
Author: Guillaume Renton
Documents available by this author (1)
Title: Analyzing and improving Graph Neural Networks
Document type: Thesis/HDR
Authors: Guillaume Renton, Author; Sébastien Adam, Author
Publisher: Université de Rouen
Publication year: 2021
Extent: 130 p.
Format: 21 x 30 cm
General note: bibliography
Thesis presented for the degree of Doctorat of the Université de Rouen Normandie, specialty Computer Science
Languages: English (eng)
Descriptors: [IGN subject headings] Artificial intelligence
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] band-pass filter
[IGN terms] spectral filter
[IGN terms] node
[IGN terms] graph neural network
[IGN terms] graph theory
[IGN terms] Laplace transform
[IGN terms] inverse transform
Decimal index: THESE Theses and HDR
Abstract: (author) Although theorised about fifteen years ago, the scientific community's interest in graph neural networks has only really taken off recently. These models aim to transpose the representation learning capacity inherent in deep neural networks onto graph data, via the learning of hidden states associated with the graph nodes. These hidden states are computed and updated according to the information contained in the neighborhood of each node. This recent interest in graph neural networks (GNNs) has led to a "jungle" of models and frameworks, making this field of research sometimes confusing. Historically, two main strategies have been explored: spatial GNNs on one side and spectral GNNs on the other. Spatial GNNs, sometimes also called Message Passing Neural Networks, are based on the computation of a message that aggregates the information contained in the neighborhood of each node. Spectral GNNs, on the other hand, are based on spectral graph theory and thus on the graph Laplacian. The eigendecomposition of the graph Laplacian makes it possible to define a graph Fourier transform and its inverse. From these transforms, different filters can be applied to the graph, leading to results similar to filtering on images or signals. In this thesis, we begin by introducing a third category, called spectral-rooted spatial convolution. Indeed, some recent methods take root in the spectral domain while avoiding the computation of the eigendecomposition of the graph Laplacian. This third category raises the question of the fundamental difference between spectral and spatial GNNs. We answer this question by proposing a general model unifying both strategies, showing notably that spectral GNNs are a particular case of spatial GNNs.
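The two strategies contrasted in the abstract can be illustrated with a small pure-Python sketch. All names here are illustrative, not the thesis's code: a hypothetical mean-aggregation message-passing step for the spatial side, and a hand-computed Laplacian eigendecomposition of a two-node toy graph for the spectral side (chosen so no numerical eigensolver is needed).

```python
import math

# --- Spatial side: one message-passing update (mean aggregation) ---
def message_passing_step(adj, h):
    """Each node's new hidden state = mean over itself and its neighbors."""
    return [(h[i] + sum(h[j] for j in adj[i])) / (1 + len(adj[i]))
            for i in range(len(h))]

# --- Spectral side: graph Fourier transform on a 2-node graph ---
# Laplacian L = D - A = [[1, -1], [-1, 1]] has eigenvalues 0 and 2 with
# orthonormal eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2), known in
# closed form for this toy graph.
U = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # columns = eigenvectors
eigvals = [0.0, 2.0]

def spectral_filter(x, g):
    """Graph Fourier transform of x, scale each mode by g(lambda), invert."""
    xhat = [U[0][k] * x[0] + U[1][k] * x[1] for k in range(2)]   # U^T x
    xhat = [g(eigvals[k]) * xhat[k] for k in range(2)]           # filter
    return [U[i][0] * xhat[0] + U[i][1] * xhat[1] for i in range(2)]  # U xhat

x = [1.0, 0.0]
low_pass = spectral_filter(x, lambda lam: 1.0 if lam < 1.0 else 0.0)
# low-pass keeps only the lambda = 0 mode: both nodes end at the signal mean
```

The low-pass filter illustrates the abstract's point that spatial mean aggregation and spectral low-pass filtering produce the same kind of smoothing effect.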
This unified model also allowed us to propose a spectral analysis of some GNNs popular in the scientific community, namely GCN, GIN, GAT, ChebNet and CayleyNet. This analysis shows that spatial models are limited to low-pass and high-pass filtering, while spectral models can produce any kind of filter. These results are then confirmed through a toy problem, showing first the inability of spatial models to define band-pass filters, and then the importance of designing such filters. These results led us to propose a method allowing any kind of filter while limiting the network's number of parameters. Indeed, even though spectral models are able to design any kind of filter, each new filter requires adding a new weight matrix to the neural network. In order to reduce the number of parameters, we propose to adapt depthwise separable convolution to graphs through a method called Depthwise Separable Graph Convolution Network. This method is evaluated on both transductive and inductive learning, outperforming state-of-the-art results. Finally, we propose a method defined in the spatial domain in order to take edge attributes into account. Indeed, this issue has received little attention from the scientific community, and very few methods can include edge attributes. Our proposal, called Edge Embedding Graph Neural Network, consists of embedding edge attributes into a new space through a first neural network, before using the extracted features in a GNN. This method is evaluated on a particular problem of symbol detection in a graph.
Contents:
1- Introduction
2- Background
3- What is a Graph Neural Network?
4- Graph Neural Networks: Are they Spectral or Spatial?
5- Depthwise Separable Graph Convolution Network
6- Edge Embedding Graph Neural Network
7- Conclusion
Record number: 15259
Author affiliation: non IGN
Theme: COMPUTER SCIENCE/MATHEMATICS
Nature: French thesis
Thesis note: Doctoral thesis: Computer Science: Rouen: 2021
Host laboratory: Laboratoire LITIS
DOI: none
Online: https://tel.hal.science/tel-03346018/
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100612
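The abstract's depthwise separable graph convolution splits one large filtering operation into a cheap per-channel (depthwise) graph propagation followed by a pointwise mixing of channels, which is what reduces the parameter count. A minimal pure-Python sketch of that factorization, under assumed shapes and names (this is a generic illustration of the idea, not the thesis's Depthwise Separable Graph Convolution Network):

```python
# Hypothetical depthwise-separable graph convolution sketch.
# Depthwise step: each feature channel is aggregated over the graph with
# its own scalar weight (one parameter per channel, not a full matrix).
# Pointwise step: a c_in x c_out weight matrix mixes channels per node.

def depthwise_separable_gconv(adj, H, depth_w, point_w):
    n = len(H)                 # number of nodes
    c_in = len(H[0])           # input channels
    c_out = len(point_w[0])    # output channels
    # Depthwise: channel-wise neighborhood sum, scaled per channel
    D = [[depth_w[c] * (H[i][c] + sum(H[j][c] for j in adj[i]))
          for c in range(c_in)] for i in range(n)]
    # Pointwise: mix channels with a shared weight matrix
    return [[sum(D[i][c] * point_w[c][o] for c in range(c_in))
             for o in range(c_out)] for i in range(n)]

# Toy usage: 2 nodes, 2 input channels, 1 output channel
adj = {0: [1], 1: [0]}
H = [[1.0, 2.0], [3.0, 4.0]]
out = depthwise_separable_gconv(adj, H, depth_w=[1.0, 0.5],
                                point_w=[[1.0], [1.0]])
```

A full graph convolution here would need a weight matrix per filter; the separable form needs only `c_in` scalars plus one `c_in x c_out` mixing matrix, mirroring the parameter saving the abstract describes.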