Author detail
Author: Claude Berrou
Documents available by this author (1)
Sparsity, redundancy and robustness in artificial neural networks for learning and memory / Philippe Tigréat (2017)
Title: Sparsity, redundancy and robustness in artificial neural networks for learning and memory
Document type: Thesis/HDR
Authors: Philippe Tigréat, Author; Claude Berrou, Thesis supervisor
Publisher: Institut Mines-Télécom Atlantique IMT Atlantique
Other publisher: Université Bretagne Loire
Publication year: 2017
Extent: 150 p.
Format: 21 x 30 cm
General note: bibliography
IMT Atlantique thesis under the seal of Université Bretagne Loire for the degree of Doctor in Signal, Image, Vision
Languages: English (eng)
Descriptors: [IGN subject headings] Artificial intelligence
[IGN terms] learning (cognition)
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] coding
[IGN terms] cognition
[IGN terms] memory
[IGN terms] pattern recognition
[IGN terms] data storage
Decimal index: THESE Theses and HDR
Abstract: (author) The objective of research in Artificial Intelligence (AI) is to reproduce human cognitive abilities by means of modern computers. The results of the last few years seem to herald a technological revolution that could profoundly change society. We focus on two fundamental cognitive aspects: learning and memory. Associative memories make it possible to store information elements and to retrieve them from a sub-part of their content, thus mimicking human memory. Deep learning makes it possible to move from an analog perception of the outside world to a sparse and more compact representation. In Chapter 2, we present a neural associative memory model inspired by Willshaw networks, with constrained connectivity. This brings a performance improvement in message retrieval and more efficient storage of information. In Chapter 3, a convolutional architecture was applied to a task of reading partially displayed words under conditions similar to those of an earlier psychology study on human subjects. This experiment highlighted similarities between the behavior of the network and that of the human subjects with respect to various properties of the word display. Chapter 4 introduces a new method for representing categories using neuron assemblies in deep networks. For problems with a large number of classes, this makes it possible to significantly reduce the dimensions of a network. Chapter 5 describes a method for interfacing deep unsupervised networks with clique-based associative memories.
Contents:
1- Introduction
2- Sparse Neural Associative Memories
3- Robustness of Deep Neural Networks to Erasures in a Reading Task
4- Assembly Output Codes for Learning Neural Networks
5- Combination of Unsupervised Learning and Associative Memory
6- Conclusion and Openings
Record number: 25836
Author affiliation: non IGN
Theme: COMPUTING/MATHEMATICS
Nature: French thesis
Thesis note: Doctoral thesis: Signal, Image, Vision: Mines-Télécom Atlantique: 2017
Host laboratory: Laboratoire Labsticc
nature-HAL: Thesis
DOI: none
Online: https://tel.archives-ouvertes.fr/tel-01812053
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95178
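As background to the associative-memory model summarized in the abstract, the classical Willshaw binary associative memory that Chapter 2 takes as its starting point can be sketched as follows. This is a minimal illustrative sketch of the standard Willshaw scheme only, not of the author's constrained-connectivity variant; all names here (`WillshawMemory`, `store`, `retrieve`) are hypothetical.

```python
import numpy as np

class WillshawMemory:
    """Minimal Willshaw binary associative memory (illustrative sketch).

    Sparse binary patterns are stored as the logical OR of their outer
    products in a binary weight matrix. Retrieval from a partial cue
    fires every unit that receives input from all active cue units.
    """

    def __init__(self, n):
        self.W = np.zeros((n, n), dtype=bool)

    def store(self, pattern):
        p = np.asarray(pattern, dtype=bool)
        self.W |= np.outer(p, p)  # Hebbian-style binary storage

    def retrieve(self, cue):
        c = np.asarray(cue, dtype=bool)
        k = c.sum()                    # number of active units in the cue
        s = self.W[:, c].sum(axis=1)   # synaptic input received by each unit
        return s == k                  # threshold at full cue support

# Store one sparse pattern, then recall it from a sub-part of its content.
mem = WillshawMemory(8)
pattern = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=bool)
mem.store(pattern)
cue = np.array([1, 0, 1, 0, 0, 0, 0, 0], dtype=bool)  # partial cue
recalled = mem.retrieve(cue)  # recovers the full stored pattern
```

Capacity in such memories degrades as the binary weight matrix saturates with stored patterns; the thesis's sparse, connectivity-constrained variant addresses exactly this retrieval-performance and storage-efficiency trade-off.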