Author details
Author: Jenny Benois-Pineau
Documents available written by this author (2)
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
[article]
Title: Visual vs internal attention mechanisms in deep neural networks for image classification and object detection
Document type: Article/Communication
Authors: Abraham Montoya Obeso, Author; Jenny Benois-Pineau, Author; Mireya S. García Vázquez, Author; et al., Author
Publication year: 2022
Article pages: no. 108411
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] visual analysis
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] feature extraction
[IGN terms] eye tracking
[IGN terms] saliency
[IGN terms] semantic segmentation
[IGN terms] data visualization
Abstract: (author) The so-called "attention mechanisms" in Deep Neural Networks (DNNs) denote an automatic adaptation of DNNs to capture representative features for a given classification task and related data. Such attention mechanisms act both globally, by reinforcing feature channels, and locally, by stressing features in each feature map. Channel and feature importance are learnt in the global end-to-end DNN training process. In this paper, we present a study and propose a method with a different approach, adding supplementary visual data alongside the training images. We use human visual attention maps obtained independently from psycho-visual experiments, both in task-driven and in free-viewing conditions, or from powerful models for the prediction of visual attention maps. We add these visual attention maps as new data alongside the images, thus introducing human visual attention into the DNN training, and compare it with both global and local automatic attention mechanisms. Experimental results show that known attention mechanisms in DNNs behave much like human visual attention, but the proposed approach still allows faster convergence and better performance in image classification tasks.
Record number: A2022-197
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.patcog.2021.108411
Online publication date: 12/11/2021
Online: https://doi.org/10.1016/j.patcog.2021.108411
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99988
in Pattern recognition > vol 123 (March 2022). - no. 108411
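The abstract above contrasts internal (learnt) attention with externally supplied human visual attention maps. The minimal PyTorch sketch below illustrates the two ideas side by side: a squeeze-and-excitation style channel-attention block versus attaching a saliency map as an extra input channel. The class and function names, shapes, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (assumed layout, not the paper's exact method):
# internal channel attention vs. an external visual attention map as input.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style global channel reweighting (internal attention)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                       # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                  # global average pooling -> (B, C)
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight feature channels


def add_visual_attention_channel(image, saliency_map):
    """Concatenate a human/predicted attention map as an extra input channel."""
    # image: (B, 3, H, W); saliency_map: (B, 1, h, w) with values in [0, 1]
    saliency = F.interpolate(saliency_map, size=image.shape[-2:],
                             mode="bilinear", align_corners=False)
    # (B, 4, H, W): the first convolution of the network must accept 4 channels
    return torch.cat([image, saliency], dim=1)
```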
Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
Title: Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs
Document type: Article/Communication
Authors: Abraham Montoya Obeso, Author; Jenny Benois-Pineau, Author; Kamel Guissous, Author; Valérie Gouet-Brunet, Author; Mireya S. García Vázquez, Author; Alejandro A. Ramírez Acosta, Author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2018
Projects: 2-No accessible information - article not open access
Conference: IPTA 2018, 8th International Conference on Image Processing Theory, Tools and Applications, 07/11/2018 - 10/11/2018, Xi'an, China, Proceedings IEEE
Pagination: pp. 1 - 6
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] Bootstrap (statistics)
[IGN terms] classification by convolutional neural network
[IGN terms] image understanding
[IGN terms] data mining
[IGN terms] content-based image retrieval
[IGN terms] saliency
[IGN terms] urban scene
Abstract: (author) Incorporating Human Visual System (HVS) models into the building of classifiers has become an intensively researched field in visual content mining. Among the variety of HVS models, we are interested in so-called visual saliency maps. Unlike scan-paths, they model instantaneous attention, assigning a degree of interest/saliency for humans to each pixel in the image plane. In various tasks of visual content understanding, these maps have proved efficient at stressing the contribution of areas of interest in the image plane to classifier models. In previous works, saliency layers have been introduced into Deep CNNs, showing that they reduce training time while reaching similar accuracy and loss values in optimal models. For large image collections, efficient building of saliency maps relies on predictive models of visual attention. These are generally bottom-up and not adapted to specific visual tasks, unless they are built for specific content, such as the "urban images"-targeted saliency maps we also compare in this paper. In the present research, we propose a "bootstrap" strategy for building visual saliency maps for particular tasks of visual data mining. A small collection of images relevant to the visual understanding problem is annotated with gaze fixations. This annotation is then propagated to a large training dataset and compared with the classical GBVS model and a recent saliency method for urban image content. The classification results within the Deep CNN framework are promising compared to purely automatic visual saliency prediction.
Record number: C2018-097
Authors' affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Communication
HAL nature: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/IPTA.2018.8608125
Online publication date: 14/01/2019
Online: https://doi.org/10.1109/IPTA.2018.8608125
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95885
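As a hedged illustration of the "saliency layers" mentioned in the abstract, the sketch below reweights convolutional feature maps by a resized visual saliency map, e.g. one propagated from gaze-annotated images as in the bootstrap strategy. The function name, the floor parameter, and the PyTorch setting are assumptions, not the paper's implementation.

```python
# Illustrative sketch (assumed, not the paper's code): stress salient regions
# of a feature tensor using an externally computed visual saliency map.
import torch
import torch.nn.functional as F


def saliency_weighting(features, saliency_map, floor: float = 0.1):
    """Reweight feature maps by a saliency map.

    features:     (B, C, H, W) activations of a convolutional layer
    saliency_map: (B, 1, h, w) saliency values in [0, 1]
    floor:        keeps a minimal response outside salient regions
    """
    sal = F.interpolate(saliency_map, size=features.shape[-2:],
                        mode="bilinear", align_corners=False)
    return features * (floor + (1.0 - floor) * sal)  # broadcast over channels
```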