Documents available in this category (22)
Saliency-guided single shot multibox detector for target detection in SAR images / Lan Du in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)
Title: Saliency-guided single shot multibox detector for target detection in SAR images
Document type: Article/Conference paper
Authors: Lan Du, Author; Lu Li, Author; Di Wei, Author; et al., Author
Publication year: 2020
Pagination: pp 3366 - 3376
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] target detection
[IGN terms] data fusion
[IGN terms] speckled radar image
[IGN terms] saliency
Abstract: (author) The single shot multibox detector (SSD), a proposal-free method based on convolutional neural network (CNN), has recently been proposed for target detection and has found applications in synthetic aperture radar (SAR) images. Moreover, the saliency information reflected in the saliency map can highlight the target of interest while suppressing clutter, which is beneficial for better scene understanding. Therefore, in this article, we propose a saliency-guided SSD (S-SSD) for target detection in SAR images, in which we effectively integrate the saliency into the SSD network not only to suggest where to focus but also to improve the representation capability in complex scenes. The proposed S-SSD contains two separate convolutional backbone subnetworks, one with the original SAR image as input to extract features, and the other with the corresponding saliency map obtained from the modified Itti's method as input to acquire refined saliency information under supervision. In addition, the dense connection structure, instead of the plain structure used in the original SSD, is applied in the two convolutional backbone architectures to utilize multiscale information with fewer parameters. Then, for integrating saliency information to guide the network to emphasize informative regions, multilevel fusion modules are utilized to merge the two streams into a unified framework, thereby making the whole network end-to-end jointly trained. Finally, the convolutional predictors are used to predict targets. The experimental results on the miniSAR real data demonstrate that the proposed S-SSD can achieve better detection performance than state-of-the-art methods.
Record number: A2020-237
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2953936
Online publication date: 11/12/2019
Online: https://doi.org/10.1109/TGRS.2019.2953936
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94983
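Detectors in the SSD family, such as the S-SSD described in the record above, match candidate boxes to ground-truth targets by intersection-over-union (IoU); a minimal sketch of the standard IoU computation in plain Python (a generic illustration, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partially overlapping boxes
```

Default boxes whose IoU with a ground-truth box exceeds a threshold (commonly 0.5) are treated as positive matches during training.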
Title: Saliency and Burstiness for Feature Selection in CBIR
Document type: Article/Conference paper
Authors: Kamel Guissous, Author; Valérie Gouet-Brunet, Author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2020
Projects: 2-No accessible information - article not open access
Conference: EUVIP 2019, 8th European Workshop on Visual Information Processing, 28/10/2019 - 31/10/2019, Rome, Italy, IEEE Proceedings
Extent: pp 111 - 116
Format: 21 x 30 cm
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] object-based image analysis
[IGN terms] visual analysis
[IGN terms] feature extraction
[IGN terms] content-based image retrieval
[IGN terms] saliency
[IGN terms] 3D salient area
Abstract: (author) The paper addresses the problem of visual feature selection in content-based image retrieval (CBIR). We propose to study two strategies: the first uses visual saliency, which selects the most salient features of the image, and the second exploits burstiness, which detects and processes the repeated visual elements in the image. To detect and describe the visual features in images, we rely on a deep local features approach based on a convolutional neural network. The two strategies are evaluated for image retrieval on different datasets, according to two criteria: quality of retrieval and volume of manipulated features.
Record number: C2019-027
Author affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: Conference paper
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/EUVIP47703.2019.8946126
Online publication date: 02/01/2020
Online: https://ieeexplore.ieee.org/document/8946126
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94521

Extracting urban landmarks from geographical datasets using a random forests classifier / Yue Lin in International journal of geographical information science IJGIS, vol 33 n° 12 (December 2019)
Title: Extracting urban landmarks from geographical datasets using a random forests classifier
Document type: Article/Conference paper
Authors: Yue Lin, Author; Yuyang Cai, Author; Yue Gong, Author; et al., Author
Publication year: 2019
Pagination: pp 2406 - 2423
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN terms] random forest classification
[IGN terms] automatic extraction
[IGN terms] route management
[IGN terms] georeferenced dataset
[IGN terms] landmark
[IGN terms] classification accuracy
[IGN terms] spatial mental representation
[IGN terms] saliency
[IGN terms] Shenzhen
[IGN terms] city
Abstract: (author) Urban landmarks are of significant importance to spatial cognition and route navigation. However, the current landmark extraction methods mainly focus on the visual salience of landmarks and are insufficient for obtaining high extraction accuracy when the size of the geographical dataset varies. This study introduces a random forests (RF) classifier combined with the synthetic minority oversampling technique (SMOTE) for urban landmark extraction. Both GIS and social sensing data are employed to quantify the structural and cognitive salience of the examined urban features, which are available from basic spatial databases or mainstream web service application programming interfaces (APIs). The results show that the SMOTE-RF model performs well in urban landmark extraction, with the values of recall, precision, F-measure and AUC reaching 0.851, 0.831, 0.841 and 0.841, respectively. Additionally, this method is suitable for both large and small geographical datasets. The ranking of variable importance given by this model further indicates that certain cognitive measures, such as feature class, Weibo popularity and Bing popularity, can serve as crucial factors for determining a landmark. The optimal variable combination for landmark extraction is also acquired, which might provide support for eliminating the variable selection requirement in other landmark extraction methods.
Record number: A2019-426
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2019.1620238
Online publication date: 28/05/2019
Online: https://doi.org/10.1080/13658816.2019.1620238
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93559
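The SMOTE step in the record above balances the training set by synthesising minority-class samples through interpolation between a sample and a near neighbour; a bare-bones sketch of the idea in plain Python (function name and data are hypothetical, and production code would use a library such as imbalanced-learn):

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority samples by interpolating between a
    sample and its nearest minority-class neighbour (SMOTE-style)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbour of a among the other minority samples
        nb = min((p for p in minority if p is not a),
                 key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        gap = rng.random()  # random position along the segment a -> nb
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, nb)))
    return synthetic

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
print(smote_like(pts, 2))
```

Every synthetic point lies on a segment between two existing minority samples, so the oversampled class stays inside its original feature-space region.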
Saliency-guided deep neural networks for SAR image change detection / Jie Geng in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
Title: Saliency-guided deep neural networks for SAR image change detection
Document type: Article/Conference paper
Authors: Jie Geng, Author; Xiaorui Ma, Author; Xiaojun Zhou, Author; et al., Author
Publication year: 2019
Pagination: pp 7365 - 7377
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] object-based image analysis
[IGN terms] fuzzy classification
[IGN terms] unsupervised classification
[IGN terms] neural network classification
[IGN terms] change detection
[IGN terms] image sampling
[IGN terms] feature extraction
[IGN terms] despeckling filter
[IGN terms] speckled radar image
[IGN terms] fuzzy logic
[IGN terms] land cover
[IGN terms] saliency
[IGN terms] microwave remote sensing
Free keywords: hierarchical fuzzy C-means clustering (HFCM)
Abstract: (author) Change detection is an important task to identify land-cover changes between acquisitions at different times. For synthetic aperture radar (SAR) images, the inherent speckle noise of the images can lead to false changed points, which affects the change detection performance. Besides, the supervised classifier in a change detection framework requires numerous training samples, which are generally obtained by manual labeling. In this paper, a novel unsupervised method named saliency-guided deep neural networks (SGDNNs) is proposed for SAR image change detection. In the proposed method, to weaken the influence of speckle noise, a salient region that probably belongs to the changed object is extracted from the difference image. To obtain pseudotraining samples automatically, hierarchical fuzzy C-means (HFCM) clustering is developed to select samples with higher probabilities to be changed and unchanged. Moreover, to enhance the discrimination of sample features, DNNs based on the nonnegative- and Fisher-constrained autoencoder are applied for final detection. Experimental results on five real SAR data sets demonstrate the effectiveness of the proposed approach.
Record number: A2019-536
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2913095
Online publication date: 19/05/2019
Online: http://doi.org/10.1109/TGRS.2019.2913095
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94154
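The hierarchical fuzzy C-means (HFCM) clustering in the record above builds on standard fuzzy C-means, which assigns each sample a graded membership in every cluster rather than a hard label; a minimal one-dimensional sketch of the standard algorithm (not the paper's hierarchical variant):

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D data: alternate membership and centre
    updates for a fixed number of iterations; returns sorted centres."""
    lo, hi = min(xs), max(xs)
    # spread the initial centres across the data range
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    for _ in range(iters):
        u = []  # u[k][i]: membership of point k in cluster i
        for x in xs:
            d = [abs(x - ci) + 1e-12 for ci in centers]  # avoid div by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for i in range(c)])
        # centre update: membership-weighted mean of the data
        centers = [
            sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
            sum(u[k][i] ** m for k in range(len(xs)))
            for i in range(c)
        ]
    return sorted(centers)

data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(fuzzy_c_means(data))
```

On well-separated data the centres converge near the two cluster means; the memberships can then rank samples by how likely they are to be changed or unchanged, which is the role HFCM plays when selecting pseudo-training samples.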
Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
Title: Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs
Document type: Article/Conference paper
Authors: Abraham Montoya Obeso, Author; Jenny Benois-Pineau, Author; Kamel Guissous, Author; Valérie Gouet-Brunet, Author; Mireya S. García Vázquez, Author; Alejandro A. Ramírez Acosta, Author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2018
Projects: 2-No accessible information - article not open access
Conference: IPTA 2018, 8th International Conference on Image Processing Theory, Tools and Applications, 07/11/2018 - 10/11/2018, Xi'an, China, IEEE Proceedings
Extent: pp 1 - 6
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] bootstrap (statistics)
[IGN terms] convolutional neural network classification
[IGN terms] image understanding
[IGN terms] data mining
[IGN terms] content-based image retrieval
[IGN terms] saliency
[IGN terms] urban scene
Abstract: (author) Incorporating Human Visual System (HVS) models into the building of classifiers has become an intensively researched field in visual content mining. Among the variety of HVS models, we are interested in so-called visual saliency maps. Contrary to scan-paths, they model instantaneous attention, assigning a degree of interestingness/saliency for humans to each pixel in the image plane. In various tasks of visual content understanding, these maps have proved efficient at stressing the contribution of the areas of interest in the image plane to classifier models. In previous works, saliency layers have been introduced in deep CNNs, showing that they allow reducing training time while getting similar accuracy and loss values in optimal models. In the case of large image collections, efficient building of saliency maps is based on predictive models of visual attention. They are generally bottom-up and are not adapted to specific visual tasks, unless they are built for specific content, such as the "urban images"-targeted saliency maps we also compare in this paper. In the present research, we propose a "bootstrap" strategy for building visual saliency maps for particular tasks of visual data mining. A small collection of images relevant to the visual understanding problem is annotated with gaze fixations. Then the propagation to a large training dataset is ensured and compared with the classical GBVS model and a recent method of saliency for urban image content. The classification results within the deep CNN framework are promising compared to purely automatic visual saliency prediction.
Record number: C2018-097
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Conference paper
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/IPTA.2018.8608125
Online publication date: 14/01/2019
Online: https://doi.org/10.1109/IPTA.2018.8608125
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95885

Road extraction based on snakes and sophisticated line extraction / Ivan Laptev (1997)