Series detail
Multimodal scene understanding: algorithms, applications and deep learning |
Documents available in this series (2)



Multimodal scene understanding: algorithms, applications and deep learning, ch. 11. Decision fusion of remote-sensing data for land cover classification / Arnaud Le Bris (2019)
Series title: Multimodal scene understanding: algorithms, applications and deep learning, ch. 11
Title: Decision fusion of remote-sensing data for land cover classification
Document type: Chapter/Contribution
Authors: Arnaud Le Bris, Author; Nesrine Chehata, Author; Walid Ouerghemmi, Author; Cyril Wendl, Author; Tristan Postadjian, Author; Anne Puissant, Author; Clément Mallet, Author
Publisher: London, New York: Academic Press
Year of publication: 2019
Pagination: pp. 341-382
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] supervised classification
[IGN terms] multisource data fusion
[IGN terms] very high resolution image
[IGN terms] Sentinel-MSI image
[IGN terms] SPOT 6 image
[IGN terms] SPOT 7 image
[IGN terms] land cover
[IGN terms] time series
[IGN terms] urban area
Abstract: (Author) Very high spatial resolution (VHR) multispectral imagery enables a fine delineation of objects and the use of texture information. Other sensors provide lower spatial resolution but richer spectral or temporal information, permitting richer land cover semantics. To benefit from the complementary characteristics of these multimodal sources, a late decision fusion scheme is proposed. It exploits the full capabilities of each sensor while dealing with both semantic and spatial uncertainties. The different remote-sensing modalities are first classified independently. Separate class membership maps are computed and then merged at the pixel level using decision fusion rules. A final label map is obtained from a global regularization scheme that handles spatial uncertainties while preserving the contrasts of the initial images. It relies on a probabilistic graphical model involving a fit-to-data term related to the merged class membership measures and an image-based contrast-sensitive regularization term. Conflict between sources can also be integrated into this scheme. Two experimental cases are presented. The first considers the fusion of VHR multispectral imagery with lower spatial resolution hyperspectral imagery for a fine-grained land cover classification problem in dense urban areas. The second uses SPOT 6/7 satellite imagery and Sentinel-2 time series to extract urban area footprints through a two-step process: classifications are first merged to detect building objects, from which an urban area prior probability is derived and finally merged with the Sentinel-2 classification output for urban footprint detection.
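The pixel-level merging step the abstract describes — per-sensor class membership maps combined by a decision fusion rule, then reduced to a label map — can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the function name, the weighted-average fusion rule, and the assumption that all membership maps are already resampled to a common grid are my own; the chapter evaluates several fusion rules and adds a graphical-model regularization on top of the preliminary labels.

```python
import numpy as np

def fuse_decisions(membership_maps, weights=None):
    """Merge per-sensor class membership maps at the pixel level.

    membership_maps: list of (H, W, n_classes) arrays, one per sensor,
    each pixel holding class membership probabilities that sum to 1.
    weights: optional per-sensor weights for an (illustrative)
    weighted-average fusion rule.

    Returns the merged membership map and a preliminary label map
    (before any spatial regularization).
    """
    stack = np.stack(membership_maps)        # (n_sensors, H, W, n_classes)
    if weights is None:
        weights = np.ones(len(membership_maps))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Weighted mean of the sensors' membership maps, pixel by pixel.
    merged = np.tensordot(weights, stack, axes=1)
    merged /= merged.sum(axis=-1, keepdims=True)   # renormalize per pixel
    labels = merged.argmax(axis=-1)                # preliminary label map
    return merged, labels
```

For example, fusing a VHR-based map that is confident in class 0 with a coarser map leaning toward class 1 yields, under equal weights, a merged map whose argmax follows the stronger evidence.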
Record number: H2019-002
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Topic: IMAGERY
Nature: Chapter / contribution
nature-HAL: ChOuvrScient
DOI: 10.1016/B978-0-12-817358-9.00017-2
Online publication date: 02/08/2019
Online: https://doi.org/10.1016/B978-0-12-817358-9.00017-2
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93303

Multimodal scene understanding: algorithms, applications and deep learning, ch. 8. Multimodal localization for embedded systems: a survey / Imane Salhi (2019)
Series title: Multimodal scene understanding: algorithms, applications and deep learning, ch. 8
Title: Multimodal localization for embedded systems: a survey
Document type: Chapter/Contribution
Authors: Imane Salhi, Author; Martyna Poreba, Author; Erwan Piriou, Author; Valérie Gouet-Brunet, Author; Maroun Ojail, Author
Publisher: London, New York: Academic Press
Year of publication: 2019
Pagination: pp. 199-278
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] image understanding
[IGN terms] data fusion
[IGN terms] geopositioning
[IGN terms] onboard instrument
[IGN terms] automotive navigation
[IGN terms] pedestrian navigation
[IGN terms] mixed reality
Abstract: (Author) Localization by jointly exploiting multimodal information, such as cameras, inertial measurement units (IMU), and global navigation satellite system (GNSS) data, is an active key research topic for autonomous embedded systems such as smart glasses or drones. These systems have become central to acquisition, modeling, and interpretation for scene understanding. Exploiting different sensor types improves the robustness of the localization, e.g. by flexibly combining the accuracy of one sensor with the reactivity of another. This chapter presents a survey of existing multimodal techniques dedicated to the localization of autonomous embedded systems. Both the algorithmic and the hardware architecture sides are investigated, providing a global overview of the key elements to consider when designing such embedded systems. Several applications in different domains (e.g. localization for mapping, pedestrian localization, automotive navigation, and mixed reality) illustrate the importance of such systems in scene understanding today.
Record number: H2019-001
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Topic: IMAGERY
Nature: Chapter / contribution
nature-HAL: ChOuvrScient
DOI: 10.1016/B978-0-12-817358-9.00014-7
Online publication date: 02/08/2019
Online: https://doi.org/10.1016/B978-0-12-817358-9.00014-7
Electronic resource format: chapter URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93300