Descriptor
IGN terms > mathematics > mathematical statistics > data analysis > classification > classification by neural network
classification by neural network
Documents available in this category (579)
Title: XXIV ISPRS Congress, Commission 1
Document type: Conference proceedings
Authors: Nicolas Paparoditis, Scientific editor; Clément Mallet, Scientific editor; Florent Lafarge, Scientific editor; Stefan Hinz, Scientific editor; R. Feitosa, Scientific editor; Martin Weinmann, Scientific editor; Boris Jutzi, Scientific editor
Publisher: International Society for Photogrammetry and Remote Sensing (ISPRS)
Publication year: 2020
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, num. 43-B1-2020
Conference: ISPRS 2020, Commission 1, virtual congress "Imaging today, foreseeing tomorrow", 31/08/2020-02/09/2020, Nice (online), France, ISPRS OA Archives, Commission 1
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] machine learning
[IGN terms] classification by convolutional neural network
[IGN terms] sensor calibration (imagery)
[IGN terms] lasergrammetry
[IGN terms] image processing
Record number: 17625
Author affiliation: ENSG+Ext (2020- )
Theme: IMAGERY
Nature: Proceedings
nature-HAL: DirectOuvrColl/Actes
DOI: none
Online publication date: 06/08/2020
Online: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B1-2020/in [...]
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97136

Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning / Clément Dechesne in Remote sensing, Vol 11 n° 24 (December-2 2019)
[article]
Title: Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning
Document type: Article/Paper
Authors: Clément Dechesne, Author; Sébastien Lefèvre, Author; Rodolphe Vadaine, Author; Guillaume Hajduch, Author; Ronan Fablet, Author
Publication year: 2019
Projects: SESAME / Fablet, Ronan
Article pages: n° 2997
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] target detection
[IGN terms] Sentinel SAR image
[IGN terms] ship
[IGN terms] moving object
Abstract: (author) The monitoring and surveillance of maritime activities are critical issues in both military and civilian fields, including among others fisheries' monitoring, maritime traffic surveillance, coastal and at-sea safety operations, and tactical situations. In operational contexts, ship detection and identification are traditionally performed by a human observer who identifies all kinds of ships from a visual analysis of remotely sensed images. Such a task is very time-consuming and cannot be conducted at a very large scale, while Sentinel-1 SAR data now provide regular and worldwide coverage. Meanwhile, with the emergence of GPUs, deep learning methods are now established as state-of-the-art solutions for computer vision, replacing human intervention in many contexts. They have been shown to be well suited to ship detection, most often with very high resolution SAR or optical imagery. In this paper, we go one step further and investigate a deep neural network for the joint classification and characterization of ships from Sentinel-1 SAR data. We benefit from the synergies between AIS (Automatic Identification System) and Sentinel-1 data to build significant training datasets. We design a multi-task neural network architecture composed of one joint convolutional network connected to three task-specific networks, namely for ship detection, classification, and length estimation. The experimental assessment shows that our network provides promising results, with accurate classification and length estimation (classification overall accuracy: 97.25%, mean length error: 4.65 m ± 8.55 m).
Record number: A2019-632
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/rs11242997
Online publication date: 13/12/2019
Online: https://doi.org/10.3390/rs11242997
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95325
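The multi-task architecture the abstract describes — one shared convolutional trunk feeding separate detection, classification, and length-estimation heads — can be sketched in miniature with NumPy. This is an illustrative toy forward pass only, not the authors' network: the patch size, kernel, weights, and the five hypothetical ship classes are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation (illustration only)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Shared convolutional trunk: one kernel + ReLU, then a tiny pooled feature vector.
patch = rng.standard_normal((21, 21))          # stand-in for a SAR image patch
kernel = rng.standard_normal((5, 5)) * 0.1
feat_map = np.maximum(conv2d_valid(patch, kernel), 0.0)
shared = np.array([feat_map.mean(), feat_map.max(), feat_map.std()])

# Three task-specific heads on the shared features (weights are hypothetical).
W_det = rng.standard_normal(3)
p_ship = sigmoid(W_det @ shared)               # detection: probability of a ship
W_cls = rng.standard_normal((5, 3))
p_class = softmax(W_cls @ shared)              # classification: 5 toy ship types
W_len = rng.standard_normal(3)
length_m = abs(W_len @ shared) * 50.0          # length regression, in metres
```

The point of the shared trunk is that all three tasks reuse the same learned features; only the small head weights are task-specific.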
in Remote sensing > Vol 11 n° 24 (December-2 2019) . - n° 2997 [article]

An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data / Puzhao Zhang in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data
Document type: Article/Paper
Authors: Puzhao Zhang, Author; Andrea Nascetti, Author; Yifang Ban, Author; Maoguo Gong, Author
Publication year: 2019
Article pages: pp 50 - 62
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] California (United States)
[IGN terms] vegetation map
[IGN terms] classification by convolutional neural network
[IGN terms] change detection
[IGN terms] high-resolution image
[IGN terms] multiband image
[IGN terms] multitemporal image
[IGN terms] speckled radar image
[IGN terms] fire
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] Short Waves InfraRed
Abstract: (author) Compared with optical sensors, the all-weather and day-and-night imaging ability of Synthetic Aperture Radar (SAR) makes it competitive for burnt area mapping. This study investigates the potential of Sentinel-1 C-band SAR sensors in burnt area mapping with an implicit Radar Convolutional Burn Index (RCBI). Based on multitemporal Sentinel-1 SAR data, a convolutional network-based classification framework is proposed to learn the RCBI for highlighting the burnt areas. We explore the mapping accuracy level that can be achieved using SAR intensity and phase information for both VV and VH polarizations. Moreover, we investigate the decorrelation of Interferometric SAR (InSAR) coherence in response to wildfire events using different temporal baselines. The experimental results on two recent fire events, Thomas Fire (Dec. 2017) and Carr Fire (July 2018) in California, demonstrate that the learnt RCBI has better potential than the classical log-ratio operator in highlighting burnt areas. By exploiting both VV and VH information, the developed RCBI achieved an overall mapping accuracy of 94.68% and 94.17% on the Thomas Fire and Carr Fire, respectively.
Record number: A2019-545
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.013
Online publication date: 04/10/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.013
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94189
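The classical log-ratio operator, against which the learnt RCBI is benchmarked, can be illustrated in a few lines of NumPy. The intensity values and the -3 dB threshold below are hypothetical, chosen only to show the principle: burnt areas darken in post-fire C-band backscatter, so their log-ratio is strongly negative.

```python
import numpy as np

# Hypothetical pre- and post-fire SAR backscatter intensities (linear units)
# for a tiny 2x2 scene; the left column is "burnt", the right is unchanged.
pre  = np.array([[0.20, 0.22],
                 [0.19, 0.21]])
post = np.array([[0.05, 0.21],
                 [0.06, 0.20]])

# Log-ratio change index in dB: strongly negative where backscatter dropped.
log_ratio = 10.0 * np.log10(post / pre)

# Illustrative threshold: flag pixels that darkened by more than 3 dB as burnt.
burnt = log_ratio < -3.0
```

The paper's contribution is to replace this fixed, hand-designed operator with an index learnt implicitly by a convolutional network.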
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019) . - pp 50 - 62 [article]
Copies (3)
Barcode 081-2019121, call number RAB, journal, Centre de documentation, En réserve L003: available
Barcode 081-2019123, call number DEP-RECP, journal, LASTIG, unit deposit: not for loan
Barcode 081-2019122, call number DEP-RECF, journal, Nancy, unit deposit: not for loan

Combining Sentinel-1 and Sentinel-2 Satellite image time series for land cover mapping via a multi-source deep learning architecture / Dino Ienco in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Combining Sentinel-1 and Sentinel-2 Satellite image time series for land cover mapping via a multi-source deep learning architecture
Document type: Article/Paper
Authors: Dino Ienco, Author; Roberto Interdonato, Author; Raffaele Gaetano, Author; Ho Tong Minh Dinh, Author
Publication year: 2019
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] Burkina Faso
[IGN terms] vegetation map
[IGN terms] classification by random forests
[IGN terms] classification by convolutional neural network
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] multiband image
[IGN terms] speckled radar image
[IGN terms] Sentinel MSI image
[IGN terms] Sentinel SAR image
[IGN terms] land cover
[IGN terms] Réunion Island
[IGN terms] time series
[IGN terms] land use
Abstract: (author) The huge amount of data currently produced by modern Earth Observation (EO) missions has allowed for the design of advanced machine learning techniques able to support complex Land Use/Land Cover (LULC) mapping tasks. The Copernicus programme developed by the European Space Agency provides, with missions such as Sentinel-1 (S1) and Sentinel-2 (S2), radar and optical (multi-spectral) imagery, respectively, at 10 m spatial resolution with a revisit time of around 5 days. Such high temporal resolution makes it possible to collect Satellite Image Time Series (SITS) that support a plethora of Earth surface monitoring tasks. How to effectively combine the complementary information provided by such sensors remains an open problem in the remote sensing field. In this work, we propose a deep learning architecture to combine information coming from S1 and S2 time series, namely TWINNS (TWIn Neural Networks for Sentinel data), able to discover spatial and temporal dependencies in both types of SITS. The proposed architecture is devised to boost the land cover classification task by leveraging two levels of complementarity, i.e., the interplay between radar and optical SITS as well as the synergy between spatial and temporal dependencies. Experiments carried out on two study sites characterized by different land cover characteristics (i.e., the Koumbia site in Burkina Faso and Reunion Island, an overseas department of France in the Indian Ocean) demonstrate the significance of our proposal.
Record number: A2019-544
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.016
Online publication date: 27/09/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94186
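The two-branch fusion idea behind TWINNS — encode each sensor's time series separately, then classify from the concatenated features — can be sketched as follows. This toy NumPy version only stands in for the real twin networks: the encoders, feature dimensions, and six land-cover classes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(series, W):
    """Toy per-source temporal encoder: average over time, then linear map + ReLU."""
    return np.maximum(W @ series.mean(axis=0), 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One pixel's time series: 12 radar dates x 2 polarizations (S1),
# 12 optical dates x 4 bands (S2) -- all values synthetic.
s1 = rng.standard_normal((12, 2))
s2 = rng.standard_normal((12, 4))

# Each source gets its own encoder weights; features are fused by concatenation.
W_s1 = rng.standard_normal((8, 2))
W_s2 = rng.standard_normal((8, 4))
fused = np.concatenate([encode(s1, W_s1), encode(s2, W_s2)])   # 16-dim feature

# A shared classifier head over the fused radar+optical representation.
W_out = rng.standard_normal((6, 16))      # 6 hypothetical land-cover classes
probs = softmax(W_out @ fused)
```

Keeping one encoder per sensor lets each branch specialize in its modality before the classifier exploits their complementarity.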
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019) [article]
Deep learning for conifer/deciduous classification of airborne LiDAR 3D point clouds representing individual trees / Hamid Hamraz in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Deep learning for conifer/deciduous classification of airborne LiDAR 3D point clouds representing individual trees
Document type: Article/Paper
Authors: Hamid Hamraz, Author; Nathan B. Jacobs, Author; Marco A. Contreras, Author; Chase H. Clark, Author
Publication year: 2019
Article pages: pp 219 - 230
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] deciduous tree
[IGN terms] classification by convolutional neural network
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] 3D located data
[IGN terms] tree crown
[IGN terms] digital surface model
[IGN terms] Pinophyta
[IGN terms] point cloud
Abstract: (author) The purpose of this study was to investigate the use of deep learning for coniferous/deciduous classification of individual trees segmented from airborne LiDAR data. To enable processing by a deep convolutional neural network (CNN), we designed two discrete representations using leaf-off and leaf-on LiDAR data: a digital surface model with four channels (DSM × 4) and a set of four 2D views (4 × 2D). A training dataset of tree crowns was generated via segmentation of tree crowns, followed by co-registration with field data. Potential mislabels due to GPS error or tree leaning were corrected using a statistical ensemble filtering procedure. Because the training data was heavily unbalanced (~8% conifers), we trained an ensemble of CNNs on random balanced sub-samples. Benchmarked against multiple traditional shallow learning methods using manually designed features, the CNNs improved accuracies by up to 14%. The 4 × 2D representation yielded classification accuracies similar to the DSM × 4 representation (~82% coniferous and ~90% deciduous) while converging faster. Further experimentation showed that early/late fusion of the channels in the representations did not affect the accuracies in a significant way. The data augmentation used for CNN training improved the classification accuracies, but more real training instances (especially coniferous) would likely result in much stronger improvements. Leaf-off LiDAR data were the primary source of useful information, which is likely due to the perennial nature of coniferous foliage. LiDAR intensity values also proved to be useful, but normalization yielded no significant improvement. As we observed, a large training dataset may compensate for the lack of a subset of important domain data. Lastly, the classification accuracies of overstory trees (~90%) were more balanced than those of understory trees (~90% deciduous and ~65% coniferous), which is likely due to the incomplete capture of understory tree crowns by airborne LiDAR. In domains like remote sensing and biomedical imaging, where the data contain a large amount of information and are not friendly to the human visual system, human-designed features may become suboptimal. As exemplified by this study, automatic, objective derivation of optimal features via deep learning can improve prediction tasks in such domains.
Record number: A2019-547
Author affiliation: non IGN
Theme: FORESTRY/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.10.011
Online publication date: 03/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.10.011
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94192
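The balanced sub-sampling used to cope with the ~8% conifer class imbalance — each ensemble member trained on an equal-sized random draw from both classes — can be sketched as follows. This is a NumPy illustration with made-up labels, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical heavily unbalanced labels: ~8% conifers (1), ~92% deciduous (0).
labels = (rng.random(1000) < 0.08).astype(int)

def balanced_subsample(labels, rng):
    """Return indices of an equal-sized random sample from each class:
    all minority-class items plus a matching random draw of the majority."""
    minority = np.flatnonzero(labels == 1)
    majority = np.flatnonzero(labels == 0)
    picked = rng.choice(majority, size=len(minority), replace=False)
    return np.concatenate([minority, picked])

# Each of the 5 ensemble members trains on its own balanced subset,
# so together they still see most of the majority-class data.
subsets = [balanced_subsample(labels, rng) for _ in range(5)]
```

Within every subset the two classes are exactly 50/50, which prevents any single member from collapsing to the majority class.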
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019) . - pp 219 - 230 [article]
Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
Matching of TerraSAR-X derived ground control points to optical image patches using deep learning / Tatjana Bürgmann in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
Comparison between convolutional neural networks and random forest for local climate zone classification in mega urban areas using Landsat images / Cheolhee Yoo in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
Context pyramidal network for stereo matching regularized by disparity gradients / Junhua Kang in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery / Yuri Shendryk in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery / Ruchan Dong in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
Accurate detection of built-up areas from high-resolution remote sensing imagery using a fully convolutional network / Yihua Tan in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 10 (October 2019)
A CNN-based subpixel level DSM generation approach via single image super-resolution / Yongjun Zhang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 10 (October 2019)
Mapping dead forest cover using a deep convolutional neural network and digital aerial photography / Jean-Daniel Sylvain in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)
Multi-sensor prediction of Eucalyptus stand volume: A support vector approach / Guilherme Silverio Aquino de Souza in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)