Descriptor
Termes IGN > computer science > artificial intelligence > machine learning > deep learning
deep learning
Documents available in this category (647)
Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery / Yuri Shendryk in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
[article]
Title: Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery
Document type: Article/Communication
Authors: Yuri Shendryk; Yannik Rist; Catherine Ticehurst; Peter Thorburn
Year of publication: 2019
Pagination: pp 124 - 136
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] Amazonia
[IGN terms] deep learning
[IGN terms] Australia
[IGN terms] classification by convolutional neural network
[IGN terms] shadow detection
[IGN terms] state of the art
[IGN terms] high-resolution imagery
[IGN terms] PlanetScope imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] cloud
[IGN terms] land cover
[IGN terms] humid tropical zone
Abstract: (Author) With the increasing availability of high-resolution satellite imagery it is important to improve the efficiency and accuracy of satellite image indexing, retrieval and classification. Furthermore, there is a need for utilizing all available satellite imagery in identifying general land cover types and monitoring their changes through time irrespective of their spatial, spectral, temporal and radiometric resolutions. Therefore, in this study, we developed deep learning models able to efficiently and accurately classify cloud, shadow and land cover scenes in different high-resolution (
Record number: A2019-494
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.08.018
Online publication date: 17/09/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.08.018
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93727
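The index terms above include the Normalized Difference Vegetation Index (NDVI). As a minimal illustration of that standard index (not code from the paper; the function name and the epsilon guard against division by zero are our own), NDVI is computed per pixel from near-infrared and red reflectance:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in the NIR band, so NDVI is high;
# bare soil or water gives values near or below zero.
print(ndvi(0.5, 0.1))  # vegetated pixel
```

NDVI ranges over [-1, 1], which is why thresholds such as "NDVI > 0.7" can be compared across sensors.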
in ISPRS Journal of photogrammetry and remote sensing > vol 157 (November 2019) . - pp 124 - 136 [article]
Copies (3)
Barcode | Call number | Support | Location | Section | Availability
081-2019111 | RAB | Journal | Documentation centre | In reserve L003 | Available
081-2019113 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019112 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery / Ruchan Dong in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
[article]
Title: Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery
Document type: Article/Communication
Authors: Ruchan Dong; Dazhuan Xu; Jin Zhao; Licheng Jiao; et al.
Year of publication: 2019
Pagination: pp 8534 - 8545
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] classification by neural network
[IGN terms] object detection
[IGN terms] target detection
[IGN terms] very high-resolution imagery
[IGN terms] regression
[IGN terms] region of interest
Abstract: (author) Small target detection is a challenging task in very high-resolution (VHR) optical remote sensing imagery, because small targets occupy a minuscule number of pixels and are easily disturbed by backgrounds or occluded by other objects. Although current convolutional neural network (CNN)-based approaches perform well when detecting normal objects, they are barely suitable for detecting small ones. Two practical problems stand in their way. First, current CNN-based approaches are not specifically designed for the minuscule size of small targets (~15 or ~10 pixels in extent). Second, no well-established data sets include labeled small targets, and establishing one from scratch is labor-intensive and time-consuming. To address these two issues, we propose an approach that combines Sig-NMS-based Faster R-CNN with transfer learning. Sig-NMS replaces traditional non-maximum suppression (NMS) in the region proposal network stage and decreases the possibility of missing small targets. Transfer learning can effectively label remote sensing images by automatically annotating both object classes and object locations. We conduct an experiment on three data sets of VHR optical remote sensing images, RSOD, LEVIR, and NWPU VHR-10, to validate our approach. The results demonstrate that the proposed approach can effectively detect small targets of about 10 × 10 pixels in VHR optical remote sensing images and automatically label small targets as well. In addition, our method presents better mean average precisions than other state-of-the-art methods: 1.5% higher on the RSOD data set, 17.8% higher on the LEVIR data set, and 3.8% higher on NWPU VHR-10.
Record number: A2019-595
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2921396
Online publication date: 15/07/2019
Online: https://doi.org/10.1109/TGRS.2019.2921396
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94587
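The abstract says only that Sig-NMS replaces hard non-maximum suppression so that small targets are less likely to be discarded; the exact formulation is in the paper, not in this record. The sketch below is therefore a hypothetical soft-suppression variant in that spirit: scores of overlapping proposals are decayed by a sigmoid of their IoU with the selected box instead of being zeroed at a hard threshold. The helper names and the constants `k` and `score_thresh` are assumptions, not values from the paper.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def sigmoid_nms(boxes, scores, score_thresh=0.2, k=10.0):
    """Soft suppression: decay overlapping scores by a sigmoid of IoU
    (hypothetical stand-in for the paper's Sig-NMS)."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        i = int(np.argmax(scores))
        best_box, best_score = boxes.pop(i), scores.pop(i)
        if best_score < score_thresh:
            break
        keep.append((best_box, best_score))
        # High IoU with the kept box -> strong decay; low IoU -> score preserved.
        scores = [s * (1.0 - 1.0 / (1.0 + np.exp(-k * (iou(best_box, b) - 0.5))))
                  for b, s in zip(boxes, scores)]
    return keep

# A near-duplicate of the first box is decayed away; the distant box survives.
kept = sigmoid_nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
                   [0.9, 0.8, 0.7])
print([b for b, _ in kept])
```

The soft decay is what distinguishes this family from classic NMS: a small target sitting next to a large confident detection keeps a reduced, non-zero score rather than being deleted outright.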
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 11 (November 2019) . - pp 8534 - 8545 [article]
Combining machine learning and compact polarimetry for estimating soil moisture from C-Band SAR data / Emanuele Santi in Remote sensing, Vol 11 n° 20 (October-2 2019)
[article]
Title: Combining machine learning and compact polarimetry for estimating soil moisture from C-Band SAR data
Document type: Article/Communication
Authors: Emanuele Santi; Mohammed Dabboor; Simone Pettinato; Simonetta Paloscia
Year of publication: 2019
Pagination: 18 p.
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] machine learning
[IGN terms] C-band
[IGN terms] feature extraction
[IGN terms] soil moisture
[IGN terms] moiré radar image
[IGN terms] Radarsat imagery
[IGN terms] Manitoba (Canada)
[IGN terms] polarimetry
[IGN terms] polarization
[IGN terms] artificial neural network
[IGN terms] time series
[IGN terms] cultivated area
Abstract: (author) This research aimed at exploiting the joint use of machine learning and polarimetry for improving the retrieval of surface soil moisture content (SMC) from synthetic aperture radar (SAR) acquisitions at C-band. The study was conducted on two agricultural areas in Canada, for which a series of RADARSAT-2 (RS2) images were available along with direct measurements of SMC from in situ stations. The analysis confirmed the sensitivity of RS2 backscattering (σ°) to SMC. The comparison of SMC with the compact polarimetry (CP) parameters, computed from the RS2 acquisitions by the CP data simulator, pointed out that some CP parameters had a sensitivity to SMC equal to or better than σ°, with correlation coefficients up to R ≈ 0.4. Based on these results, the potential of machine learning (ML) for SMC retrieval was exploited by implementing an artificial neural network (ANN) algorithm and testing it on the available data. The algorithm was implemented using several combinations of σ° and CP parameters. Validation results of the algorithm with in situ observations confirmed the promising capabilities of ML techniques for SMC monitoring. Furthermore, the results pointed out the potential of CP in improving the SMC retrieval accuracy, especially when used in combination with linearly polarized σ°. Depending on the considered input combination, the ANN algorithm was able to estimate SMC with a root mean square error (RMSE) between 3% and 7% of SMC and R between 0.7 and 0.9.
Record number: A2019-555
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/rs11202451
Online publication date: 22/10/2019
Online: https://doi.org/10.3390/rs11202451
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94210
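The record above describes a regression setup: combinations of σ° and CP parameters as inputs, SMC as the target, accuracy reported as RMSE against in situ data. As a minimal sketch of that input-output structure (synthetic data, invented coefficients, and a plain least-squares fit standing in for the paper's neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set (hypothetical numbers): each sample holds a linearly
# polarized backscatter value sigma0 [dB] plus one compact-polarimetry
# parameter; the target is surface soil moisture content in vol.%.
n = 200
sigma0 = rng.uniform(-20.0, -5.0, n)   # C-band sigma0, dB
cp_param = rng.uniform(0.0, 1.0, n)    # a CP parameter, unitless
smc = 2.0 * sigma0 + 10.0 * cp_param + 45.0 + rng.normal(0.0, 1.0, n)

# The paper trains an ANN on such input combinations; here a linear
# least-squares fit illustrates the same retrieval-and-validation loop.
X = np.column_stack([sigma0, cp_param, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, smc, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - smc) ** 2)))
print(f"RMSE = {rmse:.2f} vol.%")
```

On real data the relation between backscatter and SMC is nonlinear (roughness, vegetation), which is why the authors use an ANN rather than a linear model; the validation metric, however, is the same RMSE computed here.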
in Remote sensing > Vol 11 n° 20 (October-2 2019) . - 18 p. [article]
Comparative analysis of the accuracy of surface soil moisture estimation from the C- and L-bands / Mohammad El Hajj in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
[article]
Title: Comparative analysis of the accuracy of surface soil moisture estimation from the C- and L-bands
Document type: Article/Communication
Authors: Mohammad El Hajj; Nicolas Baghdadi; Mehrez Zribi
Year of publication: 2019
Pagination: 13 p.
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] comparative analysis
[IGN terms] C-band
[IGN terms] L-band
[IGN terms] soil moisture
[IGN terms] ALOS-PALSAR imagery
[IGN terms] moiré radar image
[IGN terms] Sentinel-MSI imagery
[IGN terms] Sentinel-SAR imagery
[IGN terms] Normalized Difference Water Index
[IGN terms] artificial neural network
[IGN terms] cultivated area
Abstract: (author) Surface soil moisture (SSM) estimation is of great importance in several areas, such as hydrology, agriculture and risk assessment. C-band SAR (synthetic aperture radar) data have been widely used to estimate SSM, whereas few studies have been performed using L-band SAR due to the low availability of L-band SAR data. In this context, the objective of the present paper is to compare the SSM estimation potentials of the C- (Sentinel-1) and L-bands (PALSAR) for wheat and grassland plots. The inversion approach developed in this study uses neural networks to invert the SAR signal and estimate the SSM. For each radar frequency, the developed neural networks were trained using the following as an input vector: SAR incidence angle, SAR polarization (VV for the C-band and HH for the L-band), and NDVI from optical images. Artificial neural networks (ANNs) were developed and validated using synthetic and real databases. The results showed that the L-band provided slightly less accurate SSM estimates than the C-band. Moreover, the results showed that the accuracies of the SSM estimates for both frequencies strongly depended on the soil roughness (Hrms) and SSM values. From the synthetic database at SSM values less than 25 vol.%, the ANNs underestimated the SSM for Hrms values less than 1.5 cm and overestimated the SSM for Hrms values greater than 1.5 cm. In addition, the ANNs underestimated the SSM value regardless of the Hrms value when the SSM value was greater than 25 vol.%. A root mean square error (RMSE) analysis of the SSM estimates showed that the highest RMSE values were observed for the L-band regardless of the SSM value, and high RMSE values were observed for the C-band only in very wet soil conditions (SSM > 25 vol.%). From the real database at NDVI values less than 0.7, the RMSE of the SSM estimates was 4.6 vol.% for the C-band and 5.3 vol.% for the L-band.
Most importantly, the L-band enabled the estimation of the SSM under a well-developed vegetation cover (NDVI > 0.7) with an RMSE of 6.7 vol.%, whereas the C-band SAR signal became completely attenuated for some crops when the NDVI value was greater than 0.7, and thus the estimation of SSM was impossible using the C-band.
Record number: A2019-473
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.jag.2019.05.021
Online publication date: 29/06/2019
Online: https://doi.org/10.1016/j.jag.2019.05.021
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93634
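One practical consequence of the abstract's finding (the C-band signal is fully attenuated for NDVI > 0.7, while the L-band still penetrates dense vegetation) can be expressed as a per-pixel band-selection rule. The 0.7 threshold comes from the abstract; the function name and everything else is an illustrative assumption:

```python
import numpy as np

def usable_band(ndvi, c_limit=0.7):
    """Pick which SAR frequency can support SSM retrieval for each pixel,
    following the abstract's finding that C-band is unusable under dense
    vegetation (NDVI above c_limit) while L-band still penetrates."""
    ndvi = np.asarray(ndvi, dtype=float)
    return np.where(ndvi > c_limit, "L-band only", "C- or L-band")

# Sparse cover: either band works; dense cover: only L-band remains usable.
print(usable_band([0.3, 0.75]))
```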
in International journal of applied Earth observation and geoinformation > vol 82 (October 2019) . - 13 p. [article]
Mapping dead forest cover using a deep convolutional neural network and digital aerial photography / Jean-Daniel Sylvain in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)
[article]
Title: Mapping dead forest cover using a deep convolutional neural network and digital aerial photography
Document type: Article/Communication
Authors: Jean-Daniel Sylvain; Guillaume Drolet; Nicolas Brown
Year of publication: 2019
Pagination: pp 14 - 26
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] dead tree
[IGN terms] forest database
[IGN terms] classification by convolutional neural network
[IGN terms] forest cover
[IGN terms] broadleaf tree
[IGN terms] boreal forest
[IGN terms] aerial imagery
[IGN terms] orthoimage
[IGN terms] mixed stand
[IGN terms] Pinophyta
[IGN terms] Québec (Canada)
[IGN terms] forest health
Abstract: (Author) Tree mortality is an important forest ecosystem variable having uses in many applications such as forest health assessment, modelling stand dynamics and productivity, or planning wood harvesting operations. Because tree mortality is a spatially and temporally erratic process, rates and spatial patterns of tree mortality are difficult to estimate with traditional inventory methods. Remote sensing imagery has the potential to detect tree mortality at the spatial scales required for accurately characterizing this process (e.g., landscape, region). Many efforts have been made in this direction, mostly using pixel- or object-based methods. In this study, we explored the potential of deep convolutional neural networks (CNNs) to detect and map tree health status and functional type over entire regions. To do this, we built a database of around 290,000 photo-interpreted trees that served to extract and label image windows from 20 cm-resolution digital aerial images, for use in CNN training and evaluation. In this process, we also evaluated the effect of window size and spectral channel selection on classification accuracy, and we assessed whether multiple realizations of a CNN, generated using different weight initializations, can be aggregated to provide more robust predictions. Finally, we extended our model with 5 additional classes to account for the diversity of land covers found in our study area. When predicting tree health status only (live or dead), we obtained test accuracies of up to 94%, and up to 86% when predicting functional type only (broadleaf or needleleaf). Channel selection had a limited impact on overall classification accuracy, while larger window sizes increased the ability of the CNNs to predict plant functional type. The aggregation of multiple realizations of a CNN allowed us to avoid the selection of suboptimal models and helped to remove much of the speckle effect when predicting on new aerial images.
Test accuracies of plant functional type and health status were not affected in the extended model and were all above 95% for the 5 extra classes. Our results demonstrate the robustness of the CNN to between-scene variations in aerial photography and also suggest that this approach can be applied at an operational level to map tree mortality across extensive territories.
Record number: A2019-316
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.07.010
Online publication date: 02/08/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.07.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93353
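The abstract reports that aggregating multiple realizations of a CNN (trained from different weight initializations) removes much of the speckle in the predicted maps. A common way to do this, shown here as an assumed sketch rather than the authors' exact procedure, is to average the per-class probability maps of the realizations and then take the per-pixel argmax:

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Aggregate per-class probability maps from several CNN realizations
    by averaging, then take the argmax class per pixel."""
    stacked = np.stack(prob_maps, axis=0)   # (models, H, W, classes)
    mean_prob = stacked.mean(axis=0)        # (H, W, classes)
    return mean_prob.argmax(axis=-1)        # (H, W) integer class labels

# Three hypothetical 2x2 probability maps over 2 classes (live=0, dead=1).
m1 = np.array([[[0.9, 0.1], [0.4, 0.6]], [[0.2, 0.8], [0.7, 0.3]]])
m2 = np.array([[[0.8, 0.2], [0.6, 0.4]], [[0.1, 0.9], [0.6, 0.4]]])
m3 = np.array([[[0.3, 0.7], [0.3, 0.7]], [[0.3, 0.7], [0.8, 0.2]]])
labels = ensemble_predict([m1, m2, m3])
print(labels)
```

Averaging probabilities before the argmax lets a pixel where the models disagree be settled by their combined confidence, which is what suppresses the isolated-pixel "speckle" a single realization produces.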
in ISPRS Journal of photogrammetry and remote sensing > vol 156 (October 2019) . - pp 14 - 26 [article]
Copies (3)
Barcode | Call number | Support | Location | Section | Availability
081-2019101 | RAB | Journal | Documentation centre | In reserve L003 | Available
081-2019103 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019102 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Scene context-driven vehicle detection in high-resolution aerial images / Chao Tao in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
Spatially constrained regionalization with multilayer perceptron / Michael Govorov in Transactions in GIS, Vol 23 n° 5 (October 2019)
Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia / Neil Flood in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
Addressing overfitting on point cloud classification using Atrous XCRF / Hasan Asy’ari Arief in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
Detecting and mapping traffic signs from Google Street View images using deep learning and GIS / Andrew Campbell in Computers, Environment and Urban Systems, vol 77 (September 2019)
Development and evaluation of a deep learning model for real-time ground vehicle semantic segmentation from UAV-based thermal infrared imagery / Mehdi Khoshboresh Masouleh in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
Learning and adapting robust features for satellite image segmentation on heterogeneous data sets / Sina Ghassemi in IEEE Transactions on geoscience and remote sensing, vol 57 n° 9 (September 2019)
Soil roughness retrieval from TerraSar-X data using neural network and fractal method / Mohammad Maleki in Advances in space research, vol 64 n° 5 (1 September 2019)
Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours / David Griffiths in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network / Chunping Qiu in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Is deep learning the new agent for map generalization? / Guillaume Touya in International journal of cartography, vol 5 n° 2-3 (July - November 2019)
Sea level prediction in the Yellow Sea from satellite altimetry with a combined least squares-neural network approach / Jian Zhao in Marine geodesy, vol 42 n° 4 (July 2019)
Using direct transformation approach as an alternative technique to fuse global digital elevation models with GPS/levelling measurements in Egypt / Hossam Talaat Elshambaky in Journal of applied geodesy, vol 13 n° 3 (July 2019)
Using LiDAR-modified topographic wetness index, terrain attributes with leaf area index to improve a single-tree growth model in south-eastern Finland / Cheikh Mohamedou in Forestry, an international journal of forest research, vol 92 n° 3 (July 2019)
Comprehensive evaluation of soil moisture retrieval models under different crop cover types using C-band synthetic aperture radar data / P. Kumar in Geocarto international, vol 34 n° 9 (15/06/2019)
Automatisation du traitement de données "mobile mapping" : extraction d'éléments linéaires et ponctuels / Loïc Elsholz in XYZ, n° 159 (June 2019)
CNN-based dense image matching for aerial remote sensing images / Shunping Ji in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
RoofN3D: a database for 3D building reconstruction with deep learning / Andreas Wichmann in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM) / Wenzhi Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
BIM, SIG et recherche dans le secteur privé / Anonymous in Géomatique expert, n° 127 (April - May 2019)
Journées de la recherche 2019 / Anonymous in Géomatique expert, n° 127 (April - May 2019)
Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
Vehicle detection in aerial images / Michael Ying Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 4 (April 2019)
Deep mapping gentrification in a large Canadian city using deep learning and Google Street View / Lazar Ilic in Plos one, vol 14 n° 3 (March 2019)
DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn / Roberto Interdonato in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019)
Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019)
Semantic understanding of scenes through the ADE20K dataset / Bolei Zhou in International journal of computer vision, vol 127 n° 3 (March 2019)
Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery / Lichao Mou in IEEE Transactions on geoscience and remote sensing, vol 57 n° 2 (February 2019)
Advanced Remote Sensing Technology for Synthetic Aperture Radar Applications, Tsunami Disasters, and Infrastructure / Maged Marghany (2019)
Analyse d’images par méthode de Deep Learning appliquée au contexte routier en conditions météorologiques dégradées / Khouloud Dahmane (2019)
Challenges in grassland mowing event detection with multimodal Sentinel images / Anatol Garioud (2019)
Challenging deep image descriptors for retrieval in heterogeneous iconographic collections / Dimitri Gominski (2019)
Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
DataPink, l'IA au service de l'information géographique / Anonymous in Géomatique expert, n° 126 (January - February 2019)
Détection de fenêtres dans un nuage de points de façade et positionnement semi-automatique dans un logiciel BIM / Julie Thierry (2019)
Enrichissement d'orthophotographie par des données OpenStreetMap pour l'apprentissage machine / Gauthier Fillières-Riveau (2019)
Estimation de profondeur à partir d'images monoculaires par apprentissage profond / Michel Moukari (2019)
Evaluating SAR-optical sensor fusion for aboveground biomass estimation in a Brazilian tropical forest / Aline Bernarda Debastiani in Annals of forest research, vol 62 n° 1 (January - June 2019)