Descriptor
IGN terms > computer science > artificial intelligence > machine learning > deep learning
deep learning
Documents available in this category (647)
Development and evaluation of a deep learning model for real-time ground vehicle semantic segmentation from UAV-based thermal infrared imagery / Mehdi Khoshboresh Masouleh in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
[article]
Title: Development and evaluation of a deep learning model for real-time ground vehicle semantic segmentation from UAV-based thermal infrared imagery
Document type: Article/Communication
Authors: Mehdi Khoshboresh Masouleh; Reza Shah-Hosseini
Year of publication: 2019
Pages: pp 172 - 186
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] UAV-captured imagery
[IGN terms] RGB image
[IGN terms] thermal image
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] motor vehicle
Abstract: (Author) Real-time processing of thermal infrared imagery from unmanned aerial vehicles (UAVs), owing to its high spatial resolution and its sensitivity to the distribution of infrared radiant energy emitted by solid bodies, has important applications such as monitoring and controlling phenomena in a variety of natural settings. One such application is monitoring ground vehicles in cities by detecting or semantically segmenting them in thermal images. In this research, our purpose is to improve the performance of a combined deep learning model by using Gaussian-Bernoulli Restricted Boltzmann Machine (GB-RBM) specifications for segmenting ground vehicles from UAV-based thermal infrared imagery. The proposed model is studied in three steps. First, the model is designed with an encoder-decoder structure, adding features extracted by convolutional layers and a restricted Boltzmann machine into the network. Second, the research goals are implemented on four sets of UAV-based thermal infrared imagery, named NPU_CS_UAV_IR_DATA, collected over streets in China with a FLIR TAU2 thermal infrared sensor in 2017. Finally, the performance of the proposed model is analyzed against five state-of-the-art semantic segmentation models. The results show the proposed model to be robust, with an average precision of approximately 0.97 and an average processing time of about 19.73 s across all datasets.
Record number: A2019-315
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2019.07.009
Online publication date: 25/07/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.07.009
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93341
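The GB-RBM referenced in the abstract models real-valued (here, thermal-intensity) inputs; its hidden-unit activation is a logistic function of the variance-scaled visible input. A minimal sketch of that conditional with hypothetical toy weights, not the authors' implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gbrbm_hidden_probs(v, W, b, sigma):
    """P(h_j = 1 | v) for a Gaussian-Bernoulli RBM:
    sigmoid(b_j + sum_i W_ij * v_i / sigma_i^2)."""
    probs = []
    for j in range(len(b)):
        act = b[j] + sum(W[i][j] * v[i] / (sigma[i] ** 2)
                         for i in range(len(v)))
        probs.append(sigmoid(act))
    return probs

# Toy example: 2 real-valued visible units, 2 binary hidden units.
v = [0.5, -1.0]          # real-valued inputs (e.g. thermal intensities)
W = [[0.2, -0.3],        # visible-to-hidden weights (made up)
     [0.1, 0.4]]
b = [0.0, 0.0]           # hidden biases
sigma = [1.0, 1.0]       # per-visible-unit standard deviations
p = gbrbm_hidden_probs(v, W, b, sigma)
```

Each probability can then be thresholded or sampled to drive the Bernoulli hidden layer that feeds the encoder-decoder features.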
in ISPRS Journal of photogrammetry and remote sensing > vol 155 (September 2019) . - pp 172 - 186 [article]
Learning and adapting robust features for satellite image segmentation on heterogeneous data sets / Sina Ghassemi in IEEE Transactions on geoscience and remote sensing, vol 57 n° 9 (September 2019)
[article]
Title: Learning and adapting robust features for satellite image segmentation on heterogeneous data sets
Document type: Article/Communication
Authors: Sina Ghassemi; Attilio Friandrotti; Gianluca Francini; Enrico Magli
Year of publication: 2019
Pages: pp 6517 - 6529
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] processing chain
[IGN terms] convolutional neural network classification
[IGN terms] cost
[IGN terms] heterogeneous data
[IGN terms] binary image
[IGN terms] satellite image
[IGN terms] robust method
[IGN terms] convolutional neural network
[IGN terms] binary segmentation
[IGN terms] image segmentation
[IGN terms] performance testing
Abstract: (author) This paper addresses the problem of training a deep neural network for satellite image segmentation so that it can be deployed over images whose statistics differ from those used for training. For example, in post-disaster damage assessment, tight time constraints make it impractical to train a network from scratch for each image to be segmented. We propose a convolutional encoder-decoder network able to learn visual representations of increasing semantic level as its depth increases, allowing it to generalize over a wider range of satellite images. Then, we propose two additional methods to improve the network performance over each specific image to be segmented. First, we observe that updating the batch normalization layers' statistics over the target image improves the network performance without human intervention. Second, we show that refining a trained network over a few samples of the image boosts the network performance with minimal human intervention. We evaluate our architecture over three data sets of satellite images, showing state-of-the-art performance in binary segmentation of previously unseen images and competitive performance with respect to more complex techniques in a multiclass segmentation task.
Record number: A2019-341
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2906689
Online publication date: 17/04/2019
Online: https://doi.org/10.1109/TGRS.2019.2906689
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93379
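The first adaptation trick in the abstract, recomputing batch-normalization statistics over the target image, amounts to re-normalizing each feature channel with the target's own mean and variance instead of the training-set running statistics. A minimal per-channel sketch over a plain list of activations; an illustration of the idea, not the authors' code:

```python
def adapt_batchnorm(features, gamma=1.0, beta=0.0, eps=1e-5):
    """Re-normalize one feature channel using statistics computed
    from the target image itself (no human intervention needed)."""
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta
            for x in features]

# Toy activations that a channel might produce on the target image.
channel = [2.0, 4.0, 6.0, 8.0]
normed = adapt_batchnorm(channel)
```

After adaptation the channel has (approximately) zero mean and unit variance under the target distribution, which is what the frozen downstream layers were trained to expect.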
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 9 (September 2019) . - pp 6517 - 6529 [article]
Soil roughness retrieval from TerraSar-X data using neural network and fractal method / Mohammad Maleki in Advances in space research, vol 64 n°5 (1 September 2019)
[article]
Title: Soil roughness retrieval from TerraSar-X data using neural network and fractal method
Document type: Article/Communication
Authors: Mohammad Maleki; Jalal Amini; Claudia Notarnicola
Year of publication: 2019
Pages: pp 1117-1129
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] fractal analysis
[IGN terms] X band
[IGN terms] integral equation
[IGN terms] TerraSAR-X image
[IGN terms] inversion model
[IGN terms] digital terrain model
[IGN terms] multilayer perceptron
[IGN terms] radar polarimetry
[IGN terms] soil roughness
Abstract: (author) The purpose of this study is to estimate surface roughness (rms) using TerraSar-X data in HH polarization. Data are simulated over a wide range of moisture and roughness values using the Integral Equation Model (IEM). The inversion method is based on a Multi-Layer Perceptron neural network and is performed in two steps. In the first step, a neural network is trained on synthetic data; its inputs are the backscattering coefficient and the incidence angle, and its output is the moisture. In the next step, three neural networks are built, with and without prior information on roughness; their inputs are the backscattering coefficient, the moisture estimated in the first step and the incidence angle, and their output is the roughness. The proposed methods are validated on synthetic and real data. Ground roughness measurements are extracted from a Digital Terrain Model (DTM) using the fractal method. The accuracy of moisture from synthetic data is 6.1 vol% without prior information on moisture and roughness. The roughness (rms) accuracy on synthetic datasets is 0.61 cm without prior information, and 0.31 cm for rms lower than 2 cm and 0.38 cm for rms between 2 and 4 cm with prior information on roughness. Analysis of the simulated-data results showed that prior information on roughness strongly improves the accuracy of both roughness and moisture estimates. The accuracy of rms estimates for the TerraSar-X image in HH polarization is about 0.9 cm with no prior information on roughness, improving to 0.57 cm for rms lower than 2 cm and 0.54 cm for rms between 2 and 4 cm with prior information. An overestimation of rms is observed for rms lower than 2 cm and an underestimation for rms higher than 2 cm. The accuracy results on synthetic and real data showed that the X band in HH polarization has very good potential for estimating soil roughness.
Record number: A2019-411
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.asr.2019.04.019
Online publication date: 24/04/2019
Online: https://doi.org/10.1016/j.asr.2019.04.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93527
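The two-step inversion described in the abstract chains one network's output (moisture) into the next network's input vector alongside the backscatter coefficient and incidence angle. A schematic sketch with a tiny single-hidden-layer perceptron and made-up weights; the real networks are trained on IEM-simulated data, so these numbers are purely illustrative:

```python
import math

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: tanh hidden units, linear scalar output."""
    h = [math.tanh(b1[j] + sum(W1[i][j] * x[i] for i in range(len(x))))
         for j in range(len(b1))]
    return b2 + sum(W2[j] * h[j] for j in range(len(h)))

# Hypothetical weights standing in for the two trained networks.
W1a, b1a, W2a, b2a = [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1], [1.0, -0.5], 10.0
W1b, b1b, W2b, b2b = ([[0.2, 0.1], [0.3, -0.1], [0.05, 0.2]],
                      [0.0, 0.0], [0.8, 0.6], 1.5)

sigma0, theta = -8.0, 35.0   # backscattering coefficient (dB), incidence angle (deg)

moisture = mlp([sigma0, theta], W1a, b1a, W2a, b2a)        # step 1: moisture
rms = mlp([sigma0, moisture, theta], W1b, b1b, W2b, b2b)   # step 2: roughness
```

Adding prior roughness information, as the paper does, would simply extend the step-2 input vector with that prior.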
in Advances in space research > vol 64 n°5 (1 September 2019) . - pp 1117-1129 [article]
Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours / David Griffiths in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
[article]
Title: Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours
Document type: Article/Communication
Authors: David Griffiths; Jan Böhm
Year of publication: 2019
Pages: pp 70 - 83
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] deep learning
[IGN terms] buildings
[IGN terms] convolutional neural network classification
[IGN terms] edge detection
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] public data
[IGN terms] data fusion
[IGN terms] RGB image
[IGN terms] United Kingdom
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] rural area
Abstract: (Author) Robust and reliable automatic building detection and segmentation from aerial images/point clouds has been a prominent field of research in remote sensing, computer vision and point cloud processing for a number of decades. One of the largest issues associated with deep learning methods is the high quantity of data required for training. To help address this, we present a method to improve public GIS building footprint labels by using Morphological Geodesic Active Contours (MorphGACs). We demonstrate that by improving the quality of building footprint labels for detection and semantic segmentation, more robust and reliable models can be obtained. We evaluate these methods over a large UK-based dataset of 24556 images containing 169835 building instances, training several Mask/Faster R-CNN and RetinaNet deep convolutional neural networks. Networks are supplied with both RGB and fused RGB-lidar data, and we offer quantitative analysis of the benefits of including depth data for building segmentation. By employing both methods we achieve a detection accuracy of 0.92 (mAP@0.5) and a segmentation f1 score of 0.94 over 4911 test images ranging from urban to rural scenes.
Record number: A2019-265
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.05.013
Online publication date: 06/06/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.05.013
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93079
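The detection metric reported here, mAP@0.5, counts a predicted building box as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of that matching criterion, with boxes as (x1, y1, x2, y2) tuples and toy coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two half-overlapping 10x10 boxes: IoU = 50 / 150 = 1/3, below threshold.
pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
match = iou(pred, truth) >= 0.5   # the mAP@0.5 matching criterion
```

mAP then averages precision over recall levels, using this per-box test to decide true versus false positives.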
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 70 - 83 [article]
Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network / Chunping Qiu in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
[article]
Title: Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network
Document type: Article/Communication
Authors: Chunping Qiu; Lichao Mou; Michael Schmitt; Xiao Xiang Zhu
Year of publication: 2019
Pages: pp 151 - 162
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] urban climate
[IGN terms] multitemporal image
[IGN terms] optical image
[IGN terms] Sentinel-MSI image
[IGN terms] land cover
[IGN terms] convolutional neural network
[IGN terms] recurrent neural network
[IGN terms] residual
[IGN terms] city
Abstract: (Author) The local climate zone (LCZ) scheme was originally proposed to provide an interdisciplinary taxonomy for urban heat island (UHI) studies. In recent years, the scheme has also become a starting point for the development of higher-level products, as the LCZ classes can help provide a generalized understanding of urban structures and land uses. LCZ mapping can therefore theoretically aid in fostering a better understanding of spatio-temporal dynamics of cities on a global scale. However, reliable LCZ maps are not yet available globally. As a first step toward automatic LCZ mapping, this work focuses on LCZ-derived land cover classification using multi-seasonal Sentinel-2 images. We propose a recurrent residual network (Re-ResNet) architecture that is capable of learning a joint spectral-spatial-temporal feature representation within a unitized framework. To this end, a residual convolutional neural network (ResNet) and a recurrent neural network (RNN) are combined into one end-to-end architecture. The ResNet is able to learn rich spectral-spatial feature representations from single-seasonal imagery, while the RNN can effectively analyze temporal dependencies of multi-seasonal imagery. Cross validations were carried out on a diverse dataset covering seven distinct European cities, and a quantitative analysis of the experimental results revealed that the combined use of the multi-temporal information and Re-ResNet results in an improvement of approximately 7 percentage points in overall accuracy. The proposed framework has the potential to produce consistent-quality urban land cover and LCZ maps on a large scale, to support scientific progress in fields such as urban geography and urban climatology.
Record number: A2019-268
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.05.004
Online publication date: 14/06/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.05.004
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93085
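The Re-ResNet described above couples a per-season convolutional feature extractor with a recurrent unit that accumulates a state over the seasonal sequence. The temporal half can be sketched as a plain Elman-style recurrence over per-season feature vectors; toy weights and dimensions, not the published architecture:

```python
import math

def recur_over_seasons(season_features, Wx, Wh, b):
    """Elman-style RNN: h_t = tanh(Wx·x_t + Wh·h_{t-1} + b).
    season_features holds one feature vector per season
    (stand-ins for the per-season ResNet outputs)."""
    dim = len(b)
    h = [0.0] * dim
    for x in season_features:
        h = [math.tanh(b[j]
                       + sum(Wx[i][j] * x[i] for i in range(len(x)))
                       + sum(Wh[i][j] * h[i] for i in range(dim)))
             for j in range(dim)]
    return h   # final state summarizes the whole seasonal sequence

# Four seasonal 2-D feature vectors (toy values).
seasons = [[0.1, 0.2], [0.4, -0.1], [0.3, 0.3], [-0.2, 0.5]]
Wx = [[0.5, 0.1], [0.2, 0.4]]
Wh = [[0.3, 0.0], [0.0, 0.3]]
b = [0.0, 0.0]
state = recur_over_seasons(seasons, Wx, Wh, b)
```

A classifier over the final state then predicts the LCZ-derived land cover class, which is how the temporal dependencies across seasons enter the decision.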
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 151 - 162 [article]
Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Is deep learning the new agent for map generalization? / Guillaume Touya in International journal of cartography, vol 5 n° 2-3 (July - November 2019)
Sea level prediction in the Yellow Sea from satellite altimetry with a combined least squares-neural network approach / Jian Zhao in Marine geodesy, vol 42 n° 4 (July 2019)
Using direct transformation approach as an alternative technique to fuse global digital elevation models with GPS/levelling measurements in Egypt / Hossam Talaat Elshambaky in Journal of applied geodesy, vol 13 n° 3 (July 2019)
Using LiDAR-modified topographic wetness index, terrain attributes with leaf area index to improve a single-tree growth model in south-eastern Finland / Cheikh Mohamedou in Forestry, an international journal of forest research, vol 92 n° 3 (July 2019)
Comprehensive evaluation of soil moisture retrieval models under different crop cover types using C-band synthetic aperture radar data / P. Kumar in Geocarto international, vol 34 n° 9 (15/06/2019)
Automating "mobile mapping" data processing: extraction of linear and point features / Loïc Elsholz in XYZ, n° 159 (June 2019)
CNN-based dense image matching for aerial remote sensing images / Shunping Ji in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
RoofN3D: a database for 3D building reconstruction with deep learning / Andreas Wichmann in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)