Descriptor
IGN terms > computer science > artificial intelligence > machine learning > deep learning > artificial neural network > convolutional neural network
convolutional neural network
Documents available in this category (81)
Addressing overfitting on point cloud classification using Atrous XCRF / Hasan Asy’ari Arief in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
Title: Addressing overfitting on point cloud classification using Atrous XCRF
Document type: Article/Communication
Authors: Hasan Asy’ari Arief; Ulf Geir Indahl; Geir-Harald Strand; Håvard Tveite
Publication year: 2019
Pages: pp 90 - 101
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Laser scanning
[IGN terms] conditional random field
[IGN terms] automatic classification
[IGN terms] convolutional neural network
[IGN terms] deep neural network
[IGN terms] point cloud
Abstract: (Author) Advances in techniques for automated classification of point cloud data introduce great opportunities for many new and existing applications. However, with a limited number of labelled points, automated classification by a machine learning model is prone to overfitting and poor generalization. The present paper addresses this problem by inducing controlled noise (on a trained model) generated by invoking conditional random field similarity penalties using nearby features. The method is called Atrous XCRF and works by forcing a trained model to respect the similarity penalties provided by unlabeled data. In a benchmark study carried out using the ISPRS 3D labeling dataset, our technique achieves 85.0% in terms of overall accuracy and 71.1% in terms of F1 score. The result is on par with the current best model for the benchmark dataset and has the highest F1 score. Additionally, transfer learning using the Bergen 2018 dataset, without model retraining, was also performed. Even though our proposal provides a consistent 3% improvement in terms of accuracy, more work still needs to be done to alleviate the generalization problem in the domain adaptation and transfer learning fields.
Record number: A2019-312
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2019.07.002
Online publication date: 11/07/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.07.002
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93337
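The similarity-penalty idea described in this abstract can be illustrated with a small sketch: a point's class probabilities are pulled toward those of feature-similar neighbours, as a CRF pairwise term would encourage. This is a toy numpy illustration under our own assumptions (dense Gaussian similarity, mean-field-style blending), not the authors' Atrous XCRF implementation.

```python
import numpy as np

def crf_smooth(probs, feats, n_iter=5, sigma=1.0):
    """Mean-field-style smoothing: each point's class distribution is
    blended with a feature-similarity-weighted average of the others'."""
    # Pairwise Gaussian similarity in feature space (dense; fine for a toy).
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)          # no self-similarity
    w /= w.sum(1, keepdims=True)      # normalise neighbour weights
    q = probs.copy()
    for _ in range(n_iter):
        q = 0.5 * q + 0.5 * (w @ q)   # blend with neighbour consensus
        q /= q.sum(1, keepdims=True)  # keep rows as probabilities
    return q

# Toy data: two tight feature clusters; point 2 has a noisy prediction.
feats = np.array([[0.0], [0.1], [0.05], [5.0], [5.1]])
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7],
                  [0.1, 0.9], [0.2, 0.8]])
q = crf_smooth(probs, feats)
```

After smoothing, the outlier prediction at point 2 moves toward the consensus of its feature cluster while the rows remain valid distributions.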
in ISPRS Journal of photogrammetry and remote sensing > vol 155 (September 2019) . - pp 90 - 101
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019091 | RAB | Journal | Documentation centre | Storage L003 | Available
081-2019093 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019092 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Learning and adapting robust features for satellite image segmentation on heterogeneous data sets / Sina Ghassemi in IEEE Transactions on geoscience and remote sensing, vol 57 n° 9 (September 2019)
Title: Learning and adapting robust features for satellite image segmentation on heterogeneous data sets
Document type: Article/Communication
Authors: Sina Ghassemi; Attilio Friandrotti; Gianluca Francini; Enrico Magli
Publication year: 2019
Pages: pp 6517 - 6529
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] processing pipeline
[IGN terms] convolutional neural network classification
[IGN terms] cost
[IGN terms] heterogeneous data
[IGN terms] binary image
[IGN terms] satellite image
[IGN terms] robust method
[IGN terms] convolutional neural network
[IGN terms] binary segmentation
[IGN terms] image segmentation
[IGN terms] performance testing
Abstract: (Author) This paper addresses the problem of training a deep neural network for satellite image segmentation so that it can be deployed over images whose statistics differ from those used for training. For example, in post-disaster damage assessment, the tight time constraints make it impractical to train a network from scratch for each image to be segmented. We propose a convolutional encoder–decoder network able to learn visual representations of increasing semantic level as its depth increases, allowing it to generalize over a wider range of satellite images. Then, we propose two additional methods to improve the network performance over each specific image to be segmented. First, we observe that updating the batch normalization layers’ statistics over the target image improves the network performance without human intervention. Second, we show that refining a trained network over a few samples of the image boosts the network performance with minimal human intervention. We evaluate our architecture over three data sets of satellite images, showing state-of-the-art performance in binary segmentation of previously unseen images and competitive performance with respect to more complex techniques in a multiclass segmentation task.
Record number: A2019-341
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2906689
Online publication date: 17/04/2019
Online: https://doi.org/10.1109/TGRS.2019.2906689
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93379
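The paper's first adaptation step, recomputing batch normalization statistics on the target image with no labels or gradient steps, can be sketched in a few lines. The class and names below are ours, and a real network would apply this per BN layer over feature maps; this is a one-layer numpy illustration.

```python
import numpy as np

class BatchNorm:
    """Minimal inference-time batch norm over feature channels."""
    def __init__(self, mean, var, gamma, beta, eps=1e-5):
        self.mean, self.var = mean, var
        self.gamma, self.beta, self.eps = gamma, beta, eps

    def __call__(self, x):  # x: (n_pixels, n_channels)
        xhat = (x - self.mean) / np.sqrt(self.var + self.eps)
        return self.gamma * xhat + self.beta

    def adapt(self, x):
        """Replace source-domain statistics with the target image's own
        per-channel statistics; weights (gamma, beta) stay untouched."""
        self.mean, self.var = x.mean(0), x.var(0)

rng = np.random.default_rng(0)
bn = BatchNorm(mean=np.zeros(3), var=np.ones(3),   # stats from training domain
               gamma=np.ones(3), beta=np.zeros(3))
target = rng.normal(loc=4.0, scale=2.0, size=(1000, 3))  # shifted target domain
before = bn(target)   # mis-normalised: statistics no longer match
bn.adapt(target)
after = bn(target)    # re-centred and re-scaled for the target image
```

Before adaptation the layer's output keeps the target domain's shift; after adaptation it is approximately zero-mean, unit-variance again, which is what lets the downstream layers see inputs in the range they were trained on.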
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 9 (September 2019) . - pp 6517 - 6529
Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network / Chunping Qiu in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Title: Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network
Document type: Article/Communication
Authors: Chunping Qiu; Lichao Mou; Michael Schmitt; Xiao Xiang Zhu
Publication year: 2019
Pages: pp 151 - 162
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] urban climate
[IGN terms] multitemporal image
[IGN terms] optical image
[IGN terms] Sentinel-MSI image
[IGN terms] land cover
[IGN terms] convolutional neural network
[IGN terms] recurrent neural network
[IGN terms] residual
[IGN terms] city
Abstract: (Author) The local climate zone (LCZ) scheme was originally proposed to provide an interdisciplinary taxonomy for urban heat island (UHI) studies. In recent years, the scheme has also become a starting point for the development of higher-level products, as the LCZ classes can help provide a generalized understanding of urban structures and land uses. LCZ mapping can therefore theoretically aid in fostering a better understanding of spatio-temporal dynamics of cities on a global scale. However, reliable LCZ maps are not yet available globally. As a first step toward automatic LCZ mapping, this work focuses on LCZ-derived land cover classification, using multi-seasonal Sentinel-2 images. We propose a recurrent residual network (Re-ResNet) architecture that is capable of learning a joint spectral-spatial-temporal feature representation within a unitized framework. To this end, a residual convolutional neural network (ResNet) and a recurrent neural network (RNN) are combined into one end-to-end architecture. The ResNet is able to learn rich spectral-spatial feature representations from single-seasonal imagery, while the RNN can effectively analyze temporal dependencies of multi-seasonal imagery. Cross validations were carried out on a diverse dataset covering seven distinct European cities, and a quantitative analysis of the experimental results revealed that the combined use of the multi-temporal information and Re-ResNet results in an improvement of approximately 7 percentage points in overall accuracy. The proposed framework has the potential to produce consistent-quality urban land cover and LCZ maps on a large scale, to support scientific progress in fields such as urban geography and urban climatology.
Record number: A2019-268
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.05.004
Online publication date: 14/06/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.05.004
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93085
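The ResNet-plus-RNN fusion described in this abstract can be sketched minimally: one feature vector per season (standing in for the ResNet outputs) is consumed by an Elman-style recurrence whose final hidden state would feed a classifier. All shapes and names here are illustrative assumptions, not the Re-ResNet architecture itself.

```python
import numpy as np

def rnn_fuse(seasonal_feats, w_xh, w_hh, b_h):
    """Elman-style recurrence over the seasonal axis: the hidden state
    accumulates evidence from each season's CNN feature vector."""
    h = np.zeros(w_hh.shape[0])
    for x in seasonal_feats:          # one feature vector per season
        h = np.tanh(w_xh @ x + w_hh @ h + b_h)
    return h                          # joint multi-seasonal representation

rng = np.random.default_rng(1)
n_seasons, feat_dim, hidden = 4, 8, 5
feats = rng.normal(size=(n_seasons, feat_dim))   # stand-in for ResNet outputs
w_xh = rng.normal(scale=0.3, size=(hidden, feat_dim))
w_hh = rng.normal(scale=0.3, size=(hidden, hidden))
h = rnn_fuse(feats, w_xh, w_hh, np.zeros(hidden))
```

Because the same recurrence weights are shared across seasons, the network can exploit temporal dependencies (e.g. vegetation phenology) that a single-season CNN cannot see.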
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 151 - 162
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019081 | RAB | Journal | Documentation centre | Storage L003 | Available
081-2019083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
CNN-based dense image matching for aerial remote sensing images / Shunping Ji in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
Title: CNN-based dense image matching for aerial remote sensing images
Document type: Article/Communication
Authors: Shunping Ji; Jin Liu; Meng Lu
Publication year: 2019
Pages: pp 415 - 424
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] dense matching
[IGN terms] deep learning
[IGN terms] China
[IGN terms] stereo pair
[IGN terms] aerial image
[IGN terms] Munich
[IGN terms] convolutional neural network
[IGN terms] Stuttgart
[IGN terms] city
[IGN terms] urban area
Abstract: (Author) Dense stereo matching plays a key role in 3D reconstruction. The capability of using deep learning in the stereo matching of remote sensing data is currently uncertain. This article investigated the application of deep learning–based stereo methods to aerial image series and proposed a deep learning–based multi-view dense matching framework. First, we applied three typical convolutional neural network models, MC-CNN, GC-Net, and DispNet, to aerial stereo pairs and compared the results with those of the SGM and commercial software, SURE. Second, on different data sets, the generalization ability of each network is evaluated by using direct transfer learning with models pretrained on other data sets and by fine-tuning with a small number of target training data. Third, we present a deep learning–based multi-view dense matching framework where the multi-view geometry is introduced to further refine matching results. Three sets of aerial images as the main data sets and two open-source sets of street images as auxiliary data sets are used for testing. Experiments show that, first, the performance of deep learning–based stereo methods is slightly better than that of traditional methods. Second, both GC-Net and MC-CNN have demonstrated good generalization ability and can obtain satisfactory results on aerial images using a model pretrained on several available stereo benchmarks. Third, multi-view geometry constraints can further improve the performance of deep learning–based methods, which is better than that of the multi-view–based SGM and SURE.
Record number: A2019-246
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.6.415
Online publication date: 01/06/2019
Online: https://doi.org/10.14358/PERS.85.6.415
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93002
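For intuition about what MC-CNN-style networks learn to replace, here is the classic hand-crafted matching cost: a sum-of-squared-differences (SSD) cost volume with winner-takes-all disparity selection. This is a toy numpy sketch on a synthetic pair with a known shift, not code from the paper; a learned cost would substitute a network's patch-similarity score for the SSD term.

```python
import numpy as np

def ssd_disparity(left, right, patch=3, max_disp=8):
    """Winner-takes-all disparity from an SSD cost: for each left pixel,
    try candidate disparities d and keep the one with the cheapest
    patch difference against the right image."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):  # stay in bounds
                cost = ((left[y-r:y+r+1, x-r:x+r+1]
                         - right[y-r:y+r+1, x-d-r:x-d+r+1]) ** 2).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left image shifted by 2 pixels,
# so the true disparity is 2 everywhere away from the borders.
rng = np.random.default_rng(2)
left = rng.random((12, 20))
right = np.roll(left, -2, axis=1)
disp = ssd_disparity(left, right)
```

On this noiseless pair the correct disparity gives exactly zero cost, so the estimate recovers the known shift in the image interior; real imagery needs the regularization (e.g. SGM) and learned costs the article compares.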
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 6 (June 2019) . - pp 415 - 424
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2019061 | SL | Journal | Documentation centre | Reading room | Available
Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
Title: Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network
Document type: Article/Communication
Authors: Jianfeng Huang; Xinchang Zhang; Qinchuan Xin; et al.
Publication year: 2019
Pages: pp 91 - 105
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] building detection
[IGN terms] high-resolution image
[IGN terms] convolutional neural network
[IGN terms] residual
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] urban area
Abstract: (Author) Automated extraction of buildings from remotely sensed data is important for a wide range of applications but challenging due to the difficulty of extracting semantic features from complex scenes such as urban areas. Recently developed fully convolutional neural networks (FCNs) have been shown to perform well on urban object extraction because of their outstanding feature learning and end-to-end pixel labeling abilities. The commonly used feature fusion or skip-connection refinement modules of FCNs often overlook the problem of feature selection and can reduce the learning efficiency of the networks. In this paper, we develop an end-to-end trainable gated residual refinement network (GRRNet) that fuses high-resolution aerial images and LiDAR point clouds for building extraction. A modified residual learning network is applied as the encoder part of GRRNet to learn multi-level features from the fused data, and a gated feature labeling (GFL) unit is introduced to reduce unnecessary feature transmission and refine classification results. The proposed model, GRRNet, is tested on a publicly available dataset with urban and suburban scenes. Comparison results show that GRRNet has competitive building extraction performance in comparison with other approaches. The source code of the developed GRRNet is made publicly available for studies.
Record number: A2019-206
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.019
Online publication date: 20/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92669
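The gated-fusion idea behind this abstract (a learned gate deciding how much of each modality's features to pass on) can be sketched as a per-channel convex combination of image and LiDAR features. The sigmoid-gate form below is a common construction assumed for illustration; it is not necessarily the authors' exact GFL unit, and all names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(img_feat, lidar_feat, w, b):
    """A learned gate g in (0, 1) weighs the two modalities per channel:
    fused = g * image + (1 - g) * lidar."""
    g = sigmoid(w @ np.concatenate([img_feat, lidar_feat]) + b)
    return g * img_feat + (1.0 - g) * lidar_feat, g

rng = np.random.default_rng(3)
dim = 4
img_feat = rng.normal(size=dim)     # stand-in for an image-branch feature
lidar_feat = rng.normal(size=dim)   # stand-in for a LiDAR-branch feature
w = rng.normal(scale=0.5, size=(dim, 2 * dim))
fused, g = gated_fusion(img_feat, lidar_feat, w, np.zeros(dim))
```

Because the gate is computed from both inputs, the network can learn, channel by channel, when height information from LiDAR should dominate over spectral information and vice versa.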
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 91 - 105
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019051 | RAB | Journal | Documentation centre | Storage L003 | Available
081-2019053 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019052 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019) Permalink
Journées de la recherche 2019 / Anonyme in Géomatique expert, n° 127 (April - May 2019) Permalink
Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019) Permalink
Vehicle detection in aerial images / Michael Ying Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 4 (April 2019) Permalink
DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn / Roberto Interdonato in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019) Permalink
Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019) Permalink
Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019) Permalink
Evaluating SAR-optical sensor fusion for aboveground biomass estimation in a Brazilian tropical forest / Aline Bernarda Debastiani in Annals of forest research, vol 62 n° 1 (January - June 2019) Permalink