Descriptor
IGN terms > computer science > artificial intelligence > machine learning > deep learning
apprentissage profond (deep learning)
Documents available in this category: 647
A deep learning approach to improve the retrieval of temperature and humidity profiles from a ground-based microwave radiometer / Xing Yan in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
[article]
Title: A deep learning approach to improve the retrieval of temperature and humidity profiles from a ground-based microwave radiometer
Document type: Article/Communication
Authors: Xing Yan; Chen Liang; Yize Jiang
Publication year: 2020
Pages: pp 8427-8437
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] diachronic analysis
[IGN terms] deep learning
[IGN terms] climate change
[IGN terms] neural network classification
[IGN terms] soil moisture
[IGN terms] atmospheric model
[IGN terms] microwave radiometer
[IGN terms] ground temperature
Abstract: (author) The ground-based microwave radiometer (MWR) retrieves atmospheric profiles with a high temporal resolution for temperature and humidity up to a height of 10 km. Such profiles are critical for understanding the evolution of climate systems. To improve the accuracy of profile retrieval in MWR, we developed a deep learning approach called batch normalization and robust neural network (BRNN). In contrast to the traditional backpropagation neural network (BPNN), which has previously been applied to MWR profile retrieval, BRNN reduces overfitting and has a greater capacity to describe nonlinear relationships between MWR measurements and atmospheric structure information. Validation of BRNN against radiosonde data demonstrates a good retrieval capability, showing a root-mean-square error of 1.70 K for temperature, 11.72% for relative humidity (RH), and 0.256 g/m³ for water vapor density. A detailed comparison with various inversion methods (BPNN, extreme gradient boosting, support vector machine, ridge regression, and random forest) has also been conducted in this research, using the same training and test data sets. From the comparison, we demonstrate that BRNN significantly improves retrieval accuracy, particularly for the retrieval of temperature and RH near the surface.
Record number: A2020-741
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2987896
Online publication date: 29/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2987896
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96371
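The record gives only the name of BRNN (batch normalization and robust neural network) and the validation metric, not the architecture itself. As a hedged illustration, the NumPy sketch below shows the two named ingredients — batch normalization and a robust loss (a Huber loss as a generic stand-in, since the paper's exact loss is not stated here) — together with the root-mean-square error quoted in the validation figures.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def huber_loss(pred, target, delta=1.0):
    """Quadratic near zero, linear in the tails: robust to outlier profiles."""
    r = pred - target
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.mean(np.where(np.abs(r) <= delta, quad, lin))

def rmse(pred, target):
    """Root-mean-square error, the validation metric quoted in the abstract."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

In a retrieval setting, `pred` and `target` would be the network's temperature or humidity profile and the collocated radiosonde profile.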
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8427-8437 [article]

Deep learning for detecting and classifying ocean objects: application of YoloV3 for iceberg–ship discrimination / Frederik Hass in ISPRS International journal of geo-information, vol 9 n° 12 (December 2020)
[article]
Title: Deep learning for detecting and classifying ocean objects: application of YoloV3 for iceberg–ship discrimination
Document type: Article/Communication
Authors: Frederik Hass; Jamal Jokar Arsanjani
Publication year: 2020
Pages: n° 758
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] Greenland
[IGN terms] hydrocarbon
[IGN terms] iceberg
[IGN terms] moiré radar image
[IGN terms] Sentinel SAR image
[IGN terms] ship
[IGN terms] ocean
[IGN terms] image thresholding
[IGN terms] maritime traffic
Abstract: (author) Synthetic aperture radar (SAR) plays a remarkable role in ocean surveillance, with capabilities of detecting oil spills, icebergs, and marine traffic both by day and by night, regardless of clouds and extreme weather conditions. The detection of ocean objects using SAR relies on well-established methods, mostly adaptive thresholding algorithms. In most waters, the dominant ocean objects are ships, whereas in Arctic waters the vast majority of objects are icebergs drifting in the ocean, which can be mistaken for ships in navigation and ocean surveillance. Since these objects can look very much alike in SAR images, determining what the objects actually are still relies on manual detection and human interpretation. With the increasing interest in the Arctic regions for marine transportation, it is crucial to develop novel approaches for automatic monitoring of the traffic in these waters with satellite data. Hence, this study proposes a deep learning model based on YoloV3 for discriminating icebergs and ships, which could be used for mapping ocean objects ahead of a journey. Using dual-polarization Sentinel-1 data, we pilot-tested our approach on a case study in Greenland. Our findings reveal that our approach is capable of training a deep learning model with reliable detection accuracy. Our methodical approach, along with the choice of data and classifiers, can be of great importance to climate change researchers, shipping industries, and biodiversity analysts. The main difficulties lay in creating training data for Arctic waters, and we conclude that future work must focus on training-data issues.
Record number: A2020-808
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9120758
Online publication date: 19/12/2020
Online: https://doi.org/10.3390/ijgi9120758
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96953
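YoloV3 itself is a large pretrained detector whose internals this record does not describe. As a small, hedged illustration of the post-processing every Yolo-style detector relies on to turn raw box predictions into iceberg/ship detections, here is a NumPy sketch of intersection-over-union and greedy non-maximum suppression (box coordinates and the 0.5 threshold are illustrative, not taken from the paper).

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep best-scoring boxes, drop overlaps."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        order = order[1:]
        order = np.array([j for j in order
                          if iou(boxes[i], boxes[j]) < thresh], dtype=int)
    return keep
```

Two near-identical boxes over the same SAR target collapse to the higher-scoring one; a distant third box survives.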
in ISPRS International journal of geo-information > vol 9 n° 12 (December 2020) . - n° 758 [article]

Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks / Felix Schiefer in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
[article]
Title: Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks
Document type: Article/Communication
Authors: Felix Schiefer; Teja Kattenborn; Annett Frick; et al.
Publication year: 2020
Pages: pp 205-215
General note: bibliography
Language: English (eng)
Descriptors: [IGN terms] deep learning
[IGN terms] tree (flora)
[IGN terms] forest map
[IGN terms] convolutional neural network classification
[IGN terms] plant species
[IGN terms] Black Forest massif
[IGN terms] high-resolution image
[IGN terms] UAV-acquired image
[IGN terms] RGB image
[IGN terms] forest inventory (techniques and methods)
[IGN terms] local forest inventory
[IGN terms] semantic segmentation
[IGN subject headings] Forest inventory
Abstract: (author) The use of unmanned aerial vehicles (UAVs) in vegetation remote sensing allows a time-flexible and cost-effective acquisition of very high-resolution imagery. Still, current methods for the mapping of forest tree species do not exploit the respective, rich spatial information. Here, we assessed the potential of convolutional neural networks (CNNs) and very high-resolution RGB imagery from UAVs for the mapping of tree species in temperate forests. We used multicopter UAVs to obtain very high-resolution (…)
Record number: A2020-706
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.10.015
Online publication date: 03/11/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.10.015
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96236
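The record does not describe how the authors fed very high-resolution UAV orthomosaics to a CNN, but semantic segmentation of large rasters typically requires cutting them into fixed-size, overlapping patches. A minimal sketch of such a tiling step, with tile and overlap sizes that are purely hypothetical:

```python
def tile_grid(width, height, tile=256, overlap=32):
    """Top-left corners of overlapping tiles covering a width x height raster."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Make sure the right and bottom edges are covered by a final tile.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Overlap lets per-tile predictions be blended at tile borders, which reduces seam artifacts in the stitched species map.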
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 205-215 [article]

Copies (1)
Barcode: 081-2020121 | Call number: RAB | Medium: Journal | Location: Documentation centre | Section: In reserve L003 | Availability: Available

MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds / Haifeng Luo in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
[article]
Title: MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds
Document type: Article/Communication
Authors: Haifeng Luo; Chongcheng Chen; Lina Fang; et al.
Publication year: 2020
Pages: pp 8301-8315
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] cognition
[IGN terms] lidar data
[IGN terms] feature extraction
[IGN terms] multiple representation
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) Semantic segmentation is one of the fundamental tasks in understanding and applying urban scene point clouds. Recently, deep learning has been introduced to the field of point cloud processing. However, compared to images, which are characterized by their regular data structure, a point cloud is a set of unordered points, which makes semantic segmentation a challenge. Consequently, the existing deep learning methods for semantic segmentation of point clouds achieve less success than those applied to images. In this article, we propose a novel method for urban scene point cloud semantic segmentation using deep learning. First, we use homogeneous supervoxels to reorganize raw point clouds, which effectively reduces the computational complexity and mitigates the nonuniform distribution of points. Then, we use supervoxels as basic processing units, which can further expand receptive fields to obtain more descriptive contexts. Next, a sparse autoencoder (SAE) is presented for feature embedding representations of the supervoxels. Subsequently, we propose a regional relation feature reasoning module (RRFRM) inspired by relation reasoning networks and design a multiscale regional relation feature segmentation network (MS-RRFSegNet) based on the RRFRM to semantically label supervoxels. Finally, the supervoxel-level inferences are transformed into point-level fine-grained predictions. The proposed framework is evaluated on two open benchmarks (Paris-Lille-3D and Semantic3D). The evaluation results show that the proposed method achieves competitive overall performance and outperforms other related approaches in several object categories. An implementation of our method is available at: https://github.com/HiphonL/MS_RRFSegNet
Record number: A2020-738
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2985695
Online publication date: 28/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2985695
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96363
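The abstract's pipeline uses supervoxels as processing units and, at the end, transforms supervoxel-level inferences back into point-level predictions. The SAE and relation-reasoning modules are beyond this record, but the two bookkeeping steps that bracket them can be sketched in NumPy, assuming each point stores the index of its supervoxel:

```python
import numpy as np

def supervoxel_means(features, point_to_sv, n_sv):
    """Average per-point features within each supervoxel (embedding input)."""
    point_to_sv = np.asarray(point_to_sv)
    sums = np.zeros((n_sv, features.shape[1]))
    counts = np.zeros(n_sv)
    np.add.at(sums, point_to_sv, features)   # unbuffered grouped accumulation
    np.add.at(counts, point_to_sv, 1)
    return sums / counts[:, None]

def points_from_supervoxels(sv_labels, point_to_sv):
    """Broadcast each supervoxel's predicted class to all of its points."""
    return np.asarray(sv_labels)[np.asarray(point_to_sv)]
```

`np.add.at` is used instead of fancy-indexed `+=` because it accumulates correctly when the same supervoxel index appears many times.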
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8301-8315 [article]

Nonlocal graph convolutional networks for hyperspectral image classification / Lichao Mou in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
[article]
Title: Nonlocal graph convolutional networks for hyperspectral image classification
Document type: Article/Communication
Authors: Lichao Mou; Xiaoqiang Lu; Xuelong Li; et al.
Publication year: 2020
Pages: pp 8246-8257
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] semi-supervised classification
[IGN terms] entropy
[IGN terms] graph
[IGN terms] hyperspectral image
[IGN terms] recurrent neural network
Abstract: (author) Over the past few years, hyperspectral image classification using deep networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has progressed significantly and gained increasing attention. In spite of being successful, these networks need an adequate supply of labeled training instances for supervised learning, which is quite costly to collect. On the other hand, unlabeled data can be accessed in almost arbitrary amounts. Hence, it is of great conceptual interest to explore networks that can exploit labeled and unlabeled data simultaneously for hyperspectral image classification. In this article, we propose a novel graph-based semisupervised network called nonlocal graph convolutional network (nonlocal GCN). Unlike existing CNNs and RNNs that receive pixels or patches of a hyperspectral image as inputs, this network takes in the whole image, including both labeled and unlabeled data. More specifically, a nonlocal graph is first calculated. Given this graph representation, a couple of graph convolutional layers are used to extract features. Finally, the semisupervised learning of the network is done by using a cross-entropy error over all labeled instances. Note that the nonlocal GCN is end-to-end trainable. We demonstrate in extensive experiments that, compared with state-of-the-art spectral classifiers and spectral–spatial classification networks, the nonlocal GCN offers competitive results and high-quality classification maps (with fine boundaries and without noisy scattered points of misclassification).
Record number: A2020-739
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2973363
Online publication date: 12/05/2020
Online: https://doi.org/10.1109/TGRS.2020.2973363
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96365
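The abstract spells out the generic GCN recipe: build a graph over the pixels, apply graph convolutional layers, and train with a cross-entropy error over the labeled instances only. A minimal NumPy sketch of those pieces follows; the nonlocal graph construction itself is not detailed in the record and is omitted, and the symmetric normalization shown is the standard GCN choice, not necessarily the paper's exact variant.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2} used by GCN layers."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    """One graph convolution: aggregate neighbors, project, apply ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

def masked_cross_entropy(logits, labels, labeled_mask):
    """Cross-entropy over labeled nodes only: the semisupervised objective."""
    z = logits - logits.max(axis=1, keepdims=True)      # stable log-softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_p[np.arange(len(labels)), labels]
    return float(nll[labeled_mask].mean())
```

Unlabeled pixels still shape the features through the graph aggregation in `gcn_layer`, while the mask keeps them out of the loss — which is what lets the network exploit both kinds of data at once.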
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8246-8257 [article]

Other records in this category:
A novel intelligent classification method for urban green space based on high-resolution remote sensing images / Zhiyu Xu in Remote sensing, vol 12 n° 22 (December-1 2020)
Semantic-based urban growth prediction / Marvin Mc Cutchan in Transactions in GIS, Vol 24 n° 6 (December 2020)
Understanding the role of individual units in a deep neural network / David Bau in Proceedings of the National Academy of Sciences of the United States of America PNAS, vol 117 n° 48 (1 December 2020)
Understanding the synergies of deep learning and data fusion of multispectral and panchromatic high resolution commercial satellite imagery for automated ice-wedge polygon detection / Chandi Witharana in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
Unsupervised deep joint segmentation of multitemporal high-resolution images / Sudipan Saha in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
Active and incremental learning for semantic ALS point cloud segmentation / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 169 (November 2020)
Bayesian-deep-learning estimation of earthquake location from single-station observations / S. Mostafa Mousavi in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Bayesian transfer learning for object detection in optical remote sensing images / Changsheng Zhou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
A deep learning framework for matching of SAR and optical imagery / Lloyd Haydn Hughes in ISPRS Journal of photogrammetry and remote sensing, vol 169 (November 2020)
High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network / Fengpeng Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Learning-based hyperspectral imagery compression through generative neural networks / Chubo Deng in Remote sensing, vol 12 n° 21 (November 2020)
River ice segmentation with deep learning / Abhineet Singh in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Sea surface temperature and high water temperature occurrence prediction using a long short-term memory model / Minkyu Kim in Remote sensing, vol 12 n° 21 (November 2020)
The construction of sound speed field based on back propagation neural network in the global ocean / Junting Wang in Marine geodesy, vol 43 n° 6 (November 2020)
Urban expansion in Auckland, New Zealand: a GIS simulation via an intelligent self-adapting multiscale agent-based model / Tingting Xu in International journal of geographical information science IJGIS, vol 34 n° 11 (November 2020)
Application of convolutional and recurrent neural networks for buried threat detection using ground penetrating radar data / Mahdi Moalla in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation / Huan Ning in Annals of GIS, vol 26 n° 4 (October 2020)
Compensation of geometric parameter errors for terrestrial laser scanner by integrating intensity correction / Wanli Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution / Vitor Martins in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
Ground-based remote sensing of forests exploiting GNSS signals / Leila Guerriero in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)