Descripteur
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > apprentissage profond
apprentissage profond
Documents disponibles dans cette catégorie (647)
Unsupervised multi-level feature extraction for improvement of hyperspectral classification / Qiaoqiao Sun in Remote sensing, vol 13 n° 8 (April-2 2021)
[article]
Titre : Unsupervised multi-level feature extraction for improvement of hyperspectral classification
Type de document : Article/Communication
Auteurs : Qiaoqiao Sun, Auteur ; Xuefeng Liu, Auteur ; Salah Bourennane, Auteur
Année de publication : 2021
Article en page(s) : n° 1602
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification non dirigée
[Termes IGN] codage
[Termes IGN] convolution (signal)
[Termes IGN] déconvolution
[Termes IGN] échantillonnage d'image
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image hyperspectrale
[Termes IGN] observation multiniveaux
Résumé : (auteur) Deep learning models have strong abilities in learning features and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples for HSI is labor-intensive. In addition, single-level features from a single layer are usually considered, which may result in the loss of important information. Using multiple networks to obtain multi-level features is one solution, but at the cost of longer training time and greater computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked from fully 3D convolutional layers and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. In addition, the 3D-CAE can be trained in an unsupervised way without labeled samples. Moreover, the multi-level features are obtained directly from the encoded layers at different scales and resolutions, which is more efficient than using multiple networks to get them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method shows great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
Numéro de notice : A2021-380
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.3390/rs13081602
Date de publication en ligne : 20/04/2021
En ligne : https://doi.org/10.3390/rs13081602
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97628
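The multi-level idea in this abstract — taking features from successive encoder stages at different spatial scales — can be sketched with a toy numpy model. This is only an illustration: average pooling stands in for the paper's learned 3D convolutions, and every name here is hypothetical.

```python
import numpy as np

def encoder_stage(x):
    """One toy 'encoder' stage: 2x2 spatial average pooling over an
    (H, W, B) hyperspectral cube, halving the spatial resolution."""
    h, w, b = x.shape
    h2, w2 = h // 2, w // 2
    return x[:2 * h2, :2 * w2, :].reshape(h2, 2, w2, 2, b).mean(axis=(1, 3))

def multi_level_features(cube, n_levels=3):
    """Collect one feature map per encoder level, coarser at each level,
    instead of keeping only the deepest layer's output."""
    feats, x = [], cube
    for _ in range(n_levels):
        x = encoder_stage(x)
        feats.append(x)
    return feats

cube = np.random.rand(16, 16, 8)   # toy HSI patch: 16x16 pixels, 8 bands
levels = multi_level_features(cube)
print([f.shape for f in levels])   # [(8, 8, 8), (4, 4, 8), (2, 2, 8)]
```

The point of the sketch is that the per-level maps come from one forward pass, which is why a single multi-scale encoder is cheaper than training several separate networks.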
in Remote sensing > vol 13 n° 8 (April-2 2021) . - n° 1602

Automatic atmospheric correction for shortwave hyperspectral remote sensing data using a time-dependent deep neural network / Jian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : Automatic atmospheric correction for shortwave hyperspectral remote sensing data using a time-dependent deep neural network
Type de document : Article/Communication
Auteurs : Jian Sun, Auteur ; Fangcao Xu, Auteur ; Guido Cervone, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : pp 117 - 131
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] correction atmosphérique
[Termes IGN] détection de cible
[Termes IGN] image hyperspectrale
[Termes IGN] modèle de transfert radiatif
[Termes IGN] rayonnement solaire
[Termes IGN] réflectivité
Résumé : (auteur) Atmospheric correction is an essential step in hyperspectral imaging and target detection from spectrometer remote sensing data. State-of-the-art atmospheric correction approaches require either extensive field experiments or prior knowledge of atmospheric characteristics to improve predicted accuracy; they are computationally expensive and unsuitable for real-time applications. To take full advantage of remote sensing's ability to acquire data quickly and reliably over a large area, an automatic and efficient processing tool is required for atmospheric correction. In this paper, we propose a time-dependent neural network for automatic atmospheric correction and target detection using multi-scan hyperspectral data under different elevation angles. In addition to the total radiance, the collection day and time are incorporated to improve the time-dependency of the network and represent the seasonal and diurnal characteristics of the atmosphere and solar radiation. Results show that the proposed network can accurately provide atmospheric characteristics and estimate precise reflectivity spectra, with 95.72% average accuracy across different materials, including vegetation, sea ice, and ocean. Additional experiments investigate the network's temporal dependency and its performance on missing data. The error analysis confirms that the proposed network can estimate atmospheric characteristics under both seasonally and diurnally varying environments and handle the influence of missing data. Both the predicted results and the error analysis are promising and demonstrate that the network can provide accurate atmospheric correction and target detection in real time.
Numéro de notice : A2021-208
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2021.02.007
Date de publication en ligne : 24/02/2021
En ligne : https://doi.org/10.1016/j.isprsjprs.2021.02.007
Format de la ressource électronique : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97186
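The abstract's day/time conditioning can be illustrated with a cyclical encoding of the acquisition date and hour appended to the radiance input. The paper does not specify its exact feature scheme, so this is only a plausible sketch with hypothetical names.

```python
import numpy as np

def time_features(day_of_year, hour):
    """Cyclical (sin/cos) encoding of acquisition date and time, so that
    Dec 31 sits next to Jan 1 and 23:00 next to 00:00."""
    d = 2 * np.pi * day_of_year / 365.0
    h = 2 * np.pi * hour / 24.0
    return np.array([np.sin(d), np.cos(d), np.sin(h), np.cos(h)])

def network_input(radiance, day_of_year, hour):
    """Concatenate a radiance spectrum with the time encoding to form
    one time-dependent input vector."""
    return np.concatenate([radiance, time_features(day_of_year, hour)])

x = network_input(np.random.rand(100), day_of_year=172, hour=14.5)
print(x.shape)  # (104,)
```

A cyclical encoding is one common way to let a network represent seasonal and diurnal periodicity; raw day/hour scalars would put midnight and 23:59 far apart.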
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 117 - 131
Exemplaires (3)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2021041 | SL | Revue | Centre de documentation | Revues en salle | Disponible
081-2021043 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2021042 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt

A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery
Type de document : Article/Communication
Auteurs : Lucas Prado Osco, Auteur ; Mauro Dos Santos de Arruda, Auteur ; Diogo Nunes Gonçalves, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : pp 1 - 17
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] carte agricole
[Termes IGN] Citrus sinensis
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] comptage
[Termes IGN] cultures
[Termes IGN] détection d'objet
[Termes IGN] extraction de la végétation
[Termes IGN] gestion durable
[Termes IGN] image captée par drone
[Termes IGN] maïs (céréale)
[Termes IGN] rendement agricole
Résumé : (auteur) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAVs) is now a cost-effective option for capturing images of croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in a single step. Thus, an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images remains an important need for supporting the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, including in highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e., recently planted and mature plants) and (b) a citrus orchard (Citrus sinensis Pera). The two datasets cover different plant-density scenarios, locations, crop types, sensors, and dates; this scheme was used to demonstrate the robustness of the proposed approach and to allow a broader discussion of the method. A two-branch architecture was implemented in our CNN method, in which information obtained within the plantation-row branch is passed to the plant detection branch and fed back to the row branch; both are then refined by a Multi-Stage Refinement method.
In the corn plantation datasets (covering both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure of 0.876. These results were superior to those of other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model on a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE of 1.409 citrus trees per patch, an MRE of 0.0615, precision of 0.922, recall of 0.911, and an F-measure of 0.965. For citrus plantation-row detection, our approach yielded precision, recall, and F-measure scores of 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plantation-rows in UAV images of different crop types, may be applied in future decision-making models, and could contribute to the sustainable management of agricultural systems.
Numéro de notice : A2021-205
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2021.01.024
Date de publication en ligne : 13/02/2021
En ligne : https://doi.org/10.1016/j.isprsjprs.2021.01.024
Format de la ressource électronique : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
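Counting plants from a detection confidence map, as a network like the one described here must do in its output head, reduces at its simplest to thresholded local-maxima counting. This toy version is not the authors' implementation, only a sketch of that final counting step.

```python
import numpy as np

def count_peaks(conf, thresh=0.5):
    """Count detections as local maxima of a confidence map that exceed a
    threshold: each plant is assumed to produce one confidence peak."""
    h, w = conf.shape
    padded = np.pad(conf, 1, constant_values=-np.inf)
    count = 0
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]   # 3x3 neighbourhood of (i, j)
            if conf[i, j] >= thresh and conf[i, j] == window.max():
                count += 1
    return count

conf = np.zeros((8, 8))   # toy confidence map with two synthetic plants
conf[2, 2] = 0.9
conf[5, 6] = 0.7
print(count_peaks(conf))  # 2
```

In a real pipeline the peak coordinates, not just the count, would be kept, since the method also geolocates each plant.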
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 1 - 17
A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery / Seth Goodman in Transactions in GIS, Vol 25 n° 2 (April 2021)
[article]
Titre : A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery
Type de document : Article/Communication
Auteurs : Seth Goodman, Auteur ; Ariel BenYishay, Auteur ; Daniel Runfola, Auteur
Année de publication : 2021
Article en page(s) : pp 674 - 691
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] conflit
[Termes IGN] image Landsat-8
[Termes IGN] implémentation (informatique)
[Termes IGN] Nigéria
[Termes IGN] prédiction
[Termes IGN] réseau neuronal convolutif
Résumé : (Auteur) Convolutional neural networks (CNNs) trained on satellite imagery have been successfully used to generate measures of development indicators, such as poverty, in developing nations. This article explores a CNN-based approach leveraging Landsat 8 imagery to predict locations of conflict-related deaths. Using Nigeria as a case study, we use the Armed Conflict Location & Event Data (ACLED) dataset to identify locations of conflict events that did or did not result in a death. Imagery for each location is used as input to train a CNN to distinguish fatal from non-fatal events. Using 2014 imagery, we are able to predict the outcome of conflict events in the following year (2015) with 80% accuracy. While our approach does not replace the need for causal studies of the drivers of conflict deaths, it provides a low-cost approach to prediction that requires only publicly available imagery. The findings suggest that the information contained in moderate-resolution imagery can be used to predict the likelihood of a conflict-related death at a given location in Nigeria the following year, and that CNN-based methods of estimating development-related indicators may be effective in applications beyond those explored in the literature.
Numéro de notice : A2021-361
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1111/tgis.12661
Date de publication en ligne : 13/07/2020
En ligne : https://doi.org/10.1111/tgis.12661
Format de la ressource électronique : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97625
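A setup like this one needs a patch of imagery per event location before any CNN is involved. A minimal sketch of that step, assuming a north-up image grid with uniform angular resolution; the geotransform, resolution, and patch size are illustrative, not those of the article.

```python
import numpy as np

def extract_patch(image, lat, lon, origin_lat, origin_lon, res_deg, size=33):
    """Crop a fixed-size patch centred on an event location, given the
    grid origin (top-left corner) and resolution in degrees per pixel."""
    row = int(round((origin_lat - lat) / res_deg))  # latitude decreases downward
    col = int(round((lon - origin_lon) / res_deg))
    half = size // 2
    return image[row - half:row + half + 1, col - half:col + half + 1]

img = np.random.rand(200, 200)  # toy single-band scene
patch = extract_patch(img, lat=9.0, lon=8.0,
                      origin_lat=10.0, origin_lon=7.0, res_deg=0.01)
print(patch.shape)  # (33, 33)
```

Each such patch, paired with the ACLED fatal/non-fatal label for its event, would form one training example for a binary classifier.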
in Transactions in GIS > Vol 25 n° 2 (April 2021) . - pp 674 - 691

A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection / Xi Wu in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection
Type de document : Article/Communication
Auteurs : Xi Wu, Auteur ; Zhenwei Shi, Auteur ; Zhengxia Zou, Auteur
Année de publication : 2021
Article en page(s) : pp 87 - 104
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] altitude
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection des nuages
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion de données
[Termes IGN] image Gaofen
[Termes IGN] information géographique
[Termes IGN] latitude
[Termes IGN] longitude
[Termes IGN] modèle statistique
[Termes IGN] neige
[Termes IGN] Normalized Difference Snow Index
Résumé : (auteur) Geographic information such as altitude, latitude, and longitude are common but fundamental meta-records in remote sensing image products. In this paper, it is shown that this group of records provides important priors for cloud and snow detection in remote sensing imagery. The intuition comes from common geographical knowledge that is important but often overlooked. For example, snow is unlikely to exist in low-latitude or low-altitude areas, and clouds in different geographic regions may have different visual appearances. Previous cloud and snow detection methods simply ignore such information and perform detection solely on the image data (band reflectance). Because these priors are neglected, most of these methods struggle to obtain satisfactory performance in complex scenarios (e.g., cloud-snow coexistence). In this paper, a novel neural network called the "Geographic Information-driven Network (GeoInfoNet)" is proposed for cloud and snow detection. In addition to the image data, the model integrates geographic information at both the training and detection phases. A "geographic information encoder" is specially designed, which encodes the altitude, latitude, and longitude of the imagery into a set of auxiliary maps and then feeds them to the detection network. The proposed network can be trained end to end, with dense robust features extracted and fused. A new dataset called "Levir_CS" for cloud and snow detection is built, which contains 4,168 Gaofen-1 satellite images with corresponding geographic records and is over 20× larger than other datasets in this field. On "Levir_CS", experiments show that the method achieves 90.74% intersection over union for cloud and 78.26% intersection over union for snow, outperforming other state-of-the-art cloud and snow detection methods by a large margin. Feature visualizations also show that the method learns important priors consistent with common geographic knowledge. The proposed dataset and the code of GeoInfoNet are available at https://github.com/permanentCH5/GeoInfoNet.
Numéro de notice : A2021-209
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2021.01.023
Date de publication en ligne : 22/02/2021
En ligne : https://doi.org/10.1016/j.isprsjprs.2021.01.023
Format de la ressource électronique : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97187
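The "geographic information encoder" described above broadcasts scene-level records (altitude, latitude, longitude) into per-pixel auxiliary maps that are stacked with the image bands. A minimal sketch of that fusion, with hypothetical normalisation ranges rather than the paper's actual encoding:

```python
import numpy as np

def geo_auxiliary_maps(h, w, lat, lon, alt):
    """Broadcast scene-level geographic records into constant per-pixel
    maps, normalised to [0, 1] with assumed global ranges."""
    vals = [(lat + 90) / 180, (lon + 180) / 360, alt / 9000.0]
    return np.stack([np.full((h, w), v) for v in vals], axis=-1)

def fuse(bands, lat, lon, alt):
    """Concatenate the image bands with the auxiliary geographic maps,
    producing the channel-stacked input a detection network would see."""
    h, w, _ = bands.shape
    return np.concatenate([bands, geo_auxiliary_maps(h, w, lat, lon, alt)],
                          axis=-1)

x = fuse(np.random.rand(64, 64, 4), lat=35.2, lon=112.9, alt=1500.0)
print(x.shape)  # (64, 64, 7): 4 image bands + 3 geographic channels
```

Stacking the records as extra channels lets ordinary convolutions see the geographic priors at every pixel, which is the simplest way to make a detector location-aware.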
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 87 - 104
Graph convolutional networks by architecture search for PolSAR image classification / Hongying Liu in Remote sensing, vol 13 n° 7 (April-1 2021)
Parsing of urban facades from 3D point clouds based on a novel multi-view domain / Wei Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
Precipitable water vapor fusion based on a generalized regression neural network / Bao Zhang in Journal of geodesy, vol 95 n° 4 (April 2021)
Scene classification of remotely sensed images via densely connected convolutional neural networks and an ensemble classifier / Qimin Cheng in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
A shape transformation-based dataset augmentation framework for pedestrian detection / Zhe Chen in International journal of computer vision, vol 129 n° 4 (April 2021)
Unsupervised pansharpening based on self-attention mechanism / Ying Qu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
Application of a multi-layer artificial neural network in a 3-D global electron density model using the long-term observations of COSMIC, Fengyun-3C, and Digisonde / Li Wang in Space weather, vol 19 n° 3 (March 2021)
Feature detection and description for image matching: from hand-crafted design to deep learning / Lin Chen in Geo-spatial Information Science, vol 24 n° 1 (March 2021)
A graph-based semi-supervised approach to classification learning in digital geographies / Pengyuan Liu in Computers, Environment and Urban Systems, vol 86 (March 2021)
Graph convolutional autoencoder model for the shape coding and cognition of buildings in maps / Xiongfeng Yan in International journal of geographical information science IJGIS, vol 35 n° 3 (March 2021)
Lightweight convolutional neural network-based pedestrian detection and re-identification in multiple scenarios / Xiao Ke in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Machine learning in ground motion prediction / Farid Khosravikia in Computers & geosciences, vol 148 (March 2021)
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Robust unsupervised small area change detection from SAR imagery using deep learning / Xinzheng Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Suitability assessment of urban land use in Dalian, China using PNN and GIS / Ziqian Kang in Natural Hazards, vol 106 n° 1 (March 2021)
Coastal water remote sensing from sentinel-2 satellite data using physical, statistical, and neural network retrieval approach / Frank S. Marzano in IEEE Transactions on geoscience and remote sensing, vol 59 n° 2 (February 2021)
A comparative study of heterogeneous ensemble-learning techniques for landslide susceptibility mapping / Zhice Fang in International journal of geographical information science IJGIS, vol 35 n° 2 (February 2021)
Crop identification by massive processing of multiannual satellite imagery for EU common agriculture policy subsidy control / Adolfo Lozano-Tello in European journal of remote sensing, vol 54 n° 1 (2021)