Descriptor
IGN terms > computer science > artificial intelligence > machine learning > unsupervised learning
Documents available in this category (65)
A deep translation (GAN) based change detection network for optical and SAR remote sensing images / Xinghua Li in ISPRS Journal of photogrammetry and remote sensing, vol 179 (September 2021)
[article]
Title: A deep translation (GAN) based change detection network for optical and SAR remote sensing images Document type: Article/Communication Authors: Xinghua Li; Zhengshun Du; Yanyuan Huang; Zhenyu Tan Year of publication: 2021 Pages: pp 14 - 34 General note: Bibliography Languages: English (eng) Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] change detection
[IGN terms] very high resolution image
[IGN terms] optical image
[IGN terms] speckled radar image
[IGN terms] Sentinel SAR image
[IGN terms] robust method
[IGN terms] polarization
[IGN terms] generative adversarial network
[IGN terms] deep neural network
[IGN terms] region of interest
Abstract: (Publisher) With the development of space-based imaging technology, an ever-larger number of images with different modalities and resolutions is available. Optical images reflect the abundant spectral information and geometric shape of ground objects, but their quality degrades easily in poor atmospheric conditions. Although synthetic aperture radar (SAR) images cannot provide the spectral features of the region of interest (ROI), they can capture all-weather, all-time polarization information. Optical and SAR images thus carry a great deal of complementary information, which is of great significance for change detection (CD) in poor weather. However, because the imaging mechanisms of optical and SAR images differ, it is difficult to perform CD on them directly with traditional difference or ratio algorithms. Many recent CD methods introduce image translation to reduce this difference, but the results are obtained by ordinary algebraic methods and threshold segmentation, with limited accuracy. To this end, this work proposes a deep translation based change detection network (DTCDN) for optical and SAR images. The deep translation first maps images from one domain (e.g., optical) to another (e.g., SAR) through a cyclic structure into the same feature space. With similar characteristics after deep translation, the images become comparable. Unlike most previous research, the translation results are fed to a supervised CD network that uses deep context features to separate unchanged pixels from changed pixels. In the experiments, the proposed DTCDN was tested on four representative data sets from Gloucester, California, and Shuguang village. Comparison with state-of-the-art methods confirmed the effectiveness and robustness of the proposed method.
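The "cyclic structure" the abstract refers to is the cycle-consistency constraint of translation GANs: mapping optical to SAR and back should recover the input. A minimal sketch of that loss, with toy linear stand-ins for the two generators (the names `G`, `F` and the weights are illustrative assumptions, not the paper's DTCDN, whose generators are deep CNNs):

```python
import numpy as np

# Hypothetical stand-ins for the two generators of the cyclic structure:
# G maps the optical domain to the SAR domain, F maps back.
# Toy linear maps keep the sketch runnable; DTCDN uses deep CNNs here.
def G(x, w=0.5):          # optical -> SAR (toy)
    return w * x

def F(y, w=2.0):          # SAR -> optical (toy)
    return w * y

def cycle_consistency_loss(x):
    """L1 cycle loss || F(G(x)) - x ||_1: the constraint that pushes
    translated images into a shared, comparable feature space."""
    return np.abs(F(G(x)) - x).mean()

optical = np.random.rand(8, 8)
loss = cycle_consistency_loss(optical)   # 0 here, because F inverts G exactly
```

Once both inputs live in the same domain, a supervised CD network can compare them pixel-by-pixel instead of relying on raw algebraic differencing.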
Record number: A2021-574 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.07.007 Online publication date: 23/07/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.07.007 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98174
in ISPRS Journal of photogrammetry and remote sensing > vol 179 (September 2021) . - pp 14 - 34 [article]

Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network / Jussi Leinonen in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 9 (September 2021)
[article]
Title: Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network Document type: Article/Communication Authors: Jussi Leinonen; Daniele Nerini; Alexis Berne; et al. Year of publication: 2021 Pages: pp 7211 - 7223 General note: Bibliography Languages: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] meteorological data
[IGN terms] cloud thickness
[IGN terms] low resolution image
[IGN terms] GOES image
[IGN terms] atmospheric model
[IGN terms] precipitation
[IGN terms] stochastic process
[IGN terms] downscaling
[IGN terms] generative adversarial network
[IGN terms] convolutional neural network
[IGN terms] Switzerland
Abstract: (author) Generative adversarial networks (GANs) have recently been adopted for super-resolution, an application closely related to what the atmospheric sciences call "downscaling": improving the spatial resolution of low-resolution images. The ability of conditional GANs to generate an ensemble of solutions for a given input lends itself naturally to stochastic downscaling, but the stochastic nature of GANs is not usually considered in super-resolution applications. Here, we introduce a recurrent, stochastic super-resolution GAN that can generate ensembles of time-evolving high-resolution atmospheric fields from an input consisting of a low-resolution sequence of images of the same field. We test the GAN on two data sets: one of radar-measured precipitation from Switzerland, the other of cloud optical thickness derived from the Geostationary Operational Environmental Satellite 16 (GOES-16). We find that the GAN can generate realistic, temporally consistent super-resolution sequences for both data sets. The statistical properties of the generated ensemble are analyzed using rank statistics, a method adapted from ensemble weather forecasting; these analyses indicate that the GAN produces close to the correct amount of variability in its outputs. As the GAN generator is fully convolutional, it can be applied after training to input images larger than those used to train it. It can also generate time series much longer than the training sequences, as demonstrated by applying the generator to a three-month data set of the precipitation radar data. The source code of our GAN is available at https://github.com/jleinonen/downscaling-rnn-gan.
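The rank-statistics verification mentioned in the abstract works by ranking each observed value among the members of the generated ensemble; over many cases, a well-calibrated ensemble yields a flat rank histogram. A minimal sketch of that bookkeeping (function names and the 9-member toy ensemble are illustrative assumptions):

```python
import numpy as np

def rank_of_observation(obs, ensemble):
    """Rank of the observed value among the N ensemble members (0..N).
    A flat histogram of these ranks over many cases indicates that the
    ensemble spread is well calibrated."""
    return int(np.sum(ensemble < obs))

def rank_histogram(obs_values, ensembles):
    """Count how often the observation falls in each rank bin."""
    n_members = ensembles.shape[1]
    ranks = [rank_of_observation(o, e) for o, e in zip(obs_values, ensembles)]
    return np.bincount(ranks, minlength=n_members + 1)

rng = np.random.default_rng(0)
obs = rng.normal(size=1000)
ens = rng.normal(size=(1000, 9))   # toy 9-member ensemble, same distribution
hist = rank_histogram(obs, ens)    # roughly flat, since spread matches
```

An under-dispersive ensemble would instead pile counts into the outermost bins (observations falling outside the ensemble range), which is how the paper diagnoses whether the GAN produces the correct amount of variability.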
Record number: A2021-645 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2020.3032790 Online publication date: 02/11/2020 Online: https://doi.org/10.1109/TGRS.2020.3032790 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98349
in IEEE Transactions on geoscience and remote sensing > Vol 59 n° 9 (September 2021) . - pp 7211 - 7223 [article]

Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning / Xin Jiang in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning Document type: Article/Communication Authors: Xin Jiang; Shijing Liang; Xinyue He; et al. Year of publication: 2021 Pages: pp 36 - 50 General note: Bibliography Languages: English (eng) Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] risk mapping
[IGN terms] processing chain
[IGN terms] classification by convolutional neural network
[IGN terms] Yangtze River (China)
[IGN terms] Google Earth Engine
[IGN terms] speckled radar image
[IGN terms] Sentinel SAR image
[IGN terms] flood
[IGN terms] digital surface model
[IGN terms] image segmentation
[IGN terms] superpixel
[IGN terms] hydrological monitoring
Abstract: (author) Synthetic aperture radar (SAR) has great potential for timely monitoring of flood information, as it penetrates cloud cover during flood events. Moreover, the proliferation of SAR satellites with high spatial and temporal resolution provides a tremendous opportunity to understand flood risk and respond to it quickly. However, traditional algorithms for extracting flood inundation from SAR often require manual parameter tuning or data annotation, which is a challenge for rapid automated mapping of large and complex flooded scenarios. To address this issue, we propose a segmentation algorithm for automatic flood mapping in near-real time, over vast areas, and in all weather conditions, integrating Sentinel-1 SAR imagery with an unsupervised machine learning approach named Felz-CNN. The algorithm consists of three phases: (i) superpixel generation; (ii) convolutional neural network based featurization; (iii) superpixel aggregation. We evaluated the Felz-CNN algorithm by mapping flood inundation during the 2020 Yangtze River flood, covering a total study area of 1,140,300 km2. When validated on fine-resolution Planet satellite imagery, the algorithm identified flood extent with producer and user accuracies of 93% and 94%, respectively. The results indicate the usefulness of our unsupervised approach for flood mapping. We also overlaid the post-disaster inundation map with a 10-m resolution global land cover map (FROM-GLC10) to assess the damage to different land cover types. Of these, cropland and residential settlements were the most severely affected, with inundated areas of 9,430.36 km2 and 1,397.50 km2, respectively, in agreement with statistics from the relevant agencies.
Compared with traditional supervised classification algorithms that require time-consuming data annotation, our unsupervised algorithm can be deployed directly on high-performance computing platforms such as Google Earth Engine and PIE-Engine to generate a large-area map of flood-affected regions within minutes, without time-consuming data downloading and processing. This efficiency enables fast and effective monitoring of flood conditions to aid disaster governance and mitigation globally. Record number: A2021-560 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.05.019 Online publication date: 09/06/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.05.019 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98118
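Phase (iii) of the three-phase pipeline, superpixel aggregation, decides flood/non-flood at the superpixel level rather than per pixel. A minimal sketch of that step under simplifying assumptions: it pools raw SAR backscatter per superpixel and thresholds it (open water is a weak scatterer), whereas the paper's Felz-CNN pools CNN features; the function name, threshold, and dB values are illustrative:

```python
import numpy as np

def aggregate_superpixels(backscatter, sp_labels, threshold=-15.0):
    """Pool a per-pixel feature (here, SAR backscatter in dB) over each
    superpixel and flag whole superpixels as water where the pooled value
    falls below a threshold. Open water is a weak scatterer, so a low
    mean backscatter indicates flooding."""
    flood_map = np.zeros_like(sp_labels, dtype=bool)
    for sp in np.unique(sp_labels):
        mask = sp_labels == sp
        if backscatter[mask].mean() < threshold:
            flood_map[mask] = True
    return flood_map

# Toy scene: two superpixels, one dark (water-like), one bright (land-like).
labels = np.array([[0, 0], [1, 1]])
sigma0 = np.array([[-22.0, -21.0], [-8.0, -9.0]])
flood = aggregate_superpixels(sigma0, labels)   # only superpixel 0 flagged
```

Deciding at the superpixel level is what removes the need for per-pixel annotation: the segmentation supplies spatial coherence, and the pooled statistic separates the two classes without training labels.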
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 36 - 50 [article]

Unsupervised denoising for satellite imagery using wavelet directional cycleGAN / Shaoyang Kong in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: Unsupervised denoising for satellite imagery using wavelet directional cycleGAN Document type: Article/Communication Authors: Shaoyang Kong; Cheng Hu; Rui Wang; et al. Year of publication: 2021 Pages: pp 6573 - 6585 General note: Bibliography Languages: English (eng) Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] noise filtering
[IGN terms] radar image
[IGN terms] Insecta
[IGN terms] radar polarimetry
[IGN terms] generative adversarial network
[IGN terms] wavelet transform
Abstract: (author) The measurement of insect radar cross section (RCS) is a prerequisite for studies such as the quantitative estimation of insect population density and the identification of insects using entomological radar. In this article, we established a multiband polarimetric RCS measurement system in a microwave anechoic chamber. The targets' range profiles at different frequencies can be obtained from the step frequency continuous wave, while clutter elimination and polarimetric calibration were applied to reduce measurement error. The multifrequency (X-/Ku-/Ka-band) polarimetric RCSs of 169 insects belonging to 21 species were measured and reported, the first systematic presentation of the multifrequency polarimetric RCSs of insects. The mass of the specimens ranges from 25.6 to 964 mg, and their ventral-aspect RCSs range from −57.47 to −32.17 dBsm at X-band, from −48.27 to −33.87 dBsm at Ku-band, and from −69.76 to −36.40 dBsm at Ka-band. For small insects of less than 300 mg, the HH polarization RCS increases rapidly with frequency at X-band and fluctuates with frequency at Ku-band, while the VV polarization RCS increases monotonically with frequency at X- and Ku-band. For larger insects, the HH polarization RCS decreases slowly with frequency at X-band and fluctuates with frequency at Ku-band, while the VV polarization RCS increases with frequency, reaches a maximum, and finally fluctuates with frequency. At Ka-band, the measured polarization RCS versus frequency curves are smooth and all show similar variation. The measurement results verify the effectiveness and accuracy of the established system.
Record number: A2021-631 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2020.3025601 Online publication date: 08/10/2020 Online: https://doi.org/10.1109/TGRS.2020.3025601 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98281
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6573 - 6585 [article]

A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases / Chun Yang in ISPRS Journal of photogrammetry and remote sensing, vol 177 (July 2021)
[article]
Title: A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases Document type: Article/Communication Authors: Chun Yang; Franz Rottensteiner; Christian Heipke Year of publication: 2021 Pages: pp 38 - 56 General note: Bibliography Languages: English (eng) Descriptor: [IGN subject headings] Geospatial databases
[IGN terms] Germany
[IGN terms] deep learning
[IGN terms] hierarchical approach
[IGN terms] automatic object classification
[IGN terms] classification by convolutional neural network
[IGN terms] aerial image
[IGN terms] join
[IGN terms] geographic feature
[IGN terms] land cover
[IGN terms] optimization (mathematics)
[IGN terms] land use
Abstract: (Author) Land use as contained in geospatial databases constitutes an essential input for applications such as urban management, regional planning, and environmental monitoring. In this paper, a hierarchical deep learning framework is proposed to verify land use information. A two-step strategy is applied. First, given high-resolution aerial images, the land cover information is determined; to achieve this, an encoder-decoder based convolutional neural network (CNN) is proposed. Second, the pixel-wise land cover information, along with the aerial images, serves as input for another CNN to classify land use. Because the object catalogue of geospatial databases is frequently constructed in a hierarchical manner, we propose a new CNN-based method aiming to predict land use at multiple levels hierarchically and simultaneously. A so-called Joint Optimization (JO) is proposed in which predictions are made by selecting the hierarchical tuple, over all levels, that has the maximum joint class score, providing consistent results across the different levels. The experiments show that the CNN relying on JO outperforms previous results, achieving an overall accuracy of up to 92.5%. In addition to the individual experiments on two test sites, we investigate whether data with different characteristics can improve the results of land cover and land use classification when processed together. To do so, we combine the two datasets and undertake additional experiments. The results show that adding more data helps both land cover and land use classification, especially the identification of underrepresented categories, despite the differing characteristics.
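The Joint Optimization step described in the abstract selects, among the hierarchy-consistent (coarse, fine) tuples, the one with the maximum joint class score, so the two levels can never contradict each other. A minimal sketch under illustrative assumptions (the two-level catalogue, class names, and log-scores below are invented for the example; the paper's hierarchy is that of a real geospatial database):

```python
import numpy as np

# Hypothetical two-level land-use catalogue: each fine class is valid
# under exactly one coarse class.
hierarchy = {                  # fine class -> its coarse parent
    "residential": "settlement",
    "industrial": "settlement",
    "cropland": "agriculture",
    "pasture": "agriculture",
}

def joint_optimization(coarse_scores, fine_scores):
    """Pick the (coarse, fine) tuple with the maximum joint class score,
    restricted to tuples consistent with the hierarchy, so the prediction
    can never pair e.g. 'cropland' with 'settlement'."""
    best, best_score = None, -np.inf
    for fine, coarse in hierarchy.items():
        score = coarse_scores[coarse] + fine_scores[fine]   # log-scores add
        if score > best_score:
            best, best_score = (coarse, fine), score
    return best

# Illustrative per-level log-scores from the two classification heads.
coarse = {"settlement": -0.2, "agriculture": -1.8}
fine = {"residential": -0.5, "industrial": -2.0, "cropland": -0.1, "pasture": -3.0}
pred = joint_optimization(coarse, fine)   # ('settlement', 'residential')
```

Note that 'cropland' has the best fine-level score in isolation, but its joint score with 'agriculture' loses to the ('settlement', 'residential') tuple; that is exactly the cross-level consistency JO enforces.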
Record number: A2021-370 Author affiliation: non-IGN Theme: GEOMATICS/IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.04.022 Online publication date: 13/05/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.04.022 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97774
in ISPRS Journal of photogrammetry and remote sensing > vol 177 (July 2021) . - pp 38 - 56 [article]

Other documents in this category:
Improving human mobility identification with trajectory augmentation / Fan Zhou in Geoinformatica, vol 25 n° 3 (July 2021)
Multisensor data fusion for cloud removal in global and all-season Sentinel-2 imagery / Patrick Ebel in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
SemiCDNet: A semisupervised convolutional neural network for change detection in high resolution remote-sensing images / Daifeng Peng in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
Direct analysis in real-time (DART) time-of-flight mass spectrometry (TOFMS) of wood reveals distinct chemical signatures of two species of Afzelia / Peter Kitin in Annals of Forest Science, vol 78 n° 2 (June 2021)
Semantic hierarchy emerges in deep generative representations for scene synthesis / Ceyuan Yang in International journal of computer vision, vol 129 n° 5 (May 2021)
Graph convolutional autoencoder model for the shape coding and cognition of buildings in maps / Xiongfeng Yan in International journal of geographical information science IJGIS, vol 35 n° 3 (March 2021)
Amélioration des résolutions spatiale et spectrale d'images satellitaires par réseaux antagonistes / Anaïs Gastineau (2021)