Descriptor
Termes IGN > computer science > artificial intelligence > machine learning > unsupervised learning > generative adversarial network
generative adversarial network
Documents available in this category (30)
Unsupervised denoising for satellite imagery using wavelet directional cycleGAN / Shaoyang Kong in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: Unsupervised denoising for satellite imagery using wavelet directional cycleGAN
Document type: Article/Communication
Authors: Shaoyang Kong; Cheng Hu; Rui Wang; et al.
Publication year: 2021
Pages: pp 6573 - 6585
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Radar image processing and applications
[Termes IGN] unsupervised learning
[Termes IGN] deep learning
[Termes IGN] unsupervised classification
[Termes IGN] noise filtering
[Termes IGN] radar image
[Termes IGN] Insecta
[Termes IGN] radar polarimetry
[Termes IGN] generative adversarial network
[Termes IGN] wavelet transform
Abstract: (author) The measurement of insect radar cross section (RCS) is a prerequisite for studies such as the quantitative estimation of insect population density and the identification of insects using entomological radar. In this article, we established a multiband polarimetric RCS measurement system in a microwave anechoic chamber. The targets' range profiles at different frequencies are obtained from a stepped-frequency continuous wave, while clutter elimination and polarimetric calibration are applied to reduce measurement error. The multifrequency (X-/Ku-/Ka-band) polarimetric RCSs of 169 insects belonging to 21 species were measured and reported, the first systematic presentation of the multifrequency polarimetric RCSs of insects. The masses of the specimens range from 25.6 to 964 mg, and their ventral-aspect RCSs range from −57.47 to −32.17 dBsm at X-band, from −48.27 to −33.87 dBsm at Ku-band, and from −69.76 to −36.40 dBsm at Ka-band. For small insects (less than 300 mg), the HH-polarization RCS increases rapidly with frequency at X-band and fluctuates with frequency at Ku-band, while the VV-polarization RCS increases monotonically with frequency at X- and Ku-band. For larger insects, the HH-polarization RCS decreases slowly with frequency at X-band and fluctuates with frequency at Ku-band, while the VV-polarization RCS increases with frequency, reaches a maximum, and finally fluctuates. At Ka-band, the measured polarization RCS versus frequency curves are smooth and all show similar variation. The measurement results verify the effectiveness and accuracy of the established system.
Record number: A2021-631
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3025601
Online publication date: 08/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3025601
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98281
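The stepped-frequency range-profile synthesis mentioned in the abstract can be sketched in a few lines: each frequency step records the target's complex response, and an inverse FFT over the frequency axis yields the range profile. This is a minimal illustration only; the start frequency, step size, step count, and target range below are illustrative assumptions, not values from the paper.

```python
import numpy as np

c = 3e8                       # speed of light, m/s
f0, df, n = 9e9, 20e6, 128    # start frequency, step size, number of steps (X-band, assumed)
freqs = f0 + df * np.arange(n)

r_target = 3.0                # hypothetical point-target range inside the chamber, m
# Complex response of an ideal point target: two-way phase delay per frequency
response = np.exp(-1j * 4 * np.pi * freqs * r_target / c)

# Inverse FFT over frequency gives the range profile
profile = np.abs(np.fft.ifft(response))
dr = c / (2 * n * df)         # range-bin size set by the total swept bandwidth
peak_range = np.argmax(profile) * dr
```

The peak of the profile falls in the range bin nearest the target, with resolution `dr` fixed by the swept bandwidth `n * df`.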
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6573 - 6585 [article]

Improving human mobility identification with trajectory augmentation / Fan Zhou in Geoinformatica, vol 25 n° 3 (July 2021)
[article]
Title: Improving human mobility identification with trajectory augmentation
Document type: Article/Communication
Authors: Fan Zhou; Ruiyang Yin; Goce Trajcevski; Kunpeng Zhang; et al.
Publication year: 2021
Pages: pp 453 - 483
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Geomatics
[Termes IGN] route
[Termes IGN] human mobility
[Termes IGN] digital movement model
[Termes IGN] generative adversarial network
[Termes IGN] convolutional neural network
[Termes IGN] recurrent neural network
[Termes IGN] user
Abstract: (author) Many location-based social network (LBSN) applications, such as customized point-of-interest (POI) recommendation, preference-based trip planning, and travel time estimation, involve the important task of understanding human trajectory patterns. In particular, identifying and linking trajectories to the users who generate them – a problem called Trajectory-User Linking (TUL) – has become the focus of many recent works. TUL is usually studied as a multi-class classification problem and has gained recent attention because: (1) the number of labels/classes (i.e., users) is far larger than the number of motion patterns among the various trajectories; and (2) location-based trajectory data, especially check-ins – i.e., events of reporting a location at a particular POI with known semantics – are often extremely sparse. To address these challenges, we introduce a Trajectory Generative Adversarial Network (TGAN) as an approach to learn users' motion patterns and location distributions, and eventually to identify human mobility. TGAN consists of two jointly trained neural networks playing a minimax game to (iteratively) optimize both components. The first is the generator, which learns a trajectory representation with a Recurrent Neural Network (RNN) based model, aiming to fit the underlying trajectory distribution of a particular individual and to generate synthetic trajectories with intrinsic invariance and global coherence. The second is the discriminator – a Convolutional Neural Network (CNN) based model that discriminates generated trajectories from real ones and provides guidance for training the generator. We demonstrate that these two models can be tuned together to improve TUL performance, achieving superior accuracy compared to existing approaches.
Record number: A2021-972
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1007/s10707-019-00378-7
Online publication date: 29/08/2019
Online: https://doi.org/10.1007/s10707-019-00378-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100390
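The minimax game between TGAN's generator and discriminator follows the standard GAN objective. The sketch below shows only that objective; where the paper uses an RNN generator over check-in sequences and a CNN discriminator, both are replaced here by hypothetical stand-in linear maps, and all sizes and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_w = rng.normal(size=4)           # stand-in discriminator weights
g_w = rng.normal(size=(4, 4))      # stand-in generator weights (noise -> "trajectory" features)

def discriminate(x):               # D(x): probability that x is a real trajectory
    return sigmoid(x @ d_w)

def generate(z):                   # G(z): synthetic trajectory feature vector
    return np.tanh(z @ g_w)

real = rng.normal(loc=1.0, size=(8, 4))   # stand-in real trajectory features
fake = generate(rng.normal(size=(8, 4)))

# Discriminator step maximizes log D(real) + log(1 - D(fake));
# generator step maximizes log D(fake) (the non-saturating form).
d_loss = -np.mean(np.log(discriminate(real)) + np.log(1 - discriminate(fake)))
g_loss = -np.mean(np.log(discriminate(fake)))
```

In the paper the two losses are minimized alternately so the generator and discriminator regularize each other; here they are only evaluated once to show their structure.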
in Geoinformatica > vol 25 n° 3 (July 2021) . - pp 453 - 483 [article]

Multisensor data fusion for cloud removal in global and all-season Sentinel-2 imagery / Patrick Ebel in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
[article]
Title: Multisensor data fusion for cloud removal in global and all-season Sentinel-2 imagery
Document type: Article/Communication
Authors: Patrick Ebel; Andrea Meraner; Michael Schmitt; et al.
Publication year: 2021
Pages: pp 5866 - 5878
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] cloud detection
[Termes IGN] multisensor data
[Termes IGN] Sentinel-MSI image
[Termes IGN] cloud
[Termes IGN] image reconstruction
[Termes IGN] generative adversarial network
Abstract: (author) The majority of optical observations acquired via spaceborne Earth imagery are affected by clouds. While there is considerable prior work on reconstructing cloud-covered information, previous studies are oftentimes confined to narrowly defined regions of interest, raising the question of whether an approach can generalize to a diverse set of observations acquired at variable cloud coverage or in different regions and seasons. We target the challenge of generalization by curating a large novel data set for training new cloud removal approaches and evaluate two recently proposed performance metrics of image quality and diversity. Our data set is the first publicly available one to contain a global sample of coregistered radar and optical observations, cloudy and cloud-free. Based on the observation that cloud coverage varies widely between clear skies and absolute coverage, we propose a novel model that can deal with either extreme and evaluate its performance on our proposed data set. Finally, we demonstrate the superiority of training models on real over synthetic data, underlining the need for a carefully curated data set of real observations. To facilitate future research, our data set is made available online.
Record number: A2021-529
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3024744
Online publication date: 02/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3024744
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97980
in IEEE Transactions on geoscience and remote sensing > Vol 59 n° 7 (July 2021) . - pp 5866 - 5878 [article]

Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space
Document type: Article/Communication
Authors: Min Wu; Xin Jin; Qian Jiang; et al.
Publication year: 2021
Pages: pp 1707 - 1729
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] deep learning
[Termes IGN] classification by convolutional neural network
[Termes IGN] color contrast
[Termes IGN] multiscale data
[Termes IGN] color image
[Termes IGN] RGB image
[Termes IGN] gray level (image)
[Termes IGN] generative adversarial network
Abstract: (author) Image colorization is used to colorize gray-level or single-channel images, a significant and challenging task in image processing, especially for remote sensing images. This paper proposes a new method for colorizing remote sensing images based on a deep convolutional generative adversarial network (DCGAN). The generator is a symmetrical structure built on the auto-encoder principle, into which a specially designed multi-scale convolutional module is introduced; the generator can thus retain more image features during up-sampling and down-sampling. Meanwhile, the discriminator uses an 18-layer residual network that can compete with the generator, so that the two can effectively optimize each other. In the proposed method, a color space transformation first converts remote sensing images from RGB to YUV. The Y channel (a gray-level image) is then used as the input of the neural network to predict the U and V channels. Finally, the predicted U and V channels are concatenated with the original Y channel into a full YUV image, which is transformed back into RGB space to obtain the final color image. Experiments testing the performance of different image colorization methods show that the proposed method performs well in both visual quality and objective indices on the colorization of remote sensing images.
Record number: A2021-540
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01933-2
Online publication date: 28/08/2020
Online: https://doi.org/10.1007/s00371-020-01933-2
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98018
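The YUV pipeline described in the abstract (RGB to YUV, Y channel in, U/V channels predicted, recombine, back to RGB) can be sketched as below. The `predict_uv` function is a hypothetical stand-in for the trained DCGAN generator, and the conversion matrix uses the common BT.601-style luma coefficients rather than anything stated in the paper.

```python
import numpy as np

# RGB -> YUV with standard luma weights (Y = 0.299 R + 0.587 G + 0.114 B)
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)   # exact inverse, so the round trip is lossless

def rgb_to_yuv(img):               # img: (H, W, 3) floats in [0, 1]
    return img @ RGB2YUV.T

def yuv_to_rgb(yuv):
    return yuv @ YUV2RGB.T

def predict_uv(y):                 # placeholder for the generator: zero chrominance
    return np.zeros(y.shape + (2,))

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))        # toy "remote sensing" patch
yuv = rgb_to_yuv(img)
y = yuv[..., 0]                    # gray-level network input
uv = predict_uv(y)                 # network output: predicted U and V channels
colorized = yuv_to_rgb(np.concatenate([y[..., None], uv], axis=-1))
```

Working in YUV lets the network predict only the two chrominance channels while the luminance of the input is preserved exactly in the output.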
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1707 - 1729 [article]

SemiCDNet: A semisupervised convolutional neural network for change detection in high resolution remote-sensing images / Daifeng Peng in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
[article]
Title: SemiCDNet: A semisupervised convolutional neural network for change detection in high resolution remote-sensing images
Document type: Article/Communication
Authors: Daifeng Peng; Lorenzo Bruzzone; Yongjun Zhang; et al.
Publication year: 2021
Pages: pp 5891 - 5906
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] building
[Termes IGN] classification by convolutional neural network
[Termes IGN] change detection
[Termes IGN] entropy
[Termes IGN] feature extraction
[Termes IGN] high-resolution image
[Termes IGN] generative adversarial network
[Termes IGN] image segmentation
[Termes IGN] semantic segmentation
Abstract: (author) Change detection (CD) is one of the main applications of remote sensing. With the increasing popularity of deep learning, most recent CD methods have introduced deep learning techniques to increase accuracy and automation over traditional methods. However, supervised CD methods need a large amount of labeled data to train deep convolutional networks with millions of parameters, and such labeled data are difficult to acquire for CD tasks. To address this limitation, a novel semisupervised convolutional network for CD (SemiCDNet) is proposed based on a generative adversarial network (GAN). First, both the labeled and the unlabeled data are input into the segmentation network to produce initial predictions and entropy maps. Then, to exploit the potential of the unlabeled data, two discriminators are adopted to enforce the consistency of the feature distributions of segmentation maps and entropy maps between the labeled and unlabeled data. During this competitive training, the generator is continuously regularized by the unlabeled information, improving its generalization capability. The effectiveness and reliability of the proposed method are verified on two high-resolution remote sensing data sets. Extensive experimental results demonstrate the superiority of the proposed method over other state-of-the-art approaches.
Record number: A2021-530
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3011913
Online publication date: 06/08/2020
Online: https://doi.org/10.1109/TGRS.2020.3011913
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97986
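The entropy maps used in the scheme above are just per-pixel Shannon entropies of the segmentation network's class probabilities: low entropy where the network is confident, high entropy where it is not, which is where the discriminators' consistency signal matters. A minimal sketch, with illustrative shapes and a two-class (change / no-change) output assumed:

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entropy_map(probs, eps=1e-12):
    # Per-pixel Shannon entropy: -sum_c p_c * log(p_c)
    return -(probs * np.log(probs + eps)).sum(axis=-1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8, 2))   # (H, W, classes): stand-in network output
probs = softmax(logits)
ent = entropy_map(probs)              # (H, W); 0 = confident, ln(2) = maximally uncertain
```

For two classes the entropy is bounded by ln 2 ≈ 0.693, reached when the network predicts 50/50.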
in IEEE Transactions on geoscience and remote sensing > Vol 59 n° 7 (July 2021) . - pp 5891 - 5906 [article]

Semantic hierarchy emerges in deep generative representations for scene synthesis / Ceyuan Yang in International journal of computer vision, vol 129 n° 5 (May 2021)
Amélioration des résolutions spatiale et spectrale d'images satellitaires par réseaux antagonistes / Anaïs Gastineau (2021)
Generative adversarial networks to generalise urban areas in topographic maps / Azelle Courtial (2021)
Learning disentangled representations of satellite image time series in a weakly supervised manner / Eduardo Hugo Sanchez (2021)
Spectral variability in hyperspectral unmixing: Multiscale, tensor, and neural network-based approaches / Ricardo Augusto Borsoi (2021)
Understanding the role of individual units in a deep neural network / David Bau in Proceedings of the National Academy of Sciences of the United States of America PNAS, vol 117 n° 48 (1 December 2020)