Descriptor
IGN terms > imagery > digital image (image numérique)
Synonym(s): raster-mode image (image en mode maillé)
Documents available in this category (2138)
Detail injection-based deep convolutional neural networks for pansharpening / Liang-Jian Deng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: Detail injection-based deep convolutional neural networks for pansharpening
Document type: Article/Communication
Authors: Liang-Jian Deng; Gemine Vivone; Cheng Jin; et al.
Year of publication: 2021
Pages: pp 6995 - 7010
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] multiresolution analysis
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] low-resolution image
[IGN terms] multiband image
[IGN terms] panchromatic image
[IGN terms] image injection
[IGN terms] nonlinear model
[IGN terms] pansharpening (image fusion)
Abstract (authors): The fusion of high spatial resolution panchromatic (PAN) data with simultaneously acquired multispectral (MS) data of lower spatial resolution is a widely studied problem, often called pansharpening. In this article, we exploit the combination of machine learning techniques and fusion schemes introduced to address the pansharpening problem. In particular, deep convolutional neural networks (DCNNs) are proposed to solve this issue. These are first combined with the traditional component substitution and multiresolution analysis fusion schemes in order to estimate the nonlinear injection models that rule the combination of the upsampled low-resolution MS image with the details extracted under the two philosophies. Furthermore, inspired by these two approaches, we also developed another DCNN for pansharpening. It is fed by the direct difference between the PAN image and the upsampled low-resolution MS image. Extensive experiments conducted at both reduced and full resolutions demonstrate that this latter convolutional neural network outperforms both the other detail injection-based proposals and several state-of-the-art pansharpening methods.
Record number: A2021-639
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3031366
Online: https://doi.org/10.1109/TGRS.2020.3031366
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98293
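The detail-injection formulation that the abstract above generalizes with DCNNs can be sketched in a few lines of NumPy. This is a generic textbook sketch, not the paper's networks: the mean-based synthetic intensity and the covariance-based gains stand in for the nonlinear injection models that the paper's DCNNs learn.

```python
import numpy as np

def detail_injection_pansharpen(ms_up, pan, gains=None):
    """Linear detail injection: fused_b = MS_b + g_b * (PAN - I).

    ms_up : (H, W, B) multispectral image upsampled to the PAN grid
    pan   : (H, W) panchromatic image
    gains : optional (B,) per-band injection gains; if None, use
            covariance-based estimates (a classic component-substitution choice)
    """
    intensity = ms_up.mean(axis=2)      # simple synthetic intensity component
    details = pan - intensity           # spatial details to inject
    if gains is None:
        flat_i = intensity.ravel()
        gains = np.array([
            np.cov(ms_up[..., b].ravel(), flat_i)[0, 1] / np.var(flat_i, ddof=1)
            for b in range(ms_up.shape[2])
        ])
    return ms_up + details[..., None] * gains

# Tiny synthetic example: a 4x4 PAN image and a 2-band MS image.
rng = np.random.default_rng(0)
ms = rng.random((4, 4, 2))
pan = ms.mean(axis=2) + 0.1 * rng.random((4, 4))
fused = detail_injection_pansharpen(ms, pan)
```

With all gains set to zero the MS input is returned unchanged, which makes the injection term easy to verify in isolation.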
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6995 - 7010 [article]

CNN-based RGB-D salient object detection: Learn, select, and fuse / Hao Chen in International journal of computer vision, vol 129 n° 7 (July 2021)
[article]
Title: CNN-based RGB-D salient object detection: Learn, select, and fuse
Document type: Article/Communication
Authors: Hao Chen; Yongjian Deng; Guosheng Lin
Year of publication: 2021
Pages: pp 2076 - 2096
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] hierarchical approach
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] RGB image
[IGN terms] depth
[IGN terms] saliency
[IGN terms] semantic segmentation
Abstract (authors): The goal of this work is to present a systematic solution for RGB-D salient object detection that addresses the following three aspects within a unified framework: modal-specific representation learning, complementary cue selection, and cross-modal complement fusion. To learn discriminative modal-specific features, we propose a hierarchical cross-modal distillation scheme, in which we use the progressive predictions from the well-learned source modality to supervise the learning of feature hierarchies and inference in the new modality. To better select complementary cues, we formulate a residual function to adaptively incorporate complements from the paired modality. Furthermore, a top-down fusion structure is constructed for sufficient cross-modal cross-level interactions. The experimental results demonstrate the effectiveness of the proposed cross-modal distillation scheme in learning from a new modality, the advantages of the proposed multi-modal fusion pattern in selecting and fusing cross-modal complements, and the generalization of the proposed designs to different tasks.
Record number: A2021-697
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-021-01452-0
Online publication date: 05/05/2021
Online: https://doi.org/10.1007/s11263-021-01452-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98532
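The "residual function" for adaptive complement selection described in the abstract above can be illustrated with a generic gated-residual sketch in NumPy. The sigmoid gate and its parameters are illustrative stand-ins for the paper's learned selection layers, not its actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_complement_fusion(feat_rgb, feat_depth, w_gate, b_gate):
    """Keep the RGB feature as the identity path and add a gated residual
    that selects, per position and channel, how much of the paired depth
    feature to incorporate as a complement.

    feat_rgb, feat_depth : (H, W, C) feature maps from the two modalities
    w_gate, b_gate       : (C,) gate parameters (stand-ins for learned weights)
    """
    gate = sigmoid(feat_depth * w_gate + b_gate)  # selection values in (0, 1)
    return feat_rgb + gate * feat_depth           # identity + selected complement

# With zero gate parameters the gate is sigmoid(0) = 0.5 everywhere, so
# exactly half of the depth feature is added to the RGB feature.
fused = residual_complement_fusion(
    np.ones((2, 2, 3)), np.ones((2, 2, 3)), np.zeros(3), np.zeros(3))
```

The identity path is the design point: when the gate closes, the RGB stream passes through untouched, so the depth modality can only add information, not overwrite it.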
in International journal of computer vision > vol 129 n° 7 (July 2021) . - pp 2076 - 2096 [article]

Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space
Document type: Article/Communication
Authors: Min Wu; Xin Jin; Qian Jiang; et al.
Year of publication: 2021
Pages: pp 1707 - 1729
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] color contrast
[IGN terms] multiscale data
[IGN terms] color image
[IGN terms] RGB image
[IGN terms] gray level (image)
[IGN terms] generative adversarial network
Abstract (authors): Image colorization is used to colorize gray-level or single-channel images, a significant and challenging task in image processing, especially for remote sensing images. This paper proposes a new method for colorizing remote sensing images based on a deep convolutional generative adversarial network (DCGAN). The adopted generator model is a symmetrical structure built on the auto-encoder principle, and a specially designed multi-scale convolutional module is introduced into the generator. The proposed generator thus enables the whole model to retain more image features during up-sampling and down-sampling. Meanwhile, the discriminator uses a ResNet-18 that can compete with the generator, so that the generator and discriminator can effectively optimize each other. In the proposed method, a color space transformation is first applied to convert remote sensing images from RGB to YUV. Then, the Y channel (a gray-level image) is used as the input of the neural network model to predict the UV channels. Finally, the predicted UV channels are concatenated with the original Y channel into a whole YUV image that is then transformed back into RGB space to obtain the final color image. Experiments are conducted to test the performance of different image colorization methods, and the results show that the proposed method performs well in both visual quality and objective indexes for the colorization of remote sensing images.
Record number: A2021-540
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01933-2
Online publication date: 28/08/2020
Online: https://doi.org/10.1007/s00371-020-01933-2
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98018
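The color-space plumbing around the generator in the abstract above (RGB to YUV, predict UV from Y, recombine, YUV back to RGB) can be sketched with a standard BT.601-style transform. The coefficients are common textbook values, and the `predict_uv` argument is a placeholder for the trained generator, not the paper's exact pipeline.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601-style luma plus scaled color-difference channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(yuv):
    """Exact inverse of rgb_to_yuv."""
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def colorize(gray_y, predict_uv):
    """Rebuild an RGB image from a gray-level Y channel and any UV
    predictor (here a placeholder for the trained generator)."""
    uv = predict_uv(gray_y)
    return yuv_to_rgb(np.concatenate([gray_y[..., None], uv], axis=-1))

# Round trip on a random RGB image: converting to YUV and back is lossless,
# which is why the paper can safely predict only the two chroma channels.
img = np.random.default_rng(1).random((4, 4, 3))
roundtrip = yuv_to_rgb(rgb_to_yuv(img))
```

Predicting only UV and reusing the input Y is the key design choice: the luma structure of the output is guaranteed to match the gray-level input.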
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1707 - 1729 [article]

Semantic-aware label placement for augmented reality in street view / Jianqing Jia in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Semantic-aware label placement for augmented reality in street view
Document type: Article/Communication
Authors: Jianqing Jia; Semir Elezovikj; Heng Fan; et al.
Year of publication: 2021
Pages: pp 1805 - 1819
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] Street View image
[IGN terms] semantic information
[IGN terms] optimization (mathematics)
[IGN terms] point of interest
[IGN terms] augmented reality
[IGN terms] saliency
[IGN terms] urban scene
[IGN terms] semantic segmentation
Abstract (authors): In an augmented reality (AR) application, placing labels so that they are clear and readable without occluding critical information from the real world can be a challenging problem. This paper introduces a label placement technique for AR in street view scenarios. We propose a semantic-aware, task-specific label placement method that identifies potentially important image regions through a novel feature map, which we refer to as a guidance map. Given an input image, its saliency information, semantic information and a task-specific importance prior are integrated into the guidance map for our labeling task. To learn the task prior, we created a label placement dataset with users' labeling preferences and use it for evaluation. Our solution encodes the constraints for placing labels in an optimization problem to obtain the final label layout, and the labels are placed in appropriate positions to reduce the chance of overlaying important real-world objects in street view AR scenarios. The experimental validation clearly shows the benefits of our method over previous solutions in AR street view navigation and similar applications.
Record number: A2021-542
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01939-w
Online publication date: 02/08/2020
Online: https://doi.org/10.1007/s00371-020-01939-w
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98022
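In its simplest form, the optimization described in the abstract above reduces to choosing the label position whose footprint covers the least important area of the guidance map. The brute-force, single-label sketch below uses an integral image for constant-time rectangle sums; it illustrates the idea only, while the paper's full formulation handles multiple labels and additional layout constraints.

```python
import numpy as np

def place_label(guidance, label_h, label_w):
    """Return the (row, col) top-left corner whose label_h x label_w
    footprint covers the smallest total guidance (importance) value."""
    H, W = guidance.shape
    # Integral image padded with a zero row/column: each rectangle sum is O(1).
    ii = np.pad(guidance.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    best_cost, best_pos = np.inf, None
    for r in range(H - label_h + 1):
        for c in range(W - label_w + 1):
            cost = (ii[r + label_h, c + label_w] - ii[r, c + label_w]
                    - ii[r + label_h, c] + ii[r, c])
            if cost < best_cost:
                best_cost, best_pos = cost, (r, c)
    return best_pos

# Demo: the top half of a 6x6 guidance map is "important" (salient objects),
# so a 2x2 label should land in the empty bottom half.
g = np.zeros((6, 6))
g[:3, :] = 1.0
pos = place_label(g, 2, 2)
```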
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1805 - 1819 [article]

Semantic unsupervised change detection of natural land cover with multitemporal object-based analysis on SAR images / Donato Amitrano in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
[article]
Title: Semantic unsupervised change detection of natural land cover with multitemporal object-based analysis on SAR images
Document type: Article/Communication
Authors: Donato Amitrano; Raffaella Guida; Pasquale Iervolino
Year of publication: 2021
Pages: pp 5494 - 5514
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] object-based image analysis
[IGN terms] forest biomass
[IGN terms] canopy
[IGN terms] land cover change
[IGN terms] fuzzy classification
[IGN terms] unsupervised classification
[IGN terms] deforestation
[IGN terms] change detection
[IGN terms] multitemporal image
[IGN terms] speckled radar image
[IGN terms] RGB image
[IGN terms] Sentinel SAR image
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] image segmentation
[IGN terms] image thresholding
[IGN terms] image texture
Abstract (authors): Change detection is one of the most addressed topics in the remote sensing community. When it is performed on synthetic aperture radar images, the most critical issues are: 1) the labeling of the identified changing patterns, and 2) the limited robustness of classic pixel-based approaches built on threshold segmentation of an appropriate change index, which tend to fail when multiple changes are present in the study area. In this work, a new methodology for unsupervised change detection in vegetation canopy is presented. It overcomes these limitations by exploiting multitemporal geographical object-based image analysis, with the aim of letting the intrinsic semantics of the data emerge and directing the processing toward the identification of precise classes of changes through dictionary-based preclassification and fuzzy combination of class-specific information layers. The proposed methodology has been tested in ten different experiments covering agriculture and clear-cut deforestation applications. The results, validated against literature methods, highlight the superiority of the proposed approach, which was quantitatively assessed in terms of standard classification quality parameters. In the agriculture experiments, it allowed an average increase in detection accuracy of about 11% with respect to the best performing literature method, with an increase in the false alarm rate on the order of 0.5%. In the case of deforestation, the registered detection accuracy was comparable to that achieved in the literature, while the most significant benefit was the reduction by more than one-third of the number of detected false deforestation patterns. Overall, the main characteristics of the proposed architecture are its robustness and its lack of any supervision, which make it very well suited for operational scenarios.
Record number: A2021-528
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3029841
Online publication date: 22/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3029841
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97978
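The pixel-versus-object distinction drawn in the abstract above can be made concrete with a minimal object-level sketch: aggregate a pixel-wise change index over segmented objects and decide per object. The single threshold used here is exactly the classic step the paper replaces with dictionary-based preclassification and fuzzy combination, so treat this as the baseline it improves upon, not as the paper's algorithm.

```python
import numpy as np

def object_level_change(index_map, labels, threshold=0.5):
    """Flag each segmented object whose mean change index exceeds a threshold.

    index_map : (H, W) pixel-wise change index, e.g. a normalized
                log-ratio between two SAR acquisition dates
    labels    : (H, W) integer segment ids from any segmentation
    Returns {segment_id: True if the object is flagged as changed}.
    """
    return {
        int(seg): float(index_map[labels == seg].mean()) > threshold
        for seg in np.unique(labels)
    }

# Demo: two segments; the top one has a high mean change index, the
# bottom one a low mean, so only the top object is flagged.
idx = np.array([[0.9, 0.8],
                [0.1, 0.2]])
seg = np.array([[0, 0],
                [1, 1]])
flags = object_level_change(idx, seg)
```

Averaging the index over objects rather than thresholding single pixels is what buys robustness to speckle: isolated noisy pixels no longer flip the decision.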
in IEEE Transactions on geoscience and remote sensing > Vol 59 n° 7 (July 2021) . - pp 5494 - 5514 [article]

Further results in this category:
- Spatio-temporal-spectral observation model for urban remote sensing / Zhenfeng Shao in Geo-spatial Information Science, vol 24 n° 3 (July 2021)
- Target-constrained interference-minimized band selection for hyperspectral target detection / Xiaodi Shang in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
- Forest cover mapping and Pinus species classification using very high-resolution satellite images and random forest / Laura Alonso-Martinez in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
- Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images / Guillemette Fonteix in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
- Towards efficient indoor/outdoor registration using planar polygons / Rahima Djahel in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
- An automatic workflow for orientation of historical images with large radiometric and geometric differences / Ferdinand Maiwald in Photogrammetric record, vol 36 n° 174 (June 2021)
- An incremental isomap method for hyperspectral dimensionality reduction and classification / Yi Ma in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)
- Application of feature selection methods and machine learning algorithms for saltmarsh biomass estimation using Worldview-2 imagery / Sikdar M. M. Rasel in Geocarto international, vol 36 n° 10 ([01/06/2021])
- Evaluating the performance of hyperspectral leaf reflectance to detect water stress and estimation of photosynthetic capacities / Jingjing Zhou in Remote sensing, vol 13 n° 11 (June-1 2021)
- Multiscale context-aware ensemble deep KELM for efficient hyperspectral image classification / Bobo Xi in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
- Resolution enhancement for large-scale land cover mapping via weakly supervised deep learning / Qiutong Yu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)
- Semantic signatures for large-scale visual localization / Li Weng in Multimedia tools and applications, vol 80 n° 15 (June 2021)
- Uncertainty management for robust probabilistic change detection from multi-temporal Geoeye-1 imagery / Mahmoud Salah in Applied geomatics, vol 13 n° 2 (June 2021)
- Spherically optimized RANSAC aided by an IMU for Fisheye Image Matching / Anbang Liang in Remote sensing, vol 13 n°10 (May-2 2021)
- Learning from multimodal and multitemporal earth observation data for building damage mapping / Bruno Adriano in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021)
- Lifting scheme-based sparse density feature extraction for remote sensing target detection / Ling Tian in Remote sensing, vol 13 n° 9 (May-1 2021)
- Performance evaluation of artificial neural networks for natural terrain classification / Perpetual Hope Akwensi in Applied geomatics, vol 13 n° 1 (May 2021)
- Assessing forest phenology: A multi-scale comparison of near-surface (UAV, spectral reflectance sensor, PhenoCam) and satellite (MODIS, Sentinel-2) remote sensing / Shangharsha Thapa in Remote sensing, vol 13 n° 8 (April-2 2021)
- DEM resolution influences on peak flow prediction: a comparison of two different based DEMs through various rescaling techniques / Ali H. Ahmed Suliman in Geocarto international, vol 36 n° 7 ([15/04/2021])
- Unsupervised multi-level feature extraction for improvement of hyperspectral classification / Qiaoqiao Sun in Remote sensing, vol 13 n° 8 (April-2 2021)