Descriptor
Termes IGN > imagerie > image numérique > image optique > image panchromatique
image panchromatique
Documents available in this category (150)



HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion / Kun Li in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
[article]
Title: HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion
Document type: Article/Communication
Authors: Kun Li, Author; Wei Zhang, Author; Dian Yu, Author; Xin Tian, Author
Year of publication: 2022
Pages: pp 30 - 44
General note: Bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] fusion d'images
[Termes IGN] image à haute résolution
[Termes IGN] image floue
[Termes IGN] image hyperspectrale
[Termes IGN] image multibande
[Termes IGN] image panchromatique
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] réseau neuronal profond
Abstract: (author) Traditional approaches mainly fuse a hyperspectral image (HSI) with a high-resolution multispectral image (MSI) to improve the spatial resolution of the HSI. However, such improvement in the spatial resolution of HSIs is still limited because the spatial resolution of MSIs remains low. To further improve the spatial resolution of HSIs, we propose HyperNet, a deep network for the fusion of HSI, MSI, and panchromatic image (PAN), which effectively injects the spatial details of an MSI and a PAN into an HSI while preserving the spectral information of the HSI. Thus, we design HyperNet on the basis of a uniform fusion strategy to solve the problem of complex fusion of three types of sources (i.e., HSI, MSI, and PAN). In particular, the spatial details of the MSI and the PAN are extracted by multiple specially designed multiscale-attention-enhance blocks, in which multi-scale convolution is used to adaptively extract features from different receptive fields, and two attention mechanisms are adopted to enhance the representation capability of features along the spectral and spatial dimensions, respectively. Through the capability of feature reuse and interaction in a specially designed dense-detail-insertion block, the previously extracted features are subsequently injected into the HSI according to the unidirectional feature propagation among the layers of dense connection. Finally, we construct an efficient loss function by integrating the multi-scale structural similarity index with the norm, which drives HyperNet to generate high-quality results with a good balance between spatial and spectral qualities. Extensive experiments on simulated and real data sets qualitatively and quantitatively demonstrate the superiority of HyperNet over other state-of-the-art methods.
Record number: A2022-272
Authors' affiliation: not IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.04.001
Online publication date: 07/04/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.04.001
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100461
in ISPRS Journal of photogrammetry and remote sensing > vol 188 (June 2022) . - pp 30 - 44 [article]
Copies (3)
Barcode: 081-2022061 - Call number: SL - Type: Journal - Location: Centre de documentation - Section: Revues en salle - Availability: Available
Barcode: 081-2022063 - Call number: DEP-RECP - Type: Journal - Location: LaSTIG - Section: Dépôt en unité - Availability: Not for loan
Barcode: 081-2022062 - Call number: DEP-RECF - Type: Journal - Location: Nancy - Section: Dépôt en unité - Availability: Not for loan
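As a concrete but simplified illustration of the loss described in the abstract above, the sketch below balances a structural-similarity term against an absolute-error term. It uses a single-scale SSIM for brevity (the paper combines the multi-scale variant with a norm term), and the window size, constants and weighting `alpha` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM over a uniform window; inputs in [0, 1], shape (N, C, H, W)."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def fusion_loss(fused, reference, alpha=0.8):
    """Composite loss: structural term plus absolute-error term (weights assumed)."""
    return alpha * (1.0 - ssim(fused, reference)) + (1.0 - alpha) * F.l1_loss(fused, reference)
```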
Research on automatic identification method of terraces on the Loess plateau based on deep transfer learning / Mingge Yu in Remote sensing, vol 14 n° 10 (May-2 2022)
[article]
Title: Research on automatic identification method of terraces on the Loess plateau based on deep transfer learning
Document type: Article/Communication
Authors: Mingge Yu, Author; Xiaoping Rui, Author; Weiyi Xie, Author; et al.
Year of publication: 2022
Pages: n° 2446
General note: Bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] Chine
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection automatique
[Termes IGN] échantillonnage
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à haute résolution
[Termes IGN] image panchromatique
[Termes IGN] image Worldview
[Termes IGN] modèle de simulation
[Termes IGN] surface cultivée
[Termes IGN] terrasse
Abstract: (author) Rapid, accurate extraction of terraces from high-resolution images is of great significance for promoting the application of remote-sensing information in soil and water conservation planning and monitoring. To address the fact that deep learning requires a large number of labeled samples to achieve good accuracy, this article proposes an automatic identification method for terraces that can achieve high precision from small sample datasets. Firstly, a terrace identification source model adapted to multiple data sources is trained on the WorldView-1 dataset. This model can be migrated to other types of images as a pre-trained model for terrace extraction. Secondly, to solve the small-sample problem, a deep transfer learning method for accurate pixel-level extraction of terraces from high-resolution remote-sensing images is proposed. Finally, to solve the problem of insufficient boundary information and splicing traces during prediction, a strategy of ignoring edges is proposed and a prediction model is constructed to further improve the accuracy of terrace identification. Three regions outside the sample area are randomly selected, and the OA, F1 score, and MIoU averages reach 93.12%, 91.40%, and 89.90%, respectively. The experimental results show that this method, based on deep transfer learning, can accurately extract terraced field surfaces and segment terraced field boundaries.
Record number: A2022-402
Authors' affiliation: not IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/rs14102446
Online publication date: 19/05/2022
Online: https://doi.org/10.3390/rs14102446
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100705
in Remote sensing > vol 14 n° 10 (May-2 2022) . - n° 2446 [article]
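To make the transfer-learning step described above concrete, the sketch below shows the generic freeze-and-fine-tune pattern in PyTorch: a segmentation network is initialised from a source model, its backbone is frozen, and only the head is trained on the small labelled target set. The torchvision FCN-ResNet50 architecture, the checkpoint path and the hyperparameters are illustrative assumptions; the authors' actual network and sampling strategy are not given in this record.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Two classes: terrace vs. background. The architecture is a stand-in, not the authors' model.
model = fcn_resnet50(num_classes=2)
# model.load_state_dict(torch.load("worldview1_terrace_source.pt"))  # hypothetical source weights

# Freeze the backbone so only the segmentation head adapts to the new sensor.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, masks):
    """One gradient step on a small labelled batch from the target imagery."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]      # (N, 2, H, W)
    loss = criterion(logits, masks)    # masks: (N, H, W) long tensor of class indices
    loss.backward()
    optimizer.step()
    return loss.item()
```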
A PCA-PD fusion method for change detection in remote sensing multi temporal images / Soltana Achour in Geocarto international, vol 37 n° 1 ([01/01/2022])
[article]
Title: A PCA-PD fusion method for change detection in remote sensing multi temporal images
Document type: Article/Communication
Authors: Soltana Achour, Author; Miloud Chikr Elmezouar, Author; Nasreddine Taleb, Author; et al.
Year of publication: 2022
Pages: pp 196 - 213
General note: Bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse en composantes principales
[Termes IGN] détection automatique
[Termes IGN] détection de changement
[Termes IGN] fusion de données
[Termes IGN] image multibande
[Termes IGN] image multitemporelle
[Termes IGN] image panchromatique
[Termes IGN] méthode statistique
[Termes IGN] seuillage d'image
Abstract: (author) In remote sensing, change detection based on image processing is one of the most important techniques for applications such as environment monitoring. To reach high performance, various fusion techniques are exploited using a combination of multi-temporal, multispectral, and panchromatic satellite images. Such images can be handled with simple statistical methods such as the Percent Difference (PD) technique and Principal Component Analysis (PCA). In this paper, an automatic change detection method derived from these two techniques is proposed and applied to multispectral and panchromatic images captured by a high-resolution optical satellite. The approach has two stages: the first fuses the different data, and the second detects the changes in the resulting images. The experimental results show the reasonable quantitative performance and the effectiveness of the proposed method for change detection: it automatically extracts most of the change information and obtains better results for most precision metrics, with an overall accuracy of up to 91% and a Kappa coefficient of up to 66%, compared to those obtained using the simple PD and PCA techniques.
Record number: A2022-048
Authors' affiliation: not IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1713228
Online publication date: 10/02/2020
Online: https://doi.org/10.1080/10106049.2020.1713228
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99441
in Geocarto international > vol 37 n° 1 [01/01/2022] . - pp 196 - 213 [article]
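The abstract above describes fusing two simple statistical change indicators, Percent Difference (PD) and PCA, then thresholding. The sketch below is a generic version of that idea for two co-registered multi-band acquisitions; the equal-weight fusion and the mean-plus-one-standard-deviation threshold are assumptions for illustration, not the authors' exact rule.

```python
import numpy as np

def _normalize(x):
    rng = x.max() - x.min()
    return (x - x.min()) / (rng if rng > 0 else 1.0)

def percent_difference(t1, t2, eps=1e-6):
    """Per-pixel percent difference between two co-registered acquisitions (H, W, bands)."""
    return np.abs(t2 - t1) / (np.abs(t1) + eps)

def pca_first_component(stack):
    """Project each pixel's band vector onto the first principal component."""
    flat = stack.reshape(-1, stack.shape[-1]).astype(np.float64)
    flat -= flat.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(flat, rowvar=False))
    return (flat @ eigvecs[:, -1]).reshape(stack.shape[:-1])   # largest-eigenvalue direction

def change_map(t1, t2, threshold=None):
    """Fuse PD and PCA evidence, then threshold to a binary change mask."""
    pd = _normalize(percent_difference(t1, t2).mean(axis=-1))        # average over bands
    pca = _normalize(np.abs(pca_first_component(np.abs(t2 - t1))))
    fused = 0.5 * (pd + pca)
    if threshold is None:
        threshold = fused.mean() + fused.std()                       # simple global threshold
    return fused > threshold
```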
Hyperspectral image fusion and multitemporal image fusion by joint sparsity / Han Pan in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 9 (September 2021)
[article]
Title: Hyperspectral image fusion and multitemporal image fusion by joint sparsity
Document type: Article/Communication
Authors: Han Pan, Author; Zhongliang Jing, Author; Henry Leung, Author; et al.
Year of publication: 2021
Pages: pp 7887 - 7900
General note: Bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] correction d'image
[Termes IGN] flou
[Termes IGN] fusion d'images
[Termes IGN] image hyperspectrale
[Termes IGN] image multitemporelle
[Termes IGN] image panchromatique
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] représentation parcimonieuse
Abstract: (author) Different image fusion systems have been developed to deal with the massive amounts of image data for different applications, such as remote sensing, computer vision, and environment monitoring. However, the generalizability and versatility of these fusion systems remain unknown. This article proposes an efficient regularization framework to achieve different kinds of fusion tasks, accounting for the spatiospectral and spatiotemporal variabilities of the fusion process. A joint minimization functional is developed by taking advantage of a composite regularizer that enforces joint sparsity in the gradient domain and the frame domain. The proposed composite regularizer is composed of Hessian Schatten-norm regularization and contourlet-based regularization terms. The resulting problems are solved by the alternating direction method of multipliers (ADMM). The effectiveness of the proposed method is validated in a variety of image fusion experiments: 1) hyperspectral (HS) and panchromatic image fusion; 2) HS and multispectral image fusion; 3) multitemporal image fusion (MIF); and 4) multi-image deblurring. Results show promising performance compared with state-of-the-art fusion methods.
Record number: A2021-649
Authors' affiliation: not IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3039046
Online publication date: 07/12/2020
Online: https://doi.org/10.1109/TGRS.2020.3039046
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98360
in IEEE Transactions on geoscience and remote sensing > Vol 59 n° 9 (September 2021) . - pp 7887 - 7900 [article]
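As background on the ADMM solver mentioned in the abstract above, here is a minimal ADMM loop for a plain l1-regularised least-squares problem. The paper's functional uses Hessian Schatten-norm and contourlet-frame regularisers rather than this simple lasso term, so the sketch only illustrates the variable splitting and the shrinkage step, not the authors' method.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_l1(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1
    (a stand-in for the composite joint-sparsity regularisers used in the article)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic data-fit subproblem
        z = soft_threshold(x + u, lam / rho)            # sparsity-inducing proximal step
        u = u + x - z                                   # dual update on the splitting constraint
    return z
```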
Detail injection-based deep convolutional neural networks for pansharpening / Liang-Jian Deng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: Detail injection-based deep convolutional neural networks for pansharpening
Document type: Article/Communication
Authors: Liang-Jian Deng, Author; Gemine Vivone, Author; Cheng Jin, Author; et al.
Year of publication: 2021
Pages: pp 6995 - 7010
General note: Bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse multirésolution
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image à basse résolution
[Termes IGN] image multibande
[Termes IGN] image panchromatique
[Termes IGN] injection d'image
[Termes IGN] modèle non linéaire
[Termes IGN] pansharpening (fusion d'images)
Abstract: (author) The fusion of high-spatial-resolution panchromatic (PAN) data with simultaneously acquired multispectral (MS) data of lower spatial resolution is a hot topic, often called pansharpening. In this article, we exploit the combination of machine learning techniques and fusion schemes introduced to address the pansharpening problem. In particular, deep convolutional neural networks (DCNNs) are proposed to solve this issue. These are first combined with the traditional component substitution and multiresolution analysis fusion schemes in order to estimate the nonlinear injection models that rule the combination of the upsampled low-resolution MS image with the details extracted following the two philosophies. Furthermore, inspired by these two approaches, we also developed another DCNN for pansharpening. This one is fed by the direct difference between the PAN image and the upsampled low-resolution MS image. Extensive experiments conducted both at reduced and full resolutions demonstrate that this latter convolutional neural network outperforms both the other detail injection-based proposals and several state-of-the-art pansharpening methods.
Record number: A2021-639
Authors' affiliation: not IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3031366
Online: https://doi.org/10.1109/TGRS.2020.3031366
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98293
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6995 - 7010 [article]
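For context, the detail-injection fusion schemes that the article's networks learn to generalise can be written in a few lines in their classical, hand-crafted form: upsample the MS image, extract the PAN high frequencies with a low-pass filter, and add them back with a gain. The uniform low-pass filter and fixed gain below are simplifying assumptions; the article replaces exactly this kind of injection model with learned DCNNs.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def detail_injection_pansharpen(ms, pan, ratio=4, gain=1.0):
    """Classical additive detail injection.
    ms  : (h, w, bands) low-resolution multispectral image
    pan : (h*ratio, w*ratio) panchromatic image on the target grid
    """
    ms = np.asarray(ms, dtype=float)
    pan = np.asarray(pan, dtype=float)
    # Upsample each MS band to the PAN grid (cubic spline interpolation).
    ms_up = np.stack([zoom(ms[..., k], ratio, order=3) for k in range(ms.shape[-1])], axis=-1)
    # PAN spatial details = PAN minus a low-pass version of itself.
    details = pan - uniform_filter(pan, size=2 * ratio + 1)
    # Inject the same details into every band with a fixed gain.
    return ms_up + gain * details[..., None]
```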
Pan-sharpening via multiscale dynamic convolutional neural network / Jianwen Hu in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening / Hao Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 172 (February 2021)
Amélioration des résolutions spatiale et spectrale d’images satellitaires par réseaux antagonistes / Anaïs Gastineau (2021)
A review of image fusion techniques for pan-sharpening of high-resolution satellite imagery / Farzaneh Dadrass Javan in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021)
Pansharpening: context-based generalized Laplacian pyramids by robust regression / Gemine Vivone in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Unsupervised classification of multispectral images embedded with a segmentation of panchromatic images using localized clusters / Ting Mao in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
Geometric accuracy improvement of WorldView‐2 imagery using freely available DEM data / Mateo Gašparović in Photogrammetric record, vol 34 n° 167 (September 2019)
Pan-sharpening via deep metric learning / Yinghui Xing in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
Estimating forest canopy cover in black locust (Robinia pseudoacacia L.) plantations on the loess plateau using random forest / Qingxia Zhao in Forests, vol 9 n° 10 (October 2018)