Descripteur
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > apprentissage profond > réseau neuronal artificiel > réseau neuronal profond
réseau neuronal profond
Documents disponibles dans cette catégorie (15)



HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion / Kun Li in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
[article]
Titre : HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion Type de document : Article/Communication Auteurs : Kun Li, Auteur ; Wei Zhang, Auteur ; Dian Yu, Auteur ; Xin Tian, Auteur Année de publication : 2022 Article en page(s) : pp 30 - 44 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] fusion d'images
[Termes IGN] image à haute résolution
[Termes IGN] image floue
[Termes IGN] image hyperspectrale
[Termes IGN] image multibande
[Termes IGN] image panchromatique
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] réseau neuronal profond
Résumé : (Auteur) Traditional approaches mainly fuse a hyperspectral image (HSI) with a high-resolution multispectral image (MSI) to improve the spatial resolution of the HSI. However, such improvement in the spatial resolution of HSIs is still limited because the spatial resolution of MSIs remains low. To further improve the spatial resolution of HSIs, we propose HyperNet, a deep network for the fusion of HSI, MSI, and panchromatic image (PAN), which effectively injects the spatial details of an MSI and a PAN into an HSI while preserving the spectral information of the HSI. We design HyperNet on the basis of a uniform fusion strategy to solve the problem of the complex fusion of three types of sources (i.e., HSI, MSI, and PAN). In particular, the spatial details of the MSI and the PAN are extracted by multiple specially designed multiscale-attention-enhance blocks, in which multi-scale convolution is used to adaptively extract features from different receptive fields, and two attention mechanisms are adopted to enhance the representation capability of features along the spectral and spatial dimensions, respectively. Through the capability of feature reuse and interaction in a specially designed dense-detail-insertion block, the previously extracted features are subsequently injected into the HSI according to the unidirectional feature propagation among the layers of dense connection. Finally, we construct an efficient loss function by integrating the multi-scale structural similarity index with the norm, which drives HyperNet to generate high-quality results with a good balance between spatial and spectral qualities. Extensive experiments on simulated and real data sets qualitatively and quantitatively demonstrate the superiority of HyperNet over other state-of-the-art methods.
Numéro de notice : A2022-272 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.04.001 Date de publication en ligne : 07/04/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.04.001 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100461
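The spectral attention mechanism described in the abstract can be illustrated with a minimal squeeze-and-excitation-style channel-attention sketch in NumPy. The function name, shapes, and random demo weights below are illustrative assumptions, not the authors' implementation of HyperNet's multiscale-attention-enhance block.

```python
import numpy as np

def channel_attention(features: np.ndarray, reduction: int = 2) -> np.ndarray:
    """Reweight each spectral channel of a (C, H, W) feature map.

    Global average pooling -> small two-layer bottleneck -> sigmoid gate,
    then scale each channel by its gate value.
    """
    c, h, w = features.shape
    rng = np.random.default_rng(0)  # fixed weights for a reproducible demo
    w1 = rng.standard_normal((c // reduction, c)) / np.sqrt(c)
    w2 = rng.standard_normal((c, c // reduction)) / np.sqrt(c // reduction)

    squeeze = features.mean(axis=(1, 2))          # (C,) global average pool
    excite = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ excite)))   # sigmoid per-channel weight
    return features * gate[:, None, None]

feats = np.ones((8, 4, 4))
out = channel_attention(feats)
print(out.shape)  # (8, 4, 4)
```

A spatial-attention branch works the same way with pooling over the channel axis instead; the paper combines both along with multi-scale convolutions.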
in ISPRS Journal of photogrammetry and remote sensing > vol 188 (June 2022) . - pp 30 - 44 [article]
A deep multi-modal learning method and a new RGB-depth data set for building roof extraction / Mehdi Khoshboresh Masouleh in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 10 (October 2021)
[article]
Titre : A deep multi-modal learning method and a new RGB-depth data set for building roof extraction Type de document : Article/Communication Auteurs : Mehdi Khoshboresh Masouleh, Auteur ; Reza Shah-Hosseini, Auteur Année de publication : 2021 Article en page(s) : pp 759 - 766 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] détection du bâti
[Termes IGN] données multisources
[Termes IGN] effet de profondeur cinétique
[Termes IGN] empreinte
[Termes IGN] extraction automatique
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image RVB
[Termes IGN] Indiana (Etats-Unis)
[Termes IGN] réseau neuronal convolutif
[Termes IGN] réseau neuronal profond
[Termes IGN] segmentation d'image
[Termes IGN] superpixel
[Termes IGN] toit
Résumé : (Auteur) This study tackles the challenge of building mapping in multi-modal remote sensing data by proposing a novel deep superpixel-wise convolutional neural network called DeepQuantized-Net, plus a new red, green, blue (RGB)-depth data set named IND. DeepQuantized-Net incorporates two practical ideas in segmentation: first, improving the object pattern by exploiting superpixels, rather than pixels, as the imaging unit in DeepQuantized-Net; second, reducing the computational cost. The generated data set includes 294 RGB-depth images (256 training images and 38 test images) from different locations in the state of Indiana in the U.S., with 1024 × 1024 pixels and a spatial resolution of 0.5 ft, covering different cities. The experimental results using the IND data set demonstrate that the mean F1 scores and the average Intersection over Union scores could increase by approximately 7.0% and 7.2%, respectively, compared to other methods. Numéro de notice : A2021-677 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.14358/PERS.21-00007R2 Date de publication en ligne : 01/10/2021 En ligne : https://doi.org/10.14358/PERS.21-00007R2 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98878
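"Superpixel-wise" classification means per-pixel predictions are aggregated within each superpixel so that the superpixel, not the pixel, is the labeling unit. A minimal majority-vote sketch in NumPy (the labels and segment ids below are toy inputs, not the IND data set or the authors' network):

```python
import numpy as np

def superpixel_vote(pixel_labels: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Assign every pixel the majority class of its superpixel.

    pixel_labels: (H, W) integer class map from a per-pixel classifier.
    segments:     (H, W) integer superpixel ids (e.g. from a SLIC-style method).
    """
    out = np.empty_like(pixel_labels)
    for sp in np.unique(segments):
        mask = segments == sp
        votes = np.bincount(pixel_labels[mask])  # count classes inside the superpixel
        out[mask] = votes.argmax()               # majority class wins
    return out

labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [1, 1, 1]])
segments = np.array([[0, 0, 1],
                     [0, 0, 1],
                     [1, 1, 1]])
print(superpixel_vote(labels, segments))  # [[0 0 1] [0 0 1] [1 1 1]]
```

Voting over superpixels both regularizes object boundaries and reduces the number of units to classify, which is the computational saving the abstract mentions.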
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 10 (October 2021) . - pp 759 - 766 [article]
Exemplaires (1)
Code-barres    Cote    Support    Localisation               Section            Disponibilité
105-2021101    SL      Revue      Centre de documentation    Revues en salle    Disponible
A deep translation (GAN) based change detection network for optical and SAR remote sensing images / Xinghua Li in ISPRS Journal of photogrammetry and remote sensing, vol 179 (September 2021)
[article]
Titre : A deep translation (GAN) based change detection network for optical and SAR remote sensing images Type de document : Article/Communication Auteurs : Xinghua Li, Auteur ; Zhengshun Du, Auteur ; Yanyuan Huang, Auteur ; Zhenyu Tan, Auteur Année de publication : 2021 Article en page(s) : pp 14 - 34 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] détection de changement
[Termes IGN] image à très haute résolution
[Termes IGN] image optique
[Termes IGN] image radar moirée
[Termes IGN] image Sentinel-SAR
[Termes IGN] méthode robuste
[Termes IGN] polarisation
[Termes IGN] réseau antagoniste génératif
[Termes IGN] réseau neuronal profond
[Termes IGN] zone d'intérêt
Résumé : (Editeur) With the development of space-based imaging technology, an ever larger number of images with different modalities and resolutions are available. Optical images capture the abundant spectral information and geometric shape of ground objects, but their quality degrades easily in poor atmospheric conditions. Although synthetic aperture radar (SAR) images cannot provide the spectral features of the region of interest (ROI), they can capture all-weather and all-time polarization information. In nature, optical and SAR images contain a great deal of complementary information, which is of great significance for change detection (CD) in poor weather situations. However, due to the difference in imaging mechanisms of optical and SAR images, it is difficult to conduct their CD directly using the traditional difference or ratio algorithms. Most recent CD methods introduce image translation to reduce this difference, but the results are obtained by ordinary algebraic methods and threshold segmentation with limited accuracy. Towards this end, this work proposes a deep translation based change detection network (DTCDN) for optical and SAR images. The deep translation first maps images from one domain (e.g., optical) to another domain (e.g., SAR) through a cyclic structure into the same feature space. With similar characteristics after deep translation, they become comparable. Unlike most previous research, the translation results are imported into a supervised CD network that utilizes deep context features to separate the unchanged pixels and changed pixels. In the experiments, the proposed DTCDN was tested on four representative data sets from Gloucester, California, and Shuguang village. Comparisons with state-of-the-art methods confirmed the effectiveness and robustness of the proposed method.
Numéro de notice : A2021-574 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2021.07.007 Date de publication en ligne : 23/07/2021 En ligne : https://doi.org/10.1016/j.isprsjprs.2021.07.007 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98174
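For context, the "ordinary algebraic" baseline that DTCDN improves on can be sketched in a few lines: after translating the optical image into the SAR domain, a change map is produced by thresholding the per-pixel difference. This is only the baseline the abstract contrasts with; DTCDN replaces the thresholding step with a supervised CD network, and the arrays below are toy inputs.

```python
import numpy as np

def difference_change_map(t1_translated: np.ndarray,
                          t2: np.ndarray,
                          threshold: float = 0.3) -> np.ndarray:
    """Baseline change map: threshold the per-pixel absolute difference
    between the translated time-1 image and the real time-2 image."""
    diff = np.abs(t1_translated.astype(float) - t2.astype(float))
    return (diff > threshold).astype(np.uint8)  # 1 = changed, 0 = unchanged

t1 = np.array([[0.10, 0.90],
               [0.50, 0.20]])   # optical image already mapped to the SAR domain
t2 = np.array([[0.15, 0.20],
               [0.50, 0.80]])   # real SAR image at time 2
print(difference_change_map(t1, t2))  # [[0 1] [0 1]]
```

The weakness of this baseline is exactly what the abstract notes: a single global threshold ignores spatial context, which is why a learned CD network over deep features performs better.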
in ISPRS Journal of photogrammetry and remote sensing > vol 179 (September 2021) . - pp 14 - 34 [article]
Exemplaires (3)
Code-barres    Cote        Support    Localisation               Section            Disponibilité
081-2021091    SL          Revue      Centre de documentation    Revues en salle    Disponible
081-2021093    DEP-RECP    Revue      LaSTIG                     Dépôt en unité     Exclu du prêt
081-2021092    DEP-RECF    Revue      Nancy                      Dépôt en unité     Exclu du prêt
Two hidden layer neural network-based rotation forest ensemble for hyperspectral image classification / Laxmi Narayana Eeti in Geocarto international, vol 36 n° 16 ([01/09/2021])
[article]
Titre : Two hidden layer neural network-based rotation forest ensemble for hyperspectral image classification Type de document : Article/Communication Auteurs : Laxmi Narayana Eeti, Auteur ; Krishna Mohan Buddhiraju, Auteur Année de publication : 2021 Article en page(s) : pp 1820 - 1837 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] arbre de décision
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] ensachage
[Termes IGN] image AVIRIS
[Termes IGN] image EO1-Hyperion
[Termes IGN] image hyperspectrale
[Termes IGN] image ROSIS
[Termes IGN] Perceptron multicouche
[Termes IGN] précision de la classification
[Termes IGN] réseau neuronal profond
[Termes IGN] Rotation Forest classification
Résumé : (auteur) Decision tree-based Rotation Forest can generate satisfactory but limited classification accuracy for a given training sample set and image data, owing to inherent disadvantages of decision trees, namely the myopia, replication, and fragmentation problems. To improve the performance of the Rotation Forest technique, we propose using a two-hidden-layer feedforward neural network as the base classifier instead of a decision tree. We examine the classification performance of the proposed model in two situations, namely when the free network parameters are kept the same across all ensemble components and when they are not. The proposed model, in which each component is initialized with a different pair of initial weights and biases, performs better than decision tree-based Rotation Forest on three different hyperspectral sensor datasets – AVIRIS, ROSIS and Hyperion. Improvements in classification accuracy are above 2% and up to 3% depending on the dataset. The proposed model also improves accuracy over Random Forest by 4.2–8.8%. Numéro de notice : A2021-581 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2019.1678680 Date de publication en ligne : 21/10/2019 En ligne : https://doi.org/10.1080/10106049.2019.1678680 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98193
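The defining step of Rotation Forest is the per-component feature rotation: features are split into disjoint subsets, PCA is run on each subset, and the loadings are assembled into one block rotation matrix applied to the data before training the base classifier (classically a decision tree; a two-hidden-layer network in the proposed model). A minimal NumPy sketch of that rotation step, with illustrative names and sizes:

```python
import numpy as np

def rotation_matrix(X: np.ndarray, n_subsets: int = 2, seed: int = 0) -> np.ndarray:
    """Build the block-diagonal (up to permutation) PCA rotation of Rotation Forest.

    Each disjoint feature subset contributes its PCA loadings as one block;
    the resulting matrix rotates X before the base classifier is trained.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    order = rng.permutation(n_features)          # random feature partition
    R = np.zeros((n_features, n_features))
    for subset in np.array_split(order, n_subsets):
        Xs = X[:, subset] - X[:, subset].mean(axis=0)   # center the subset
        _, _, vt = np.linalg.svd(Xs, full_matrices=False)
        R[np.ix_(subset, subset)] = vt.T         # PCA loadings for this block
    return R

X = np.random.default_rng(1).standard_normal((20, 6))
R = rotation_matrix(X)
X_rot = X @ R        # rotated features fed to the base classifier
print(X_rot.shape)   # (20, 6)
```

Each ensemble component draws its own partition (and, in the proposed model, its own initial weights and biases), so the components see decorrelated views of the data before the final majority vote.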
in Geocarto international > vol 36 n° 16 [01/09/2021] . - pp 1820 - 1837 [article]
Simulating multi-exit evacuation using deep reinforcement learning / Dong Xu in Transactions in GIS, Vol 25 n° 3 (June 2021)
[article]
Titre : Simulating multi-exit evacuation using deep reinforcement learning Type de document : Article/Communication Auteurs : Dong Xu, Auteur ; Xiao Huang, Auteur ; Joseph Mango, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 1542-1564 Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Analyse spatiale
[Termes IGN] analyse comparative
[Termes IGN] apprentissage par renforcement
[Termes IGN] distribution spatiale
[Termes IGN] itinéraire piétonnier
[Termes IGN] modèle de simulation
[Termes IGN] réseau neuronal profond
Résumé : (Auteur) Conventional simulations on multi-exit indoor evacuation focus primarily on how to determine a reasonable exit based on numerous factors in a changing environment. Results commonly include some congested and other under-utilized exits, especially with large numbers of pedestrians. We propose a multi-exit evacuation simulation based on deep reinforcement learning (DRL), referred to as the MultiExit-DRL, which involves a deep neural network (DNN) framework to facilitate state-to-action mapping. The DNN framework applies Rainbow Deep Q-Network (DQN), a DRL algorithm that integrates several advanced DQN methods, to improve data utilization and algorithm stability, and further divides the action space into eight isometric directions for possible pedestrian choices. We compare MultiExit-DRL with two conventional multi-exit evacuation simulation models in three separate scenarios: varying pedestrian distribution ratios; varying exit width ratios; and varying open schedules for an exit. The results show that MultiExit-DRL presents great learning efficiency while reducing the total number of evacuation frames in all designed experiments. In addition, the integration of DRL allows pedestrians to explore other potential exits and helps determine optimal directions, leading to a high efficiency of exit utilization. Numéro de notice : A2021-466 Affiliation des auteurs : non IGN Thématique : GEOMATIQUE/INFORMATIQUE Nature : Numéro de périodique nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1111/tgis.12738 Date de publication en ligne : 11/03/2021 En ligne : https://doi.org/10.1111/tgis.12738 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98085
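The eight-direction action space can be illustrated with a short epsilon-greedy selection sketch in NumPy. This shows only the discretized action space and greedy/exploratory choice, not Rainbow DQN itself (whose distributional, prioritized-replay, and other components are beyond a sketch); the Q-values below are made up.

```python
import numpy as np

# The eight isometric movement directions of the evacuation action space.
DIRECTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def choose_direction(q_values: np.ndarray, epsilon: float, rng) -> tuple:
    """Epsilon-greedy pick over the 8 directions given per-action Q-values."""
    if rng.random() < epsilon:
        return DIRECTIONS[rng.integers(len(DIRECTIONS))]   # explore
    return DIRECTIONS[int(np.argmax(q_values))]            # exploit

rng = np.random.default_rng(0)
q = np.array([0.1, 0.9, 0.2, 0.0, 0.3, 0.1, 0.0, 0.4])    # hypothetical Q-values
print(choose_direction(q, epsilon=0.0, rng=rng))           # greedy: (-1, 0)
```

In the simulation, each pedestrian's state is mapped to such a Q-vector by the DNN at every frame, so under-utilized exits become attractive as the learned values account for congestion.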
in Transactions in GIS > Vol 25 n° 3 (June 2021) . - pp 1542 - 1564 [article]
Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles / Abdelhak Loukkal (2021)
Exploration of reinforcement learning algorithms for autonomous vehicle visual perception and control / Florence Carton (2021)
A deep learning architecture for semantic address matching / Yue Lin in International journal of geographical information science IJGIS, vol 34 n° 3 (March 2020)
Volcano-seismic transfer learning and uncertainty quantification with Bayesian neural networks / Angel Bueno in IEEE Transactions on geoscience and remote sensing, vol 58 n° 2 (February 2020)
Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation / Li Mi in ISPRS Journal of photogrammetry and remote sensing, vol 159 (January 2020)
Torch-Points3D: A modular multi-task framework for reproducible deep learning on 3D point clouds / Thomas Chaton (2020)
Unsupervised satellite image time series analysis using deep learning techniques / Ekaterina Kalinicheva (2020)
Addressing overfitting on point cloud classification using Atrous XCRF / Hasan Asy’ari Arief in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)