Descriptor
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal > classification par réseau neuronal convolutif
Documents available in this category (157)
Detail injection-based deep convolutional neural networks for pansharpening / Liang-Jian Deng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: Detail injection-based deep convolutional neural networks for pansharpening
Document type: Article/Communication
Authors: Liang-Jian Deng; Gemine Vivone; Cheng Jin; et al.
Publication year: 2021
Pages: pp 6995 - 7010
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse multirésolution
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image à basse résolution
[Termes IGN] image multibande
[Termes IGN] image panchromatique
[Termes IGN] injection d'image
[Termes IGN] modèle non linéaire
[Termes IGN] pansharpening (fusion d'images)
Abstract: (author) The fusion of high-spatial-resolution panchromatic (PAN) data with simultaneously acquired, lower-spatial-resolution multispectral (MS) data is a widely studied problem, often called pansharpening. In this article, we exploit the combination of machine learning techniques and fusion schemes introduced to address the pansharpening problem. In particular, deep convolutional neural networks (DCNNs) are proposed to solve this issue. These are first combined with the traditional component substitution and multiresolution analysis fusion schemes in order to estimate the nonlinear injection models that rule the combination of the upsampled low-resolution MS image with the details extracted under the two philosophies. Furthermore, inspired by these two approaches, we also developed another DCNN for pansharpening. This one is fed by the direct difference between the PAN image and the upsampled low-resolution MS image. Extensive experiments conducted at both reduced and full resolution demonstrate that this latter convolutional neural network outperforms both the other detail injection-based proposals and several state-of-the-art pansharpening methods.
Record number: A2021-639
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3031366
Online: https://doi.org/10.1109/TGRS.2020.3031366
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98293
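A minimal NumPy sketch of the classical detail-injection idea this record builds on: details are the difference between the PAN image and the upsampled low-resolution MS image, and are injected with per-band gains. The function name, nearest-neighbour upsampling, and constant gains are illustrative assumptions; in the paper, a DCNN learns the nonlinear injection model.

```python
import numpy as np

def detail_injection_pansharpen(ms_lr, pan, scale=4, gains=None):
    """Classical detail-injection pansharpening sketch.

    ms_lr : (h, w, B) low-resolution multispectral image
    pan   : (H, W) panchromatic image, with H = h * scale, W = w * scale
    """
    num_bands = ms_lr.shape[2]
    # Naive nearest-neighbour upsampling of the MS image to the PAN grid
    ms_up = np.repeat(np.repeat(ms_lr, scale, axis=0), scale, axis=1)
    # Spatial details: difference between PAN and each upsampled MS band
    # (the paper's third DCNN is fed by exactly this difference)
    details = pan[..., None] - ms_up
    if gains is None:
        gains = np.ones(num_bands)  # a learned nonlinear model replaces this
    return ms_up + gains[None, None, :] * details
```

With all gains fixed to 1, every output band collapses to the PAN image itself, which is why a spectrally aware injection model (here, the learned DCNN) is needed in practice.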
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6995 - 7010 [article]
CNN-based RGB-D salient object detection: Learn, select, and fuse / Hao Chen in International journal of computer vision, vol 129 n° 7 (July 2021)
[article]
Title: CNN-based RGB-D salient object detection: Learn, select, and fuse
Document type: Article/Communication
Authors: Hao Chen; Yongjian Deng; Guosheng Lin
Publication year: 2021
Pages: pp 2076 - 2096
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] approche hiérarchique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion de données
[Termes IGN] image RVB
[Termes IGN] profondeur
[Termes IGN] saillance
[Termes IGN] segmentation sémantique
Abstract: (author) The goal of this work is to present a systematic solution for RGB-D salient object detection that addresses three aspects within a unified framework: modal-specific representation learning, complementary cue selection, and cross-modal complement fusion. To learn discriminative modal-specific features, we propose a hierarchical cross-modal distillation scheme, in which the progressive predictions from the well-learned source modality supervise the learning of feature hierarchies and inference in the new modality. To better select complementary cues, we formulate a residual function that adaptively incorporates complements from the paired modality. Furthermore, a top-down fusion structure is constructed for sufficient cross-modal, cross-level interactions. The experimental results demonstrate the effectiveness of the proposed cross-modal distillation scheme in learning from a new modality, the advantages of the proposed multi-modal fusion pattern in selecting and fusing cross-modal complements, and the generalization of the proposed designs to different tasks.
Record number: A2021-697
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s11263-021-01452-0
Online publication date: 05/05/2021
Online: https://doi.org/10.1007/s11263-021-01452-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98532
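The residual selection idea in the abstract — the main modality keeps its own features and only receives an adaptively computed complement from the paired modality — can be sketched as follows. The weight matrix and ReLU stand in for the learned residual function; all names here are illustrative, not the authors' API.

```python
import numpy as np

def residual_complement(feat_main, feat_aux, w):
    """Fuse cross-modal features as main + residual(aux).

    feat_main : (n, d) features of the main modality (e.g. RGB)
    feat_aux  : (n, d) features of the paired modality (e.g. depth)
    w         : (d, d) weights of the residual function (learned in practice)
    """
    # Only the complement from the paired modality is added; the main
    # representation is preserved by the skip connection.
    residual = np.maximum(0.0, feat_aux @ w)  # ReLU(W * aux) as a stand-in
    return feat_main + residual
```

When the residual weights are zero, the fusion degrades gracefully to the main modality alone, which is the point of formulating the complement as a residual rather than a full re-encoding.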
in International journal of computer vision > vol 129 n° 7 (July 2021) . - pp 2076 - 2096 [article]
Flood depth mapping in street photos with image processing and deep neural networks / Bahareh Alizadeh Kharazi in Computers, Environment and Urban Systems, vol 88 (July 2021)
[article]
Title: Flood depth mapping in street photos with image processing and deep neural networks
Document type: Article/Communication
Authors: Bahareh Alizadeh Kharazi; Amir H. Behzadan
Publication year: 2021
Pages: n° 101628
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] Canada
[Termes IGN] centre urbain
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] crue
[Termes IGN] détection de contours
[Termes IGN] Etats-Unis
[Termes IGN] image Streetview
[Termes IGN] inondation
[Termes IGN] profondeur
[Termes IGN] signalisation routière
[Termes IGN] système d'aide à la décision
[Termes IGN] traitement d'image
[Termes IGN] transformation de Hough
[Termes IGN] zone urbaine
Abstract: (author) Many parts of the world experience severe episodes of flooding every year. In addition to the high cost of mitigation and damage to property, floods make roads impassable and hamper community evacuation, the movement of goods and services, and rescue missions. Knowing the depth of floodwater is critical to the success of the response and recovery operations that follow. However, flood mapping, especially in urban areas, using traditional methods such as remote sensing and digital elevation models (DEMs) yields large errors due to reshaped surface topography and microtopographic variations combined with vegetation bias. This paper presents a deep neural network approach to detect submerged stop signs in photos taken from flooded roads and intersections, coupled with Canny edge detection and the probabilistic Hough transform to calculate pole length and estimate floodwater depth. Additionally, a tilt correction technique is implemented to address the problem of sideways tilt in the visual analysis of submerged stop signs. An in-house dataset named BluPix 2020.1, consisting of paired web-mined photos of submerged stop signs across 10 FEMA regions (for U.S. locations) and Canada, is used to evaluate the models. Overall, pole length is estimated with an RMSE of 17.43 and 8.61 in. in pre- and post-flood photos, respectively, leading to a mean absolute error of 12.63 in. in floodwater depth estimation. The findings of this research aim to equip jurisdictions, local governments, and citizens in flood-prone regions with a simple, reliable, and scalable solution that can provide (near-) real-time estimation of floodwater depth in their surroundings.
Record number: A2021-358
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1016/j.compenvurbsys.2021.101628
Online publication date: 01/04/2021
Online: https://doi.org/10.1016/j.compenvurbsys.2021.101628
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97620
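Once pole lengths have been measured in pixels in the paired pre- and post-flood photos, the depth estimate itself is simple arithmetic. A hedged sketch, assuming the stop sign's standard 30 in width supplies the pixel-to-inch scale in each photo (the function and parameter names are hypothetical, not from the paper):

```python
def floodwater_depth(pole_px_pre, pole_px_post, sign_px_pre, sign_px_post,
                     sign_width_in=30.0):
    """Estimate floodwater depth from paired pre-/post-flood photos.

    pole_px_*  : visible pole length in pixels (pre- and post-flood photo)
    sign_px_*  : stop sign width in pixels in the same photos
    The known physical sign width converts pixels to inches per photo,
    so the two photos need not share a scale or viewpoint distance.
    """
    in_per_px_pre = sign_width_in / sign_px_pre
    in_per_px_post = sign_width_in / sign_px_post
    # Depth = full pole length minus the still-visible (above-water) part
    return pole_px_pre * in_per_px_pre - pole_px_post * in_per_px_post
```

For example, a pole of 400 px against a 100 px sign pre-flood (120 in) and 250 px against a 100 px sign post-flood (75 in) yields a 45 in depth estimate.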
in Computers, Environment and Urban Systems > vol 88 (July 2021) . - n° 101628 [article]
A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases / Chun Yang in ISPRS Journal of photogrammetry and remote sensing, vol 177 (July 2021)
[article]
Title: A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases
Document type: Article/Communication
Authors: Chun Yang; Franz Rottensteiner; Christian Heipke
Publication year: 2021
Pages: pp 38 - 56
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Bases de données localisées
[Termes IGN] Allemagne
[Termes IGN] apprentissage profond
[Termes IGN] approche hiérarchique
[Termes IGN] classification automatique d'objets
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image aérienne
[Termes IGN] jointure
[Termes IGN] objet géographique
[Termes IGN] occupation du sol
[Termes IGN] optimisation (mathématiques)
[Termes IGN] utilisation du sol
Abstract: (author) Land use as contained in geospatial databases constitutes an essential input for applications such as urban management, regional planning and environmental monitoring. In this paper, a hierarchical deep learning framework is proposed to verify land use information. For this purpose, a two-step strategy is applied. First, given high-resolution aerial images, the land cover information is determined. To achieve this, an encoder-decoder based convolutional neural network (CNN) is proposed. Second, the pixel-wise land cover information, along with the aerial images, serves as input for another CNN to classify land use. Because the object catalogue of geospatial databases is frequently constructed in a hierarchical manner, we propose a new CNN-based method aiming to predict land use at multiple levels hierarchically and simultaneously. A so-called Joint Optimization (JO) is proposed, in which predictions are made by selecting the hierarchical tuple over all levels that has the maximum joint class score, providing consistent results across the different levels. The conducted experiments show that the CNN relying on JO outperforms previous results, achieving an overall accuracy of up to 92.5%. In addition to the individual experiments on two test sites, we investigate whether data showing different characteristics can improve the results of land cover and land use classification when processed together. To do so, we combine the two datasets and undertake additional experiments. The results show that adding more data helps both land cover and land use classification, especially the identification of underrepresented categories, despite their different characteristics.
Record number: A2021-370
Authors' affiliation: non IGN
Theme: GEOMATIQUE/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.04.022
Online publication date: 13/05/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.04.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97774
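The JO selection step described in the abstract amounts to an argmax over hierarchically consistent tuples of classes. A minimal sketch with a hypothetical two-level land-use hierarchy and additive joint scores (the actual method scores tuples from CNN outputs over the database's own object catalogue):

```python
# Hypothetical two-level hierarchy: coarse land-use class -> fine classes
HIERARCHY = {
    "built-up": ["residential", "industrial"],
    "vegetation": ["forest", "cropland"],
}

def joint_optimization(coarse_scores, fine_scores):
    """Pick the consistent (coarse, fine) tuple with the maximal joint score.

    Only tuples allowed by HIERARCHY are considered, so the prediction is
    consistent across levels by construction.
    """
    best, best_score = None, float("-inf")
    for coarse, fines in HIERARCHY.items():
        for fine in fines:
            score = coarse_scores[coarse] + fine_scores[fine]
            if score > best_score:
                best, best_score = (coarse, fine), score
    return best
```

Note that taking the per-level argmax independently can be inconsistent (e.g. "built-up" at the coarse level but "forest" at the fine level); maximizing the joint score over admissible tuples rules that out.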
in ISPRS Journal of photogrammetry and remote sensing > vol 177 (July 2021) . - pp 38 - 56 [article]
Copies (3)
Barcode       Call number  Support  Location                 Section           Availability
081-2021071   SL           Revue    Centre de documentation  Revues en salle   Available
081-2021073   DEP-RECP     Revue    LASTIG                   Dépôt en unité    Not for loan
081-2021072   DEP-RECF     Revue    Nancy                    Dépôt en unité    Not for loan
Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space
Document type: Article/Communication
Authors: Min Wu; Xin Jin; Qian Jiang; et al.
Publication year: 2021
Pages: pp 1707 - 1729
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] contraste de couleurs
[Termes IGN] données multiéchelles
[Termes IGN] image en couleur
[Termes IGN] image RVB
[Termes IGN] niveau de gris (image)
[Termes IGN] réseau antagoniste génératif
Abstract: (author) Image colorization is used to colorize gray-level or single-channel images, a significant and challenging task in image processing, especially for remote sensing images. This paper proposes a new method for coloring remote sensing images based on a deep convolutional generative adversarial network. The adopted generator model is a symmetrical structure using the auto-encoder principle, and a specially designed multi-scale convolutional module is introduced into the generator. Thus, the proposed generator enables the whole model to retain more image features during up-sampling and down-sampling. Meanwhile, the discriminator uses a residual neural network (ResNet-18) that can compete with the generator, so that the generator and discriminator can effectively optimize each other. In the proposed method, a color space transformation is first utilized to convert remote sensing images from RGB to YUV. Then, the Y channel (a gray-level image) is used as the input of the neural network model to predict the UV channels. Finally, the predicted UV channels are concatenated with the original Y channel into a whole YUV image that is then transformed back into RGB space to get the final color image. Experiments are conducted to test the performance of different image colorization methods, and the results show that the proposed method performs well in both visual quality and objective indexes for the colorization of remote sensing images.
Record number: A2021-540
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s00371-020-01933-2
Online publication date: 28/08/2020
Online: https://doi.org/10.1007/s00371-020-01933-2
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98018
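The color-space round trip at the heart of this pipeline can be sketched with NumPy. The BT.601 conversion coefficients are an assumption (the abstract only says "YUV"), and `predict_uv` is a stand-in for the trained generator:

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumed convention; rows give Y, U, V)
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    """Convert (..., 3) RGB values to YUV."""
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv):
    """Invert the linear conversion back to RGB."""
    return yuv @ np.linalg.inv(RGB2YUV).T

def colorize(gray, predict_uv):
    """Concatenate the input Y channel with predicted UV, then map to RGB.

    gray       : (...,) luminance (Y) values
    predict_uv : callable returning a (..., 2) array of UV values;
                 stands in for the paper's generator network
    """
    uv = predict_uv(gray)
    yuv = np.concatenate([gray[..., None], uv], axis=-1)
    return yuv_to_rgb(yuv)
```

A useful sanity check on this design: because the U and V rows of the matrix sum to zero and the Y row sums to one, predicting UV = 0 reproduces the gray image identically in all three RGB channels, so the generator only has to learn the chrominance.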
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1707 - 1729 [article]
SemiCDNet: A semisupervised convolutional neural network for change detection in high resolution remote-sensing images / Daifeng Peng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 7 (July 2021)
Trajectory and image-based detection and identification of UAV / Yicheng Liu in The Visual Computer, vol 37 n° 7 (July 2021)
Using information entropy and a multi-layer neural network with trajectory data to identify transportation modes / Qingying Yu in International journal of geographical information science IJGIS, vol 35 n° 7 (July 2021)
Using machine learning to map Western Australian landscapes for mineral exploration / Thomas Albrecht in ISPRS International journal of geo-information, vol 10 n° 7 (July 2021)
Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images / Guillemette Fonteix in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
An automatic workflow for orientation of historical images with large radiometric and geometric differences / Ferdinand Maiwald in Photogrammetric record, vol 36 n° 174 (June 2021)
Deep learning in denoising of micro-computed tomography images of rock samples / Mikhail Sidorenko in Computers & geosciences, vol 151 (June 2021)
Domain adaptive transfer attack-based segmentation networks for building extraction from aerial images / Younghwan Na in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
Efficient image dataset classification difficulty estimation for predicting deep-learning accuracy / Florian Scheidegger in The Visual Computer, vol 37 n° 6 (June 2021)
A high-resolution satellite DEM filtering method assisted with building segmentation / Yihui Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)