Descriptor: neural network classification
IGN terms > mathematics > mathematical statistics > data analysis > classification > neural network classification
Documents available in this category (236)
Structure-aware indoor scene reconstruction via two levels of abstraction / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Structure-aware indoor scene reconstruction via two levels of abstraction Document type: Article/Communication Authors: Hao Fang; Cihui Pan; Hui Huang Publication year: 2021 Pages: pp 155 - 170 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] Markov random field
[IGN terms] convolutional neural network classification
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] optical image
[IGN terms] mesh
[IGN terms] triangular mesh
[IGN terms] level of abstraction
[IGN terms] polygon
[IGN terms] 3D reconstruction
[IGN terms] object reconstruction
[IGN terms] indoor scene
Abstract: (author) In this paper, we propose a novel approach that reconstructs an indoor scene in a structure-aware manner and produces two meshes with different levels of abstraction. To be precise, we start from the raw triangular mesh of the indoor scene and decompose it into two parts: structure and non-structure objects. On the one hand, structure objects are defined as significant permanent parts of the indoor environment, such as floors, ceilings and walls. In the proposed algorithm, structure objects are abstracted by planar primitives and assembled into a polygonal structure mesh. This step produces a compact structure-aware watertight model that decreases the complexity of the original mesh by three orders of magnitude. On the other hand, non-structure objects are movable objects in the indoor environment, such as furniture and interior decoration. Meshes of these objects are repaired and simplified according to their relationship with the structure primitives. Finally, the union of all the non-structure meshes and the structure mesh comprises the scene mesh. Note that the structure mesh and the scene mesh offer different levels of abstraction and can be used for different applications according to user preference. Our experiments on both lidar and RGB-D data, scanned from simple to large-scale indoor scenes, indicate that the proposed framework generates structure-aware results while being robust and scalable. It is also compared qualitatively and quantitatively against popular mesh approximation, floorplan generation and piecewise-planar surface reconstruction methods to demonstrate its performance.
Record number: A2021-561 Author affiliation: non-IGN Theme: IMAGERY Nature: Article HAL nature: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.06.007 Online publication date: 23/06/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.06.007 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98119
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 155 - 170 [article]
Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2021081 | SL | Journal | Centre de documentation | Reading-room journals | Available
081-2021083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network / Fengpeng Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 8 (August 2021)
[article]
Title: Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network Document type: Article/Communication Authors: Fengpeng Li; Jiabao Li; Wei Han; et al. Publication year: 2021 Pages: pp 577 - 591 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] neural network classification
[IGN terms] large scale
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] medium scale
[IGN terms] small scale
[IGN terms] linear regression
[IGN terms] convolutional neural network
Abstract: (author) Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep learning representation methods need a considerable amount of labeled data to capture class-specific features, limiting their application when only a few labeled training samples are available. An unsupervised deep learning representation method for high-resolution remote sensing image scene classification is proposed in this work to address this issue. The proposed method, called contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses the features extracted by the convolutional neural network (CNN)-based feature extractor, together with the label information of the training data, to set the space of each category, and then makes predictions in the testing procedure using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set. Record number: A2021-670 Author affiliation: non-IGN Theme: IMAGERY Nature: Article HAL nature: ArtAvecCL-RevueIntern DOI: 10.14358/PERS.87.8.577 Online publication date: 01/08/2021 Online: https://doi.org/10.14358/PERS.87.8.577 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98806
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 8 (August 2021) . - pp 577 - 591 [article]
Copies (1)
Barcode | Call number | Type | Location | Section | Availability
105-2021081 | SL | Journal | Centre de documentation | Reading-room journals | Available
Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area / Xungen Li in ISPRS International journal of geo-information, vol 10 n° 8 (August 2021)
[article]
Title: Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area Document type: Article/Communication Authors: Xungen Li; Feifei Men; Shuaishuai Lv; et al. Publication year: 2021 Pages: n° 549 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] convolutional neural network classification
[IGN terms] target detection
[IGN terms] very-high-resolution image
[IGN terms] aerial image
[IGN terms] vehicle
Abstract: (author) Vehicle detection in aerial images is a challenging task. The complexity of the background information and the redundancy of the detection area are the main obstacles that limit the success of anchor-based vehicle detection in very-high-resolution (VHR) remote sensing images. In this paper, an anchor-free target detection method is proposed to solve these problems. First, a multi-attention feature pyramid network (MA-FPN) was designed to address the influence of noise and background information on vehicle target detection by fusing attention information into the feature pyramid network (FPN) structure. Second, a more precise foveal area (MPFA) is proposed to provide better ground truth for the anchor-free method by determining a more accurate positive-sample selection area. The proposed anchor-free model with MA-FPN and MPFA can predict vehicles accurately and quickly in VHR remote sensing images through direct regression on the pixels of the feature map. A detailed evaluation on the remote sensing image (RSI) and vehicle detection in aerial imagery (VEDAI) data sets shows that our detection method performs well, the network is simple, and the detection is fast. Record number: A2021-589 Author affiliation: non-IGN Theme: IMAGERY Nature: Article HAL nature: ArtAvecCL-RevueIntern DOI: 10.3390/ijgi10080549 Online publication date: 14/08/2021 Online: https://doi.org/10.3390/ijgi10080549 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98209
in ISPRS International journal of geo-information > vol 10 n° 8 (August 2021) . - n° 549 [article]
Detail injection-based deep convolutional neural networks for pansharpening / Liang-Jian Deng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: Detail injection-based deep convolutional neural networks for pansharpening Document type: Article/Communication Authors: Liang-Jian Deng; Gemine Vivone; Cheng Jin; et al. Publication year: 2021 Pages: pp 6995 - 7010 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Optical image processing
[IGN terms] multiresolution analysis
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] low-resolution image
[IGN terms] multiband image
[IGN terms] panchromatic image
[IGN terms] image injection
[IGN terms] nonlinear model
[IGN terms] pansharpening (image fusion)
Abstract: (author) The fusion of high-spatial-resolution panchromatic (PAN) data with simultaneously acquired multispectral (MS) data of lower spatial resolution is a hot topic, often called pansharpening. In this article, we exploit the combination of machine learning techniques and fusion schemes to address the pansharpening problem. In particular, deep convolutional neural networks (DCNNs) are proposed to solve this issue. They are first combined with the traditional component substitution and multiresolution analysis fusion schemes in order to estimate the nonlinear injection models that rule the combination of the upsampled low-resolution MS image with the details extracted under the two philosophies. Furthermore, inspired by these two approaches, we also developed another DCNN for pansharpening. It is fed by the direct difference between the PAN image and the upsampled low-resolution MS image. Extensive experiments conducted at both reduced and full resolution demonstrate that this latter convolutional neural network outperforms both the other detail injection-based proposals and several state-of-the-art pansharpening methods. Record number: A2021-639 Author affiliation: non-IGN Theme: IMAGERY Nature: Article HAL nature: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2020.3031366 Online: https://doi.org/10.1109/TGRS.2020.3031366 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98293
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021) . - pp 6995 - 7010 [article]
CNN-based RGB-D salient object detection: Learn, select, and fuse / Hao Chen in International journal of computer vision, vol 129 n° 7 (July 2021)
[article]
Title: CNN-based RGB-D salient object detection: Learn, select, and fuse Document type: Article/Communication Authors: Hao Chen; Yongjian Deng; Guosheng Lin Publication year: 2021 Pages: pp 2076 - 2096 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Optical image processing
[IGN terms] hierarchical approach
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] RGB image
[IGN terms] depth
[IGN terms] saliency
[IGN terms] semantic segmentation
Abstract: (author) The goal of this work is to present a systematic solution for RGB-D salient object detection, which addresses the following three aspects within a unified framework: modal-specific representation learning, complementary cue selection, and cross-modal complement fusion. To learn discriminative modal-specific features, we propose a hierarchical cross-modal distillation scheme, in which we use the progressive predictions from the well-learned source modality to supervise the learning of feature hierarchies and inference in the new modality. To better select complementary cues, we formulate a residual function to incorporate complements from the paired modality adaptively. Furthermore, a top-down fusion structure is constructed for sufficient cross-modal cross-level interactions. The experimental results demonstrate the effectiveness of the proposed cross-modal distillation scheme in learning from a new modality, the advantages of the proposed multi-modal fusion pattern in selecting and fusing cross-modal complements, and the generalization of the proposed designs to different tasks. Record number: A2021-697 Author affiliation: non-IGN Theme: IMAGERY Nature: Article DOI: 10.1007/s11263-021-01452-0 Online publication date: 05/05/2021 Online: https://doi.org/10.1007/s11263-021-01452-0 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98532
in International journal of computer vision > vol 129 n° 7 (July 2021) . - pp 2076 - 2096 [article]
Other documents in this category:
- Flood depth mapping in street photos with image processing and deep neural networks / Bahareh Alizadeh Kharazi in Computers, Environment and Urban Systems, vol 88 (July 2021)
- A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases / Chun Yang in ISPRS Journal of photogrammetry and remote sensing, vol 177 (July 2021)
- Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
- SemiCDNet: A semisupervised convolutional neural network for change detection in high resolution remote-sensing images / Daifeng Peng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 7 (July 2021)
- Three-dimensional reconstruction of single input image based on point cloud / Yu Hou in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 7 (July 2021)
- Trajectory and image-based detection and identification of UAV / Yicheng Liu in The Visual Computer, vol 37 n° 7 (July 2021)
- Using information entropy and a multi-layer neural network with trajectory data to identify transportation modes / Qingying Yu in International journal of geographical information science IJGIS, vol 35 n° 7 (July 2021)
- Using machine learning to map Western Australian landscapes for mineral exploration / Thomas Albrecht in ISPRS International journal of geo-information, vol 10 n° 7 (July 2021)
- Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images / Guillemette Fonteix in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
- A framework for classification of volunteered geographic data based on user’s need / Nazila Mohammadi in Geocarto international, vol 36 n° 11 (15/06/2021)
- An automatic workflow for orientation of historical images with large radiometric and geometric differences / Ferdinand Maiwald in Photogrammetric record, vol 36 n° 174 (June 2021)
- Deep learning in denoising of micro-computed tomography images of rock samples / Mikhail Sidorenko in Computers & geosciences, vol 151 (June 2021)
- Domain adaptive transfer attack-based segmentation networks for building extraction from aerial images / Younghwan Na in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
- Efficient image dataset classification difficulty estimation for predicting deep-learning accuracy / Florian Scheidegger in The Visual Computer, vol 37 n° 6 (June 2021)
- A high-resolution satellite DEM filtering method assisted with building segmentation / Yihui Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)
- Mask R-CNN-based building extraction from VHR satellite data in operational humanitarian action: An example related to Covid-19 response in Khartoum, Sudan / Dirk Tiede in Transactions in GIS, vol 25 n° 3 (June 2021)
- Multiscale cloud detection in remote sensing images using a dual convolutional neural network / Markku Luotamo in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
- Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne / Valentin Desbiolles in XYZ, n° 167 (June 2021)
- Resolution enhancement for large-scale land cover mapping via weakly supervised deep learning / Qiutong Yu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)
- A deep learning model using satellite ocean color and hydrodynamic model to estimate chlorophyll-a concentration / Daeyong Jin in Remote sensing, vol 13 n° 10 (May-2 2021)