Descriptor
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal > classification par réseau neuronal convolutif
Documents available in this category (336)
A review of techniques for 3D reconstruction of indoor environments / Zhizhong Kang in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020)
[article]
Title: A review of techniques for 3D reconstruction of indoor environments Document type: Article/Communication Authors: Zhizhong Kang, Author; Juntao Yang, Author; Zhou Yang, Author; Sai Cheng, Author Publication year: 2020 Pagination: 31 p. General note: bibliography Language: English (eng) Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] espace intérieur
[Termes IGN] image RVB
[Termes IGN] indoorGML
[Termes IGN] jeu de données localisées
[Termes IGN] modèle géométrique
[Termes IGN] modèle sémantique de données
[Termes IGN] modèle topologique de données
[Termes IGN] reconstruction 3D
Abstract: (author) Indoor environment model reconstruction has emerged as a significant and challenging task, as it must provide a semantically rich and geometrically accurate indoor model. Recently, there has been an increasing amount of research related to indoor environment reconstruction. Therefore, this paper reviews the state-of-the-art techniques for the three-dimensional (3D) reconstruction of indoor environments. First, some of the available benchmark datasets for 3D reconstruction of indoor environments are described and discussed. Then, data collection for 3D indoor spaces is briefly summarized. Furthermore, an overview of the geometric, semantic, and topological reconstruction of indoor environments is presented, in which the existing methodologies, advantages, and disadvantages of these three reconstruction types are analyzed and summarized. Finally, future research directions, including technical challenges and trends, are discussed for the purpose of promoting future research interest. It can be concluded that most existing indoor environment reconstruction methods rely on the strong Manhattan assumption, which may not hold in a real indoor environment, limiting their effectiveness and robustness. Moreover, based on the hierarchical pyramid structures and the learnable parameters of deep-learning architectures, multi-task collaborative schemes that share parameters and jointly optimize each other using redundant and complementary information from different perspectives show potential for the 3D reconstruction of indoor environments. Furthermore, seamless integration of indoor and outdoor spaces, to achieve a full representation of both building interiors and exteriors, is also in high demand.
Record number: A2020-299 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.3390/ijgi9050330 Online publication date: 19/05/2020 Online: https://doi.org/10.3390/ijgi9050330 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95139
in ISPRS International journal of geo-information > vol 9 n° 5 (May 2020) . - 31 p. [article]
Saliency-guided single shot multibox detector for target detection in SAR images / Lan Du in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)
[article]
Title: Saliency-guided single shot multibox detector for target detection in SAR images Document type: Article/Communication Authors: Lan Du, Author; Lu Li, Author; Di Wei, Author; et al. Publication year: 2020 Pagination: pp 3366 - 3376 General note: bibliography Language: English (eng) Descriptor: [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de cible
[Termes IGN] fusion de données
[Termes IGN] image radar moirée
[Termes IGN] saillance
Abstract: (author) The single shot multibox detector (SSD), a proposal-free method based on a convolutional neural network (CNN), has recently been proposed for target detection and has found applications in synthetic aperture radar (SAR) images. Moreover, the saliency information reflected in a saliency map can highlight the target of interest while suppressing clutter, which benefits scene understanding. Therefore, in this article, we propose a saliency-guided SSD (S-SSD) for target detection in SAR images, in which we integrate saliency into the SSD network, not only to suggest where to focus but also to improve the representation capability in complex scenes. The proposed S-SSD contains two separate convolutional backbone subnetworks: one takes the original SAR image as input to extract features, and the other takes the corresponding saliency map, obtained from a modified version of Itti's method, as input to acquire refined saliency information under supervision. In addition, a dense connection structure, instead of the plain structure used in the original SSD, is applied in both backbones to exploit multiscale information with fewer parameters. Then, to integrate saliency information and guide the network to emphasize informative regions, multilevel fusion modules merge the two streams into a unified framework, so that the whole network can be trained jointly end to end. Finally, convolutional predictors are used to predict targets. Experimental results on the miniSAR real data demonstrate that the proposed S-SSD achieves better detection performance than state-of-the-art methods.
Record number: A2020-237 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2019.2953936 Online publication date: 11/12/2019 Online: https://doi.org/10.1109/TGRS.2019.2953936 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94983
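The abstract's multilevel fusion of the image and saliency streams can be pictured with a toy, framework-free sketch: per-level feature maps from the two backbones are merged by channel concatenation. This is only an illustration of the fusion idea under assumed names and shapes (`fuse_streams`, the pyramid sizes), not the authors' learned fusion modules.

```python
import numpy as np

def fuse_streams(image_feats, saliency_feats):
    """Merge per-level (H, W, C) feature maps from the image and
    saliency backbones by channel-wise concatenation - one simple
    realisation of a multilevel fusion module."""
    assert len(image_feats) == len(saliency_feats)
    fused = []
    for f_img, f_sal in zip(image_feats, saliency_feats):
        # Both streams must agree on spatial size at each pyramid level.
        assert f_img.shape[:2] == f_sal.shape[:2]
        fused.append(np.concatenate([f_img, f_sal], axis=-1))
    return fused

# Toy multiscale pyramids: three levels of (H, W, C) maps.
img_pyr = [np.random.rand(s, s, 64) for s in (32, 16, 8)]
sal_pyr = [np.random.rand(s, s, 16) for s in (32, 16, 8)]
for level in fuse_streams(img_pyr, sal_pyr):
    print(level.shape)
```

In the real network the concatenated maps would then pass through learned convolutions before the predictors; here the point is only that fusion happens at every scale, keeping the two streams aligned level by level.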
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 5 (May 2020) . - pp 3366 - 3376 [article]
Automated terrain feature identification from remote sensing imagery: a deep learning approach / Wenwen Li in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
[article]
Title: Automated terrain feature identification from remote sensing imagery: a deep learning approach Document type: Article/Communication Authors: Wenwen Li, Author; Chia-Yu Hsu, Author Publication year: 2020 Pagination: pp 637 - 660 General note: bibliography Language: English (eng) Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse d'image orientée objet
[Termes IGN] analyse du paysage
[Termes IGN] apprentissage profond
[Termes IGN] base de données d'images
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] compréhension de l'image
[Termes IGN] détection automatique
[Termes IGN] détection d'objet
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] intelligence artificielle
Abstract: (author) Terrain feature detection is a fundamental task in terrain analysis and landscape scene interpretation. Discovering where a specific feature (e.g. sand dune, crater) is located and how it evolves over time is essential for understanding landform processes and their impacts on the environment, ecosystem, and human population. Traditional induction-based approaches are limited by their inefficiency in generalizing diverse and complex terrain features and by their poor scalability on the massive geospatial data now available. This paper presents a new deep learning (DL) approach to support automatic detection of terrain features from remotely sensed images. The novelty of this work lies in: (1) a terrain feature database containing 12,000 remotely sensed images (1,000 original images and 11,000 derived images from data augmentation) that supports data-driven model training and new discovery; (2) a DL-based object detection network empowered by ensemble learning and deep and deeper convolutional neural networks to achieve high-accuracy object detection; and (3) fine-tuning the model's characteristics and behaviors to identify the best combination of hyperparameters and other network factors. The introduction of DL into geospatial applications is expected to contribute significantly to intelligent terrain analysis, landscape scene interpretation, and the maturation of spatial data science.
Record number: A2020-108 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1080/13658816.2018.1542697 Online publication date: 07/11/2018 Online: https://doi.org/10.1080/13658816.2018.1542697 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94708
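The data-augmentation step mentioned above (1,000 originals expanded to 12,000 images) typically relies on geometric transforms such as flips and rotations. The sketch below is purely illustrative: the paper's exact transform set and 1:11 ratio are not reproduced here, and the function name is an assumption.

```python
import numpy as np

def augment(image):
    """Return derived copies of one image tile: three rotations,
    horizontal/vertical flips, and flipped rotations - a common
    geometric augmentation recipe for remote-sensing tiles."""
    variants = []
    for k in (1, 2, 3):                     # 90, 180, 270 degree rotations
        variants.append(np.rot90(image, k))
    variants.append(np.fliplr(image))       # horizontal flip
    variants.append(np.flipud(image))       # vertical flip
    for k in (1, 2, 3):                     # rotations of the flipped tile
        variants.append(np.rot90(np.fliplr(image), k))
    return variants

tile = np.arange(16).reshape(4, 4)          # toy 4x4 "image"
print(len(augment(tile)))                   # 8 derived images per original
```

Because these transforms preserve the terrain feature's identity while varying its orientation, each labeled original yields several training samples at no extra annotation cost.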
in International journal of geographical information science IJGIS > vol 34 n° 4 (April 2020) . - pp 637 - 660 [article]
Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification / Congcong Wen in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
[article]
Title: Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification Document type: Article/Communication Authors: Congcong Wen, Author; Lina Yang, Author; Xiang Li, Author; et al. Publication year: 2020 Pagination: pp 50 - 62 General note: bibliography Language: English (eng) Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage automatique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] fusion de données
[Termes IGN] plus proche voisin, algorithme du
[Termes IGN] précision de la classification
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] traitement de semis de points
Abstract: (author) Point cloud classification plays an important role in a wide range of airborne light detection and ranging (LiDAR) applications, such as topographic mapping, forest monitoring, power line detection, and road detection. However, due to sensor noise and the high redundancy, incompleteness, and complexity of airborne LiDAR data, point cloud classification is challenging. Traditional point cloud classification methods mostly focus on developing handcrafted point geometry features and employ machine learning-based classification models to conduct point classification. In recent years, advances in deep learning have led researchers to shift their focus towards deep neural networks for classifying airborne LiDAR point clouds. These learning-based methods start by transforming the unstructured 3D point sets into regular 2D representations, such as collections of feature images, and then employ a 2D CNN for point classification. Moreover, these methods usually need to calculate additional local geometry features, such as planarity, sphericity, and roughness, to make use of the local structural information in the original 3D space. Nonetheless, the 3D-to-2D conversion results in information loss. In this paper, we propose a directionally constrained fully convolutional neural network (D-FCN) that takes the original 3D coordinates and LiDAR intensity as input; it can therefore be applied directly to unstructured 3D point clouds for semantic labeling. Specifically, we first introduce a novel directionally constrained point convolution (D-Conv) module to extract locally representative features of 3D point sets from the projected 2D receptive fields. To make full use of the orientation information of neighborhood points, the proposed D-Conv module performs convolution in an orientation-aware manner by using a directionally constrained nearest neighborhood search.
Then, we design a multiscale fully convolutional neural network with downsampling and upsampling blocks to enable multiscale point feature learning. The proposed D-FCN model can therefore process input point clouds of arbitrary size and directly predict semantic labels for all input points in an end-to-end manner. Without requiring additional geometry features as input, the proposed method demonstrates superior performance on the International Society for Photogrammetry and Remote Sensing (ISPRS) 3D labeling benchmark dataset. The results show that our model achieves a new state-of-the-art performance in the powerline, car, and facade categories. Moreover, to demonstrate the generalization ability of the proposed method, we conduct further experiments on the 2019 Data Fusion Contest dataset; our method outperforms the competing methods, achieving an overall accuracy of 95.6% and an average F1 score of 0.810.
Record number: A2020-119 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2020.02.004 Online publication date: 18/02/2020 Online: https://doi.org/10.1016/j.isprsjprs.2020.02.004 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94743
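One plausible reading of the abstract's "directionally constrained nearest neighborhood search" is: project a point's neighbors onto the 2D plane, bin them by angular sector, and keep the nearest neighbor per sector, so that every direction around the point is represented. The sketch below illustrates that reading only; the function name, sector count, and return format are assumptions, not the authors' module.

```python
import numpy as np

def directional_neighbors(center, points, n_sectors=8):
    """Map each angular sector around `center` (in the projected XY
    plane) to the index of the nearest point falling in that sector.
    Sectors with no points are simply absent from the result."""
    d = points[:, :2] - center[:2]                    # project to 2D
    angles = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    dist = np.hypot(d[:, 0], d[:, 1])
    sector = (angles / (2 * np.pi / n_sectors)).astype(int)
    chosen = {}
    for idx in np.argsort(dist):                      # nearest first
        s = sector[idx]
        if s not in chosen and dist[idx] > 0:         # skip the center itself
            chosen[s] = idx
    return chosen

pts = np.array([[1., 0., 0.], [0., 2., 0.], [-1., 0., 0.], [3., 0., 0.]])
print(directional_neighbors(np.zeros(3), pts))
```

Note how the point at (3, 0, 0) is dropped: it shares sector 0 with the closer point at (1, 0, 0), so the search stays spread across directions instead of clustering in one.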
in ISPRS Journal of photogrammetry and remote sensing > vol 162 (April 2020) . - pp 50 - 62 [article]
Geocoding of trees from street addresses and street-level images / Daniel Laumer in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
[article]
Title: Geocoding of trees from street addresses and street-level images Document type: Article/Communication Authors: Daniel Laumer, Author; Nico Lang, Author; Natalie Van Doorn, Author Publication year: 2020 Pagination: pp 125 - 136 General note: bibliography Language: English (eng) Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] analyse des correspondances
[Termes IGN] apprentissage profond
[Termes IGN] arbre urbain
[Termes IGN] Californie (Etats-Unis)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'arbres
[Termes IGN] détection d'objet
[Termes IGN] géocodage par adresse postale
[Termes IGN] image panoramique
[Termes IGN] image Streetview
[Termes IGN] inventaire
[Termes IGN] service écosystémique
[Termes IGN] zone urbaine
Abstract: (author) We introduce an approach for updating older tree inventories with geographic coordinates, using street-level panorama images and a global optimization framework for tree instance matching. Geolocations of trees in inventories until the early 2000s were recorded using street addresses, whereas newer inventories use GPS. Our method retrofits older inventories with geographic coordinates so that they can be connected with newer inventories, facilitating long-term studies on tree mortality etc. What makes this problem challenging are the varying number of trees per street address, the heterogeneous appearance of different tree instances in the images, ambiguous tree positions when viewed from multiple images, and occlusions. To solve this assignment problem, we (i) detect trees in Google Street View panoramas using deep learning, (ii) combine multi-view detections per tree into a single representation, and (iii) match detected trees with the given trees per street address using a global optimization approach. Experiments on trees in 5 cities in California, USA, show that we are able to assign geographic coordinates to 38% of the street trees, a good starting point for large-scale, long-term studies on the ecosystem services value of street trees.
Record number: A2020-124 Author affiliation: non-IGN Theme: GEOMATICS/IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2020.02.001 Online publication date: 21/02/2020 Online: https://doi.org/10.1016/j.isprsjprs.2020.02.001 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94749
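The assignment step in (iii) can be illustrated with a brute-force minimum-cost matching, workable here because each street address has only a few trees. This is a toy stand-in under assumed names (`best_assignment`, squared-distance cost), not the paper's global optimization framework.

```python
from itertools import permutations

def best_assignment(detected, expected):
    """Exhaustively match detected tree positions to inventory slots,
    minimising the total squared 2D distance. Feasible only for the
    small per-address tree counts this problem involves."""
    k = min(len(detected), len(expected))
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(expected)), k):
        cost = sum((detected[i][0] - expected[j][0]) ** 2 +
                   (detected[i][1] - expected[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(enumerate(perm)), cost
    return best, best_cost

# Toy example: two detections vs. two inventory slots (x, y in metres).
det = [(0.0, 0.0), (5.0, 5.0)]
inv = [(5.2, 4.9), (0.1, -0.2)]
print(best_assignment(det, inv))
```

For larger instances one would switch to a polynomial-time solver such as the Hungarian algorithm, but the objective (one detection per inventory entry, minimum total displacement) stays the same.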
in ISPRS Journal of photogrammetry and remote sensing > vol 162 (April 2020) . - pp 125 - 136 [article]
Multichannel Pulse-Coupled Neural Network-Based Hyperspectral Image Visualization / Puhong Duan in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020) Permalink
A Single Model CNN for Hyperspectral Image Denoising / Alessandro Maffei in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020) Permalink
Street-Frontage-Net: urban image classification using deep convolutional neural networks / Stephen Law in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020) Permalink
Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds / Zhou Guo in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020) Permalink
What, where, and how to transfer in SAR target recognition based on deep CNNs / Zhongling Huang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020) Permalink
Classification and segmentation of mining area objects in large-scale sparse Lidar point cloud using a novel rotated density network / Yueguan Yan in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020) Permalink
Deep learning for geometric and semantic tasks in photogrammetry and remote sensing / Christian Heipke in Geo-spatial Information Science, vol 23 n° 1 (March 2020) Permalink
Deep SAR-Net: learning objects from signals / Zhongling Huang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020) Permalink
Edge-reinforced convolutional neural network for road detection in very-high-resolution remote sensing imagery / Xiaoyan Lu in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020) Permalink
Poststack seismic data denoising based on 3-D convolutional neural network / Dawei Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 3 (March 2020) Permalink