Descriptor
Documents available in this category (145)
Extraction from high-resolution remote sensing images based on multi-scale segmentation and case-based reasoning / Jun Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 3 (March 2022)
[article]
Title: Extraction from high-resolution remote sensing images based on multi-scale segmentation and case-based reasoning
Document type: Article/Communication
Authors: Jun Xu ; Jiasong Li ; Hao Peng ; et al.
Publication year: 2022
Pages: pp 199 - 205
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] analyse d'image orientée objet
[Termes IGN] classification barycentrique
[Termes IGN] distance de Kullback-Leibler
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à haute résolution
[Termes IGN] image Worldview
[Termes IGN] masque
[Termes IGN] occupation du sol
[Termes IGN] segmentation d'image
[Termes IGN] segmentation multi-échelle
[Termes IGN] séparateur à vaste marge
Abstract: (author) In object-oriented information extraction from high-resolution remote sensing images, image segmentation and classification involve considerable manual participation, which limits automation and intelligence for these tasks. Based on a multi-scale segmentation strategy and case-based reasoning, a new method for extracting information from high-resolution remote sensing images is proposed that fully uses the image and non-image features of the case object. Feature selection and weight learning are used to construct a multi-level, multi-layer case library model for surface cover classification reasoning. Combined with image masking, the method is applied to extract surface cover classification information from remote sensing images acquired by different sensors, at different times, and over different regions. Finally, evaluation of the extraction and recognition rates verified the accuracy and effectiveness of the method.
Record number: A2022-202
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.20-00104R3
Online publication date: 01/03/2022
Online: https://doi.org/10.14358/PERS.20-00104R3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100006
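As an aside from the notice itself, the following minimal Python sketch illustrates the general idea of case-based reasoning over segment features (a weighted nearest-case lookup). The feature names, feature weights, and case library are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Illustrative sketch only: weighted nearest-case classification of an image
# segment against a small case library, in the spirit of case-based reasoning.
import numpy as np

# Hypothetical case library: each case = (feature vector, land-cover label).
case_features = np.array([
    [0.62, 0.10, 0.30],   # e.g. mean NDVI, brightness, texture (assumed features)
    [0.15, 0.55, 0.20],
    [0.05, 0.80, 0.10],
])
case_labels = np.array(["vegetation", "built-up", "bare soil"])
feature_weights = np.array([0.5, 0.3, 0.2])  # weights would be learned; fixed here

def classify_segment(segment_features):
    """Assign the label of the closest case under a weighted Euclidean distance."""
    diffs = case_features - segment_features
    dists = np.sqrt((feature_weights * diffs ** 2).sum(axis=1))
    return case_labels[np.argmin(dists)]

print(classify_segment(np.array([0.58, 0.12, 0.28])))  # -> "vegetation"
```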
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 3 (March 2022) . - pp 199 - 205
Copies (1)
Barcode: 105-2022031 | Call number: SL | Medium: Journal | Location: Centre de documentation | Section: Revues en salle | Availability: Available
Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification / Yuntao Liu in The Visual Computer, vol 38 n° 3 (March 2022)
[article]
Title: Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification
Document type: Article/Communication
Authors: Yuntao Liu ; Yong Dou ; Ruochun Jin ; et al.
Publication year: 2022
Pages: pp 897 - 917
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage automatique
[Termes IGN] classification bayesienne
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] réseau neuronal convolutif
[Termes IGN] segmentation sémantique
Abstract: (author) In this paper, a hierarchical learning algorithm based on a Bayesian Neural Network classifier with backtracking is proposed to support large-scale image classification. A Visual Confusion Label Tree is established to construct a hierarchical structure over the large number of categories in image datasets and to determine the hierarchical learning tasks automatically. Specifically, the Visual Confusion Label Tree is built from the outputs of convolutional neural network models. A parent node of the tree contains a set of sibling coarse-grained categories, and its child nodes hold sets of fine-grained categories that partition the categories of the parent node. The proposed Hierarchical Bayesian Neural Network with backtracking algorithm benefits from this hierarchical structure: focusing on confusion subsets instead of the entire set of categories strengthens the classification ability of the tree classifier. The backtracking algorithm uses the uncertainty information captured by the Bayesian Neural Network to perform a second classification that re-corrects samples classified incorrectly in the previous pass. Experiments on four large-scale datasets show that our tree classifier obtains a significant improvement over the state-of-the-art tree classifier, which demonstrates the discriminative hierarchical structure of our Visual Confusion Label Tree and the effectiveness of our Hierarchical Bayesian Neural Network with backtracking algorithm.
Record number: A2022-149
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-021-02058-w
Online publication date: 04/02/2021
Online: http://dx.doi.org/10.1007/s00371-021-02058-w
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100070
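Outside the notice, a toy sketch of the uncertainty-driven backtracking idea described above: classification proceeds top-down through a label tree, and when the chosen branch's own prediction is too uncertain (high entropy), the runner-up sibling is tried instead. The Node structure, predict functions, and entropy threshold are assumptions for illustration only, not the published model.

```python
# Illustrative sketch only: top-down tree classification with a simple
# uncertainty-triggered backtracking step.
import numpy as np
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Node:
    predict: Optional[Callable] = None        # x -> probabilities over children
    children: Optional[Sequence["Node"]] = None
    label: Optional[str] = None                # set on leaves only

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def classify_with_backtracking(node, x, threshold=0.8):
    """Descend the tree; if the best child's own prediction is too uncertain,
    back up and try the runner-up child instead."""
    if node.children is None:
        return node.label
    probs = node.predict(x)
    order = np.argsort(probs)[::-1]            # children sorted best-first
    for idx in order[:2]:                      # best child, then runner-up
        child = node.children[idx]
        if child.children is None or entropy(child.predict(x)) < threshold:
            return classify_with_backtracking(child, x, threshold)
    # both attempts remained uncertain: keep the best child anyway
    return classify_with_backtracking(node.children[order[0]], x, threshold)

# Toy two-level tree: coarse groups at the root, fine-grained leaves below.
leaf = lambda name: Node(label=name)
tree = Node(
    predict=lambda x: [0.7, 0.3] if x[0] > 0 else [0.3, 0.7],
    children=[
        Node(predict=lambda x: [0.6, 0.4], children=[leaf("cat"), leaf("dog")]),
        Node(predict=lambda x: [0.5, 0.5], children=[leaf("car"), leaf("truck")]),
    ],
)
print(classify_with_backtracking(tree, np.array([1.0])))  # -> "cat"
```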
in The Visual Computer > vol 38 n° 3 (March 2022) . - pp 897 - 917
Towards low vegetation identification: A new method for tree crown segmentation from LiDAR data based on a symmetrical structure detection algorithm (SSD) / Langning Huo in Remote sensing of environment, vol 270 (March 2022)
[article]
Title: Towards low vegetation identification: A new method for tree crown segmentation from LiDAR data based on a symmetrical structure detection algorithm (SSD)
Document type: Article/Communication
Authors: Langning Huo ; Eva Lindberg ; Johan Holmgren
Publication year: 2022
Pages: n° 112857
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] forêt boréale
[Termes IGN] hauteur à la base du houppier
[Termes IGN] houppier
[Termes IGN] inventaire forestier étranger (données)
[Termes IGN] segmentation
[Termes IGN] semis de points
[Termes IGN] sous-bois
[Termes IGN] sous-étage
[Termes IGN] strate végétale
[Termes IGN] structure d'un peuplement forestier
[Termes IGN] Suède
Abstract: (author) Obtaining low vegetation data is important to quantify the structural characteristics of a forest. Dense three-dimensional (3D) laser scanning data can provide information on the vertical profile of a forest. However, most studies have focused on the dominant and subdominant layers of the forest, while few studies have tried to delineate the low vegetation. To address this issue, we propose a framework for individual tree crown (ITC) segmentation from laser data that covers both overstory and understory trees. The framework includes 1) a new algorithm (SSD) for 3D ITC segmentation of dominant trees, by detecting the symmetrical structure of the trees, and 2) removal of the dominant-tree points followed by mean shift clustering of the low vegetation. The framework was tested on a boreal forest in Sweden and the performance was compared 1) between plots with different stem density levels, vertical complexities, and tree species compositions, and 2) using airborne laser scanning (ALS) data, terrestrial laser scanning (TLS) data, and merged ALS and TLS data (ALS + TLS data). The proposed framework achieved detection rates of 0.87 (ALS + TLS), 0.86 (TLS), and 0.76 (ALS) when validated with field-inventory data (trees with a diameter at breast height ≥ 4 cm). When the estimated number of understory trees was validated by visual interpretation, the framework achieved root-mean-square error values of 19%, 21%, and 39% with ALS + TLS, TLS, and ALS data, respectively. These results show that the SSD algorithm can successfully separate laser points of overstory and understory trees, ensuring the detection and segmentation of low vegetation in forests. The proposed framework can be used with both ALS and TLS data and achieves ITC segmentation for forests with various structural attributes. The results also illustrate the potential of using ALS data to delineate low vegetation.
Record number: A2022-127
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.1016/j.rse.2021.112857
Online publication date: 03/01/2022
Online: https://doi.org/10.1016/j.rse.2021.112857
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99707
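Outside the notice, a toy sketch of the second stage described in the abstract (mean shift clustering of the remaining low-vegetation points after the dominant-tree points have been removed), using scikit-learn. The SSD symmetry-detection stage itself is not reproduced, and the point cloud and bandwidth below are made-up example values.

```python
# Illustrative sketch only: cluster leftover understory points in the
# horizontal plane to delineate small tree crowns.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# Hypothetical residual point cloud (x, y, z) after removing overstory points.
understory = np.vstack([
    rng.normal([5.0, 5.0, 1.0], 0.3, size=(50, 3)),
    rng.normal([12.0, 8.0, 1.5], 0.3, size=(50, 3)),
])

# Bandwidth roughly corresponds to an expected small-crown radius (assumed 1 m).
labels = MeanShift(bandwidth=1.0).fit_predict(understory[:, :2])
print("detected understory trees:", len(set(labels)))
```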
in Remote sensing of environment > vol 270 (March 2022) . - n° 112857
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach / Linyuan Li in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
[article]
Title: Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Document type: Article/Communication
Authors: Linyuan Li ; Xihan Mu ; Francesco Chianucci ; et al.
Publication year: 2022
Pages: n° 102686
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] algorithme SLIC
[Termes IGN] apprentissage profond
[Termes IGN] canopée
[Termes IGN] carte forestière
[Termes IGN] Chine
[Termes IGN] classification par maximum de vraisemblance
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] couvert forestier
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] faisceau laser
[Termes IGN] forêt boréale
[Termes IGN] image captée par drone
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] modèle numérique de terrain
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] sous-étage
[Termes IGN] structure-from-motion
Abstract: (author) Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and by the illumination environment. In this study, we investigated the integration of deep learning with combined imagery and photogrammetric point cloud data for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data by combining UAV images with their associated photogrammetric point clouds, and it expands the applicability of deep learning models through self-supervision. Based on UAV images with different overlap levels over 12 conifer forest plots, categorized into complexity levels "I", "II" and "III" according to the illumination environment, we compared the self-supervised deep-learning-predicted canopy maps with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for "complexity I" and "complexity II" plots and larger than 0.75 for "complexity III" plots. The proposed method was then compared with three classical image segmentation methods (maximum likelihood, K-means, and Otsu) for plot-level crown cover estimation and outperformed them in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates from UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18). We subsequently compared the new method and the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and over a rugged terrain region; the crown cover predicted by the new method was relatively insensitive to varying overlap (largest bias below 0.15), whereas the UAV SfM-estimated crown cover was strongly affected by overlap and decreased as overlap decreased. In addition, canopy mapping over rugged terrain confirmed the merits of the new method, with no need for a detailed digital terrain model (DTM). The new method is recommended for use across image overlaps, illumination conditions, and terrains owing to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation monitoring).
Record number: A2022-192
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.1016/j.jag.2022.102686
Online publication date: 05/02/2022
Online: https://doi.org/10.1016/j.jag.2022.102686
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99951
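Outside the notice, a minimal sketch of the self-supervision idea from the abstract: deriving crown/background training labels from photogrammetric surface heights rather than manual delineation, then summarising plot-level crown cover. The 2 m height threshold and the toy rasters are assumptions, not values from the paper.

```python
# Illustrative sketch only: pseudo-label generation from a canopy height model.
import numpy as np

def pseudo_labels_from_chm(dsm, dtm, height_threshold=2.0):
    """Label pixels as tree crown (1) where canopy height exceeds the
    threshold, else background/understorey (0)."""
    chm = dsm - dtm                          # canopy height model
    return (chm > height_threshold).astype(np.uint8)

def crown_cover(label_map):
    """Plot-level crown cover = fraction of crown pixels."""
    return float(label_map.mean())

# Toy 4x4 rasters (metres above sea level / ground).
dsm = np.array([[3.0, 3.2, 0.5, 0.4],
                [2.8, 3.1, 0.6, 0.3],
                [0.4, 0.5, 4.0, 4.2],
                [0.3, 0.2, 3.9, 4.1]])
dtm = np.zeros_like(dsm)
labels = pseudo_labels_from_chm(dsm, dtm)
print(labels, "crown cover:", crown_cover(labels))
```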
in International journal of applied Earth observation and geoinformation > vol 107 (March 2022) . - n° 102686
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
[article]
Title: Visual vs internal attention mechanisms in deep neural networks for image classification and object detection
Document type: Article/Communication
Authors: Abraham Montoya Obeso ; Jenny Benois-Pineau ; Mireya S. García Vázquez ; et al.
Publication year: 2022
Pages: n° 108411
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse visuelle
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] oculométrie
[Termes IGN] saillance
[Termes IGN] segmentation sémantique
[Termes IGN] visualisation de données
Abstract: (author) The so-called "attention mechanisms" in Deep Neural Networks (DNNs) denote an automatic adaptation of DNNs to capture representative features for a specific classification task and its data. Such attention mechanisms operate both globally, by reinforcing feature channels, and locally, by stressing features within each feature map. Channel and feature importance are learnt in the global end-to-end DNN training process. In this paper, we present a study and propose a method that takes a different approach, adding supplementary visual data alongside the training images. We use human visual attention maps obtained independently from psycho-visual experiments, in either task-driven or free-viewing conditions, or from strong visual attention prediction models. We add these visual attention maps as new data alongside the images, thus introducing human visual attention into DNN training, and compare it with both global and local automatic attention mechanisms. Experimental results show that known attention mechanisms in DNNs behave much like human visual attention, but the proposed approach still allows faster convergence and better performance in image classification tasks.
Record number: A2022-197
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.patcog.2021.108411
Online publication date: 12/11/2021
Online: https://doi.org/10.1016/j.patcog.2021.108411
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99988
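Outside the notice, one simple way to inject a human visual attention map into CNN training, as discussed in the abstract, is to stack it as an extra input channel next to the RGB image. The PyTorch sketch below is an assumption about how such maps could be fed to a network, not the authors' architecture.

```python
# Illustrative sketch only: a tiny CNN that takes RGB + attention map as input.
import torch
import torch.nn as nn

class AttentionAugmentedCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 3 RGB + 1 attention channel
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, image, attention_map):
        x = torch.cat([image, attention_map], dim=1)     # (N, 4, H, W)
        return self.classifier(self.features(x).flatten(1))

model = AttentionAugmentedCNN()
image = torch.rand(2, 3, 64, 64)
attention = torch.rand(2, 1, 64, 64)   # e.g. a normalized saliency or gaze map
print(model(image, attention).shape)   # torch.Size([2, 10])
```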
in Pattern recognition > vol 123 (March 2022) . - n° 108411

Further documents in this category (titles only):
A method of vision aided GNSS positioning using semantic information in complex urban environment / Rui Zhai in Remote sensing, vol 14 n° 4 (February-2 2022)
3D modeling of urban area based on oblique UAS images - An end-to-end pipeline / Valeria-Ersilia Oniga in Remote sensing, vol 14 n° 2 (January-2 2022)
Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network / Ekrem Saralioglu in Geocarto international, vol 37 n° 2 ([15/01/2022])
Attributing pedestrian networks with semantic information based on multi-source spatial data / Xue Yang in International journal of geographical information science IJGIS, vol 36 n° 1 (January 2022)
Construction d'un plugin QGIS de détection d'îlots de chaleur urbains à partir d'images satellitaires de type optique / Houssayn Meriche (2022)
Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)