Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > image segmentation
Documents available in this category (605)
Évaluation des apports de l'apprentissage profond au sein d'un service dédié à la numérisation du patrimoine / Maxime Mérizette in XYZ, n° 170 (March 2022)
[article]
Title: Évaluation des apports de l'apprentissage profond au sein d'un service dédié à la numérisation du patrimoine
Document type: Article/Communication
Authors: Maxime Mérizette, Author
Publication year: 2022
Pages: pp 61 - 65
General note: Bibliography
Language: French (fre)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] supervised learning
[IGN terms] deep learning
[IGN terms] laser data
[IGN terms] 3D spatial data
[IGN terms] spatial data set
[IGN terms] 3D building modelling (BIM)
[IGN terms] data quality
[IGN terms] 3D building reconstruction
[IGN terms] image segmentation
[IGN terms] point cloud
Abstract: (author) Terrestrial laser scanners make it possible to acquire large volumes of data quickly and easily, but this advantage is undermined by the lack of automation in point cloud processing. Point cloud segmentation, which consists of extracting the constituent elements of a cloud, suffers particularly from this gap. This final-year engineering project, carried out at Quarta, focuses on what deep learning contributes to point cloud segmentation. It surveys the deep learning methods available for working with point clouds and tests several algorithms for processing large clouds.
Record number: A2022-226
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueNat
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100192
in XYZ > n° 170 (March 2022) . - pp 61 - 65 [article]
Holdings (1)
Barcode | Call number | Medium | Location | Section | Availability
112-2022011 | RAB | Journal | Documentation centre | In storage (L003) | Available
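Editor's note: the record above surveys deep-learning approaches to point cloud segmentation. As a purely illustrative sketch (not the thesis code; the architecture, layer sizes, and class count are assumptions), a minimal PointNet-style per-point classifier in PyTorch:

import torch
import torch.nn as nn

class TinyPointSeg(nn.Module):
    """Per-point labelling: shared point-wise MLP plus a global max-pooled context."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3, n_points) raw coordinates
        feat = self.local(xyz)                          # per-point features (B, 128, N)
        glob = feat.max(dim=2, keepdim=True).values     # global context (B, 128, 1)
        glob = glob.expand(-1, -1, xyz.shape[2])        # broadcast context to every point
        return self.head(torch.cat([feat, glob], dim=1))  # class logits (B, C, N)

logits = TinyPointSeg()(torch.randn(2, 3, 1024))  # two toy clouds of 1024 points each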
Extraction from high-resolution remote sensing images based on multi-scale segmentation and case-based reasoning / Jun Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 3 (March 2022)
[article]
Title: Extraction from high-resolution remote sensing images based on multi-scale segmentation and case-based reasoning
Document type: Article/Communication
Authors: Jun Xu, Author; Jiasong Li, Author; Hao Peng, Author; et al.
Publication year: 2022
Pages: pp 199 - 205
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] object-based image analysis
[IGN terms] centroid-based classification
[IGN terms] Kullback-Leibler divergence
[IGN terms] feature extraction
[IGN terms] high-resolution imagery
[IGN terms] Worldview imagery
[IGN terms] mask
[IGN terms] land cover
[IGN terms] image segmentation
[IGN terms] multi-scale segmentation
[IGN terms] support vector machine
Abstract: (author) In object-oriented information extraction from high-resolution remote sensing images, the segmentation and classification of images involve considerable manual participation, which limits the development of automation and intelligence for these purposes. Based on a multi-scale segmentation strategy and case-based reasoning, a new method for extracting high-resolution remote sensing image information by fully using the image and non-image features of the case object is proposed. Feature selection and weight learning are used to construct a multi-level, multi-layer case library model for surface cover classification reasoning. Combined with image mask technology, the method is applied to extract surface cover classification information from remote sensing images acquired by different sensors, at different times, and over different regions. Finally, through evaluation of the extraction and recognition rates, the accuracy and effectiveness of the method were verified.
Record number: A2022-202
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.20-00104R3
Online publication date: 01/03/2022
Online: https://doi.org/10.14358/PERS.20-00104R3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100006
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 3 (March 2022) . - pp 199 - 205 [article]
Holdings (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2022031 | SL | Journal | Documentation centre | Journals room | Available
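Editor's note: the record above retrieves matching cases from a case library, and its descriptors include the Kullback-Leibler divergence. A toy sketch of how case retrieval could be scored with a symmetric KL distance between feature histograms (an editor's assumption, not the paper's implementation):

import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """D_KL(p || q) between discrete distributions, smoothed to avoid log(0)."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def nearest_case(query_hist: np.ndarray, case_library: list) -> dict:
    """Return the library case whose histogram is closest in symmetric KL distance."""
    def sym_kl(h: np.ndarray) -> float:
        return kl_divergence(query_hist, h) + kl_divergence(h, query_hist)
    return min(case_library, key=lambda case: sym_kl(case["hist"]))

library = [{"label": "water",    "hist": np.array([8.0, 1.0, 1.0])},
           {"label": "building", "hist": np.array([1.0, 2.0, 7.0])}]
print(nearest_case(np.array([7.0, 2.0, 1.0]), library)["label"])  # -> water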
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach / Linyuan Li in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
[article]
Title: Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Document type: Article/Communication
Authors: Linyuan Li, Author; Xihan Mu, Author; Francesco Chianucci, Author; et al.
Publication year: 2022
Pages: n° 102686
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] SLIC algorithm
[IGN terms] deep learning
[IGN terms] canopy
[IGN terms] forest map
[IGN terms] China
[IGN terms] maximum likelihood classification
[IGN terms] convolutional neural network classification
[IGN terms] forest cover
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] laser beam
[IGN terms] boreal forest
[IGN terms] UAV imagery
[IGN terms] canopy surface model
[IGN terms] digital terrain model
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] understorey
[IGN terms] structure-from-motion
Abstract: (author) Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and by the illumination environment. In this study, we investigated the integration of deep learning with combined imagery and photogrammetric point cloud data for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data by combining UAV images with their associated photogrammetric point clouds, and it expands the applicability of deep learning models through self-supervision. Based on UAV images with different overlap levels for 12 conifer forest plots, categorized into "I", "II", and "III" complexity levels according to illumination environment, we compared the self-supervised deep-learning-predicted canopy maps from original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for "complexity I" and "complexity II" plots and larger than 0.75 for "complexity III" plots. The proposed method was then compared with three classical image segmentation methods (maximum likelihood, K-means, and Otsu) for plot-level crown cover estimation, outperforming the other methods in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates from UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18). We subsequently compared the new method with the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and over a rugged terrain region; the method-predicted crown cover was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased as overlap decreased. In addition, canopy mapping over rugged terrain confirmed the merits of the new method, which needs no detailed digital terrain model (DTM). The new method is recommended for use across various image overlaps, illuminations, and terrains owing to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation).
Record number: A2022-192
Author affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Nature: Article
DOI: 10.1016/j.jag.2022.102686
Online publication date: 05/02/2022
Online: https://doi.org/10.1016/j.jag.2022.102686
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99951
in International journal of applied Earth observation and geoinformation > vol 107 (March 2022) . - n° 102686 [article]
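Editor's note: the record above evaluates predicted canopy maps against manual delineations with intersection over union (IoU). For reference, a minimal IoU computation on binary masks (toy arrays, not the paper's data):

import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two boolean masks: |pred AND truth| / |pred OR truth|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / float(union) if union else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # predicted crown pixels
truth = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)  # manually delineated crowns
print(f"IoU = {iou(pred, truth):.2f}")                # IoU = 0.75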
Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests / Chong Zhang in Remote sensing, vol 14 n° 4 (February-2 2022)
[article]
Title: Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests
Document type: Article/Communication
Authors: Chong Zhang, Author; Jiawei Zhou, Author; Huiwen Wang, Author; et al.
Publication year: 2022
Pages: n° 874
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] edge detection
[IGN terms] data sampling
[IGN terms] entropy
[IGN terms] quantitative estimation
[IGN terms] broadleaved tree
[IGN terms] tree height
[IGN terms] UAV imagery
[IGN terms] mixed stand
[IGN terms] Pinophyta
[IGN terms] image segmentation
Abstract: (author) High-resolution UAV imagery paired with a convolutional neural network approach offers significant advantages for accurately measuring forest ecosystems. Although numerous studies exist on individual tree crown delineation, species classification, and quantity detection, performing all of these tasks simultaneously has rarely been explored, especially in mixed forests. In this study, we propose a new method for individual tree segmentation and identification based on an improved Mask R-CNN. In the optimized network, the fusion type in the feature pyramid network is changed from bottom-up to top-down to shorten the feature acquisition path between levels. Meanwhile, a boundary-weighted loss module is added to the cross-entropy loss function Lmask to refine the target loss. All geometric parameters associated with canopies (contour, centre of gravity, and area) are ultimately extracted from the mask by a boundary segmentation algorithm. The results showed that the F1-score and mAP for coniferous species were higher than 90%, while those for broadleaf species lay between 75% and 85.44%. The producer's accuracy ranged from 0.80 to 0.95 for coniferous forests and from 0.87 to 0.93 for broadleaf; the user's accuracy ranged from 0.81 to 0.84 for coniferous and from 0.71 to 0.76 for broadleaf. The total number of trees predicted was 50,041 for the entire study area, with an overall error of 5.11%. The method was compared with other networks, including U-net and YOLOv3; the results show that the improved Mask R-CNN has advantages in broadleaf canopy segmentation and number detection.
Record number: A2022-168
Author affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Nature: Article
DOI: 10.3390/rs14040874
Online publication date: 11/02/2022
Online: https://doi.org/10.3390/rs14040874
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99793
in Remote sensing > vol 14 n° 4 (February-2 2022) . - n° 874 [article]
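Editor's note: the record above adds a boundary-weighted loss module to the cross-entropy mask loss Lmask. A hedged PyTorch sketch of one plausible weighting; the 3x3-neighbourhood boundary test and the weight value are the editor's assumptions, not the authors' exact formulation:

import torch
import torch.nn.functional as F

def boundary_weight(mask: torch.Tensor, w_boundary: float = 5.0) -> torch.Tensor:
    """Up-weight pixels whose 3x3 neighbourhood mixes crown and background."""
    m = mask.float().unsqueeze(1)                      # (B, 1, H, W)
    local_mean = F.avg_pool2d(m, 3, stride=1, padding=1)
    is_boundary = (local_mean > 0) & (local_mean < 1)  # mixed neighbourhood = boundary
    weights = torch.where(is_boundary,
                          torch.full_like(local_mean, w_boundary),
                          torch.ones_like(local_mean))
    return weights.squeeze(1)                          # (B, H, W)

def weighted_mask_loss(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Per-pixel binary cross-entropy, with boundary pixels counted more heavily."""
    bce = F.binary_cross_entropy_with_logits(logits, mask.float(), reduction="none")
    return (boundary_weight(mask) * bce).mean()

mask = torch.rand(2, 28, 28) > 0.5    # toy ground-truth crown masks
logits = torch.randn(2, 28, 28)       # toy mask-head outputs
print(weighted_mask_loss(logits, mask).item())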
Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation / Ramazan Unlu in The Visual Computer, vol 38 n° 2 (February 2022)
[article]
Title: Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation
Document type: Article/Communication
Authors: Ramazan Unlu, Author; Recep Kiriş, Author
Publication year: 2022
Pages: pp 685 - 694
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] building
[IGN terms] k-means classification
[IGN terms] change detection
[IGN terms] material damage
[IGN terms] labelled training data
[IGN terms] convolutional neural network
[IGN terms] image segmentation
[IGN terms] earthquake
Abstract: (author) Detecting damaged buildings as quickly as possible after an earthquake is important so that emergency teams can reach those buildings and save many lives. Today, damaged buildings are located after an earthquake through survivors contacting the authorities or through aircraft such as helicopters. In this study, AI-based systems were tested for detecting damaged or destroyed buildings by integrating them into street camera systems after unexpected disasters. For this purpose, we used the VGG-16, VGG-19, and NASNet convolutional neural network models, which are often applied to image recognition problems in the literature, to detect damaged buildings. To implement these models effectively, we first segmented all the images with the K-means clustering algorithm. In the first phase of this study, segmented images labelled "damaged buildings" and "normal" were classified, and the VGG-19 model was the most successful, with 90% accuracy on the test set. In the second phase, we created a multiclass classification problem by labelling segmented images as "damaged buildings," "less damaged buildings," and "normal." The same three architectures were used to obtain the most accurate classification results on the test set; VGG-19, VGG-16, and NASNet achieved about 70%, 67%, and 62% accuracy, respectively.
Record number: A2022-145
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-020-02043-9
Online publication date: 03/01/2022
Online: https://doi.org/10.1007/s00371-020-02043-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100039
in The Visual Computer > vol 38 n° 2 (February 2022) . - pp 685 - 694 [article]

More results in this category:
- Synergistic use of particle swarm optimization, artificial neural network, and extreme gradient boosting algorithms for urban LULC mapping from WorldView-3 images / Alireza Hamedianfar in Geocarto international, vol 37 n° 3 (01/02/2022)
- Airborne LiDAR and high resolution multispectral data integration in Eucalyptus tree species mapping in an Australian farmscape / Niva Kiran Verma in Geocarto international, vol 37 n° 1 (01/01/2022)
- Analysis of pedestrian movements and gestures using an on-board camera to predict their intentions / Joseph Gesnouin (2022)
- BuyTheDips: PathLoss for improved topology-preserving deep learning-based image segmentation / Minh On Vu Ngoc (2022)
- Deep learning based 2D and 3D object detection and tracking on monocular video in the context of autonomous vehicles / Zhujun Xu (2022)
- Éléments pour l'analyse et le traitement d'images : application à l'estimation de la qualité du bois / Rémy Decelle (2022)
- Exploring data fusion for multi-object detection for intelligent transportation systems using deep learning / Amira Mimouna (2022)
- Mapping burned areas and land-uses in Kangaroo Island using an object-based image classification framework and Landsat 8 Imagery from Google Earth Engine / Jiyu Liu in Geomatics, Natural Hazards and Risk, vol 13 (2022)
- MLMT-CNN for object detection and segmentation in multi-layer and multi-spectral images / Majedaldein Almahasneh in Machine Vision and Applications, vol 33 n° 1 (January 2022)
- Monitoring grassland dynamics by exploiting multi-modal satellite image time series / Anatol Garioud (2022)
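Editor's note: the Unlu and Kiriş record above classifies images only after a K-means segmentation step. A minimal sketch of that pre-processing stage (the cluster count and pipeline details are illustrative assumptions, not the paper's settings):

import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Replace every pixel by its cluster-centre colour (image shape: H x W x 3)."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    segmented = km.cluster_centers_[km.labels_]        # centre colour for each pixel
    return segmented.reshape(h, w, c).astype(image.dtype)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # toy image
seg = kmeans_segment(img, k=3)   # the segmented image would then feed VGG/NASNet
print(seg.shape)                 # (64, 64, 3)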