Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > extraction de traits caractéristiques > détection de contours
détection de contours
Synonyme(s) : extraction de contour
Documents disponibles dans cette catégorie (320)
A context feature enhancement network for building extraction from high-resolution remote sensing imagery / Jinzhi Chen in Remote sensing, vol 14 n° 9 (May-1 2022)
[article]
Titre : A context feature enhancement network for building extraction from high-resolution remote sensing imagery Type de document : Article/Communication Auteurs : Jinzhi Chen, Auteur ; Dejun Zhang, Auteur ; Yiqi Wu, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 2276 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de contours
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] structure-from-motion
Résumé : (auteur) The complexity and diversity of buildings make it challenging to extract low-level and high-level features with strong feature representation using deep neural networks in building extraction tasks. Meanwhile, deep neural network-based methods have many network parameters, which consume considerable memory and time in training and testing. We propose a novel fully convolutional neural network, the Context Feature Enhancement Network (CFENet), to address these issues. CFENet comprises three modules: the spatial fusion module, the focus enhancement module, and the feature decoder module. First, the spatial fusion module aggregates the spatial information of low-level features to obtain buildings' outline and edge information. Second, the focus enhancement module fully aggregates the semantic information of high-level features to filter the information of building-related attribute categories. Finally, the feature decoder module decodes the output of the above two modules to segment the buildings more accurately. In a series of experiments on the WHU Building Dataset and the Massachusetts Building Dataset, CFENet balances efficiency and accuracy better than the four methods it was compared against, achieving the best scores on all five evaluation metrics: PA, PC, F1, IoU, and FWIoU. This indicates that CFENet can effectively enhance and fuse buildings' low-level and high-level features, improving building extraction accuracy. Numéro de notice : A2022-385 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.3390/rs14092276 Date de publication en ligne : 09/05/2022 En ligne : https://doi.org/10.3390/rs14092276 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100663
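The five metrics reported in this abstract (PA, PC, F1, IoU, FWIoU) can all be derived from a segmentation confusion matrix. A minimal numpy sketch, assuming a binary building/background labelling (function and variable names are ours, not CFENet code):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes=2):
    """PA, PC (precision), F1, IoU and FWIoU from two integer label maps."""
    pred, gt = pred.ravel(), gt.ravel()
    # Confusion matrix: rows = ground truth, columns = prediction
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(float)
    pa = tp.sum() / cm.sum()                      # pixel accuracy
    pc = tp[1] / max(cm[:, 1].sum(), 1)           # precision of class 1 (building)
    rc = tp[1] / max(cm[1, :].sum(), 1)           # recall of class 1
    f1 = 2 * pc * rc / max(pc + rc, 1e-12)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)               # per-class IoU
    freq = cm.sum(axis=1) / cm.sum()              # ground-truth class frequency
    fwiou = (freq * iou).sum()                    # frequency-weighted IoU
    return pa, pc, f1, iou[1], fwiou
```

`iou[1]` is the building-class IoU; FWIoU weights each class IoU by its ground-truth frequency, so large background areas do not dominate the score.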
in Remote sensing > vol 14 n° 9 (May-1 2022) . - n° 2276 [article]

Revising cadastral data on land boundaries using deep learning in image-based mapping / Bujar Fetai in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
[article]
Titre : Revising cadastral data on land boundaries using deep learning in image-based mapping Type de document : Article/Communication Auteurs : Bujar Fetai, Auteur ; Dejan Grigillo, Auteur ; Anka Lisec, Auteur Année de publication : 2022 Article en page(s) : n° 298 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] cadastre étranger
[Termes IGN] cartographie cadastrale
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de contours
[Termes IGN] données cadastrales
[Termes IGN] limite cadastrale
[Termes IGN] point d'appui
[Termes IGN] Slovénie
Résumé : (auteur) One of the main concerns of land administration in developed countries is to keep the cadastral system up to date. The goal of this research was to develop an approach to detect visible land boundaries and revise existing cadastral data using deep learning. The convolutional neural network (CNN), based on a modified architecture, was trained on the Berkeley segmentation dataset 500 (BSDS500), available online and widely used for edge and boundary detection. The model was tested in two rural areas in Slovenia. The results were evaluated using recall, precision, and the F1 score, a more appropriate measure for unbalanced classes. In terms of detection quality, balanced recall and precision resulted in F1 scores of 0.60 and 0.54 for Ponova vas and Odranci, respectively. With lower recall (completeness), the model was able to predict the boundaries with a precision (correctness) of 0.71 and 0.61. When the cadastral data were revised, the low values were interpreted to mean that the lower the recall, the greater the need to update the existing cadastral data. In the case of Ponova vas, the recall value was less than 0.1, which means that the boundaries did not overlap. In Odranci, 21% of the predicted and cadastral boundaries overlapped. Since the direction of the lines was not a problem, the low recall value (0.21) was mainly due to overly fragmented plots. Overall, automatic methods are faster (once the model is trained) but less accurate than manual methods. For a rapid revision of existing cadastral boundaries, an automatic approach is certainly desirable for many national mapping and cadastral agencies, especially in developed countries.
Numéro de notice : A2022-357 Affiliation des auteurs : non IGN Thématique : GEOMATIQUE/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.3390/ijgi11050298 Date de publication en ligne : 04/05/2022 En ligne : https://doi.org/10.3390/ijgi11050298 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100562
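The precision (correctness) and recall (completeness) figures above are typically computed on rasterized boundary maps with a pixel tolerance. A simplified illustration of that evaluation, not the authors' code (the naive dilation wraps at image borders, which a real implementation would avoid):

```python
import numpy as np

def dilate(mask, r):
    """Naive binary dilation by r pixels (Chebyshev ball); wraps at borders."""
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def boundary_pr(pred, ref, tol=1):
    """Precision/recall of predicted boundary pixels within tol pixels of the
    reference boundary, plus the F1 score used for unbalanced classes."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    precision = (pred & dilate(ref, tol)).sum() / max(pred.sum(), 1)
    recall = (ref & dilate(pred, tol)).sum() / max(ref.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```

A predicted boundary one pixel away from the reference scores perfectly at `tol=1` but zero at `tol=0`, which is why the tolerance choice matters when comparing studies.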
in ISPRS International journal of geo-information > vol 11 n° 5 (May 2022) . - n° 298 [article]

Automatic extraction of building geometries based on centroid clustering and contour analysis on oblique images taken by unmanned aerial vehicles / Leilei Zhang in International journal of geographical information science IJGIS, vol 36 n° 3 (March 2022)
[article]
Titre : Automatic extraction of building geometries based on centroid clustering and contour analysis on oblique images taken by unmanned aerial vehicles Type de document : Article/Communication Auteurs : Leilei Zhang, Auteur ; Guoxin Wang, Auteur ; Weijian Sun, Auteur Année de publication : 2022 Article en page(s) : pp 453 - 475 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de groupement
[Termes IGN] classification barycentrique
[Termes IGN] classification non dirigée
[Termes IGN] détection de contours
[Termes IGN] détection du bâti
[Termes IGN] extraction automatique
[Termes IGN] image captée par drone
[Termes IGN] image oblique
[Termes IGN] modèle numérique de surface
[Termes IGN] orthophotocarte
[Termes IGN] précision géométrique (imagerie)
Résumé : (auteur) This paper introduces a method based on centroid clustering and contour analysis to extract area and height measurements on buildings from the 3D model generated by oblique images. The method comprises three steps: (1) extract the contour plane from the fused data of the digital surface model (DSM) and digital orthophoto map (DOM); (2) identify building contour clusters based on the number of centroids contained in each category determined by mean-shift centroid clustering; (3) remove the mis-identified contours in a given building contour cluster by contour analysis and obtain the geometric information of the building using map algebra. The proposed approach was tested against four datasets. Compared with other methods, the detection achieves good completeness, correctness, and quality, and higher geometric accuracy. The maximum average relative error of building height and area extraction is less than 8%. The method is fast for large-scale collection of building attributes and improves the applicability of oblique photography in GIS. Numéro de notice : A2022-205 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/13658816.2021.1937632 Date de publication en ligne : 14/06/2021 En ligne : https://doi.org/10.1080/13658816.2021.1937632 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100020
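Step (2) of the abstract relies on mean-shift centroid clustering. A compact flat-kernel version as an illustration only (the paper's actual kernel and bandwidth are not given here):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Flat-kernel mean shift: each point iteratively moves to the mean of the
    original points within `bandwidth`; converged points closer than half a
    bandwidth are merged into cluster modes (the centroids)."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(shifted)):
            d = np.linalg.norm(points - shifted[i], axis=1)
            shifted[i] = points[d < bandwidth].mean(axis=0)
    modes = []
    for p in shifted:
        if not any(np.linalg.norm(p - m) < bandwidth / 2 for m in modes):
            modes.append(p)
    return np.array(modes), shifted
```

The number of centroids falling inside each contour is then what the method uses to decide whether a contour cluster belongs to one building or several.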
in International journal of geographical information science IJGIS > vol 36 n° 3 (March 2022) . - pp 453 - 475 [article]

Comparaison des images satellite et aériennes dans le domaine de la détection d’obstacles à la navigation aérienne et de leur mise à jour / Olivier de Joinville in XYZ, n° 170 (mars 2022)
[article]
Titre : Comparaison des images satellite et aériennes dans le domaine de la détection d’obstacles à la navigation aérienne et de leur mise à jour Type de document : Article/Communication Auteurs : Olivier de Joinville, Auteur ; Chloé Marcon, Auteur Année de publication : 2022 Article en page(s) : pp 36 - 44 Note générale : Bibliographie Langues : Français (fre) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] aéroport
[Termes IGN] analyse comparative
[Termes IGN] analyse diachronique
[Termes IGN] BD Topo
[Termes IGN] classification dirigée
[Termes IGN] classification orientée objet
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] classification pixellaire
[Termes IGN] contrôle qualité
[Termes IGN] détection de changement
[Termes IGN] détection du bâti
[Termes IGN] extraction de la végétation
[Termes IGN] image Pléiades-HR
[Termes IGN] image Sentinel-MSI
[Termes IGN] mise à jour de base de données
[Termes IGN] modèle numérique de surface
[Termes IGN] Nice
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] orthoimage
[Termes IGN] plus proche voisin, algorithme du
[Termes IGN] QGIS
[Termes IGN] réalité de terrain
Résumé : (Auteur) The Aeronautical Information Service (SIA) is a service of the DGAC (the French civil aviation authority) that publishes and maintains data on obstacles to air navigation in order to secure flights around aerodromes. The article presents a comparative study of aerial image data (orthoimages) and satellite image data (Pléiades and Sentinel) in two areas: obstacle detection (mainly vegetation and buildings) and obstacle database updating. It emerges that satellite images, owing to their high radiometric and geometric quality, offer slightly greater potential than aerial images for the SIA. Future studies using other optical, LiDAR and radar sensors, together with more exhaustive quality-control procedures, will be needed to confirm this trend. Numéro de notice : A2022-225 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueNat DOI : sans Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100191
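Among this record's index terms is the Normalized Difference Vegetation Index, the standard band ratio used to separate vegetation obstacles from built-up areas in such studies. For reference, a one-function numpy version (the 0.3 threshold is a common heuristic, not a value from the article):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-12)

def vegetation_mask(nir, red, threshold=0.3):
    """Binary vegetation layer: pixels whose NDVI exceeds the threshold."""
    return ndvi(nir, red) > threshold
```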
in XYZ > n° 170 (mars 2022) . - pp 36 - 44 [article]

Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests / Chong Zhang in Remote sensing, vol 14 n° 4 (February-2 2022)
[article]
Titre : Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests Type de document : Article/Communication Auteurs : Chong Zhang, Auteur ; Jiawei Zhou, Auteur ; Huiwen Wang, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 874 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] Chine
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de contours
[Termes IGN] échantillonnage de données
[Termes IGN] entropie
[Termes IGN] estimation quantitative
[Termes IGN] feuillu
[Termes IGN] hauteur des arbres
[Termes IGN] image captée par drone
[Termes IGN] peuplement mélangé
[Termes IGN] Pinophyta
[Termes IGN] segmentation d'image
Résumé : (auteur) High-resolution UAV imagery paired with a convolutional neural network approach offers significant advantages in accurately measuring forestry ecosystems. Despite numerous studies on individual tree crown delineation, species classification, and quantity detection, performing all of these tasks simultaneously has rarely been explored, especially in mixed forests. In this study, we propose a new method for individual tree segmentation and identification based on an improved Mask R-CNN. In the optimized network, the fusion direction in the feature pyramid network is modified from bottom-up to top-down to shorten the feature acquisition path between levels. Meanwhile, a boundary-weighted loss module is introduced into the cross-entropy loss function Lmask to refine the target loss. All geometric parameters associated with canopies (contour, centre of gravity, and area) are ultimately extracted from the mask by a boundary segmentation algorithm. The results showed that F1-score and mAP for coniferous species were higher than 90%, while those for broadleaf species fell between 75% and 85.44%. The producer’s accuracy ranged from 0.8 to 0.95 for coniferous forests and from 0.87 to 0.93 for broadleaf; the user’s accuracy ranged from 0.81 to 0.84 for coniferous and from 0.71 to 0.76 for broadleaf. The total number of trees predicted was 50,041 for the entire study area, with an overall error of 5.11%. The method under study is compared with other networks, including U-net and YOLOv3. Results show that the improved Mask R-CNN has clear advantages in broadleaf canopy segmentation and number detection.
Numéro de notice : A2022-168 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.3390/rs14040874 Date de publication en ligne : 11/02/2022 En ligne : https://doi.org/10.3390/rs14040874 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99793
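The boundary-weighted loss module added to the cross-entropy loss Lmask can be illustrated on a binary mask. A sketch under our own assumptions (the weighting scheme shown is a generic choice, not necessarily the paper's):

```python
import numpy as np

def boundary_weighted_ce(prob, target, weight=5.0):
    """Binary cross-entropy where pixels on the mask boundary get extra weight.
    Boundary = target pixels whose 4-neighbourhood is not uniform (np.roll
    wraps at image borders, which a real implementation would handle)."""
    t = target.astype(float)
    shifts = [np.roll(t, s, axis=a) for s in (1, -1) for a in (0, 1)]
    boundary = np.any([s != t for s in shifts], axis=0)
    w = np.where(boundary, weight, 1.0)        # up-weight boundary pixels
    eps = 1e-12
    ce = -(t * np.log(prob + eps) + (1 - t) * np.log(1 - prob + eps))
    return float((w * ce).sum() / w.sum())
```

Up-weighting the boundary band penalizes blurry crown edges more than interior errors, which is the stated goal of refining the target loss.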
in Remote sensing > vol 14 n° 4 (February-2 2022) . - n° 874 [article]

Building footprint extraction in Yangon city from monocular optical satellite image using deep learning / Hein Thura Aung in Geocarto international, vol 37 n° 3 ([01/02/2022])
A combination of convolutional and graph neural networks for regularized road surface extraction / Jingjing Yan in IEEE Transactions on geoscience and remote sensing, vol 60 n° 2 (February 2022)
PCEDNet: a lightweight neural network for fast and interactive edge detection in 3D point clouds / Chems-Eddine Himeur in ACM Transactions on Graphics, TOG, Vol 41 n° 1 (February 2022)
Three-Dimensional point cloud analysis for building seismic damage information / Fan Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 2 (February 2022)
Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi / Yafei Jing in Remote sensing, vol 14 n° 2 (January-2 2022)
Development of object detectors for satellite images by deep learning / Alissa Kouraeva (2022)
Building detection with convolutional networks trained with transfer learning / Simon Šanca in Geodetski vestnik, vol 65 n° 4 (December 2021 - February 2022)
MSegnet, a practical network for building detection from high spatial resolution images / Bo Yu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 12 (December 2021)
Multi-model estimation of forest canopy closure by using red edge bands based on Sentinel-2 images / Yiying Hua in Forests, vol 12 n° 12 (December 2021)