Descripteur
Termes IGN > mathématiques > statistique mathématique > analyse de données > segmentation > segmentation sémantique
segmentation sémantique. Synonyme(s) : étiquetage sémantique | étiquetage de données
Documents disponibles dans cette catégorie (204)



An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images / Kwanghun Choi in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Titre : An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images Type de document : Article/Communication Auteurs : Kwanghun Choi, Auteur ; Wontaek Lim, Auteur ; Byungwoo Chang, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 165 - 180 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] arbre urbain
[Termes IGN] détection automatique
[Termes IGN] détection d'arbres
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] gestion forestière durable
[Termes IGN] image Streetview
[Termes IGN] inventaire de la végétation
[Termes IGN] segmentation sémantique
[Termes IGN] Séoul
Résumé : (auteur) Tree species and canopy structural profile (‘tree profile’) are among the most critical environmental factors in determining urban ecosystem services such as climate and air quality control from urban trees. To accurately characterize a tree profile, the tree diameter, height, crown width, and height to the lowest live branch must all be measured, which is an expensive and time-consuming procedure. Recent advances in artificial intelligence help to measure the aforementioned tree profile parameters efficiently and accurately. This can be particularly helpful if spatially extensive and accurate street-level images provided by Google (‘streetview’) or Kakao (‘roadview’) are utilized. We focused on street trees in Seoul, the capital city of South Korea, and suggest a novel approach to creating a tree profile and inventory based on deep learning algorithms. We classified urban tree species using YOLO (You Only Look Once), one of the most popular deep learning object detection algorithms, which provides an uncomplicated method of creating datasets with custom classes. We further utilized a semantic segmentation algorithm and graphical analysis to estimate tree profile parameters by determining the relative location of the interface between tree and ground surface. We evaluated the performance of the model by comparing the estimated tree heights, diameters, and locations with field measurements as ground truth. The results are promising and demonstrate the potential of the method for creating an urban street tree profile inventory. In terms of tree species classification, the method achieved a mean average precision (mAP) of 0.564. When we used ideal tree images, the method reported normalized root mean squared errors (NRMSE) for the tree height, diameter at breast height (DBH), and distance from the camera to the trees of 0.24, 0.44, and 0.41, respectively.
Numéro de notice : A2022-503 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.06.004 Date de publication en ligne : 22/06/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.06.004 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101001
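The profile-estimation step described above hinges on locating the interface between tree and ground surface in the image. As a rough illustration of the underlying geometry only (not the authors' actual algorithm; the function and parameter names are hypothetical), a level pinhole camera at a known height lets one recover the range to a trunk and the tree height from the pixel rows of the ground contact and the crown top:

```python
def tree_profile_from_pixels(y_ground, y_top, cam_height, focal_px, y_horizon):
    """Distance to a tree and its total height from streetview pixel rows.

    Assumes a level pinhole camera: pixel rows grow downward, the horizon
    maps to row y_horizon, the trunk meets the ground at row y_ground
    (below the horizon) and the crown top sits at row y_top (above it).
    """
    # Ground-plane intersection gives the range to the trunk base.
    distance = cam_height * focal_px / (y_ground - y_horizon)
    # Crown-top height above the camera, plus the camera's own height.
    height = cam_height + distance * (y_horizon - y_top) / focal_px
    return distance, height
```

For example, a camera 2.5 m above ground with a 1000 px focal length and the horizon at row 500, a trunk base at row 700 and a crown top at row 100 yield a 12.5 m range and a 7.5 m tree.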
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp 165 - 180 [article]

Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
[article]
Titre : Transfer learning from citizen science photographs enables plant species identification in UAV imagery Type de document : Article/Communication Auteurs : Salim Soltani, Auteur ; Hannes Feilhauer, Auteur ; Robbert Duker, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 100016 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] base de données naturalistes
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] distribution spatiale
[Termes IGN] données localisées des bénévoles
[Termes IGN] espèce végétale
[Termes IGN] filtrage de la végétation
[Termes IGN] identification de plantes
[Termes IGN] image captée par drone
[Termes IGN] orthoimage couleur
[Termes IGN] science citoyenne
[Termes IGN] segmentation sémantique
Résumé : (auteur) Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular with data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data holds great potential for application to bird’s-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We investigate, first, whether we can use crowd-sourced plant photographs for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Second, we test whether the predictive performance can be increased by a priori selecting photographs that share a more similar perspective to the remote sensing data. We used two case studies to test our proposed approach, with multiple RGB orthoimages acquired from UAVs and the target plant species Fallopia japonica and Portulacaria afra, respectively.
Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties further increased the predictive performance. This study demonstrates that citizen science data can effectively anticipate a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications. Numéro de notice : A2022-488 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.1016/j.ophoto.2022.100016 Date de publication en ligne : 23/05/2022 En ligne : https://doi.org/10.1016/j.ophoto.2022.100016 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100956
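The second research question, a priori selecting photographs whose perspective resembles the remote sensing data, amounts to a metadata filter over the crowd-sourced records. The sketch below is purely illustrative: the `pitch_deg` field is hypothetical (citizen science records do not ship a standard view-angle attribute), and this is not the authors' code:

```python
def filter_by_perspective(records, max_pitch_deg=30.0):
    """Keep photographs taken looking steeply downward, i.e. closest to
    the bird's-eye perspective of a UAV orthoimage.

    `pitch_deg` is a hypothetical per-photo field (0 = straight down);
    photos without it are discarded as having an unknown perspective.
    """
    return [r for r in records
            if r.get("pitch_deg") is not None and r["pitch_deg"] <= max_pitch_deg]
```

Training one model on the full photo set and one on the filtered subset is then what allows the two research questions to be answered side by side.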
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 5 (August 2022) . - n° 100016 [article]

A lightweight network with attention decoder for real-time semantic segmentation / Kang Wang in The Visual Computer, vol 38 n° 7 (July 2022)
[article]
Titre : A lightweight network with attention decoder for real-time semantic segmentation Type de document : Article/Communication Auteurs : Kang Wang, Auteur ; Jinfu Yang, Auteur ; Shuai Yuan, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 2329 - 2339 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] jeu de données
[Termes IGN] précision
[Termes IGN] segmentation sémantique
[Termes IGN] temps réel
[Termes IGN] vitesse de traitement
Résumé : (auteur) As an important task in scene understanding, semantic segmentation requires a large amount of computation to achieve high performance. In recent years, with the rise of autonomous systems, it has become crucial to strike a trade-off between accuracy and speed. In this paper, we propose a novel asymmetric encoder–decoder network structure to address this problem. In the encoder, we design a Separable Asymmetric Module, which combines depth-wise separable asymmetric convolution with dilated convolution to greatly reduce the computation cost while maintaining accuracy. In the decoder, an attention mechanism is used to further improve segmentation performance. Experimental results on the CityScapes and CamVid datasets show that the proposed method achieves a better balance between segmentation precision and speed than state-of-the-art semantic segmentation methods. Specifically, our model obtains a mean IoU of 72.5% and 66.3% on the CityScapes and CamVid test datasets, respectively, with less than 1M parameters. Numéro de notice : A2022-508 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1007/s00371-021-02115-4 Date de publication en ligne : 07/05/2021 En ligne : https://doi.org/10.1007/s00371-021-02115-4 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101041
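The saving from a depth-wise separable asymmetric convolution can be illustrated by counting weights: a standard k x k layer is replaced by depth-wise k x 1 and 1 x k filters plus a 1 x 1 point-wise mixing layer. A back-of-the-envelope sketch (layer shapes are assumed for illustration, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_asymmetric_params(k, c_in, c_out):
    """Depth-wise k x 1 and 1 x k convolutions followed by a 1 x 1
    point-wise convolution (biases ignored)."""
    depthwise = 2 * k * c_in   # one k x 1 and one 1 x k filter per channel
    pointwise = c_in * c_out   # the 1 x 1 convolution mixes channels
    return depthwise + pointwise
```

For a 3 x 3 layer with 64 input and 64 output channels this is 36,864 weights versus 4,480, roughly an 8x reduction, which is the kind of saving that keeps the whole model under 1M parameters.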
in The Visual Computer > vol 38 n° 7 (July 2022) . - pp 2329 - 2339 [article]

Street-view imagery guided street furniture inventory from mobile laser scanning point clouds / Yuzhou Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
[article]
Titre : Street-view imagery guided street furniture inventory from mobile laser scanning point clouds Type de document : Article/Communication Auteurs : Yuzhou Zhou, Auteur ; Xu Han, Auteur ; Mingjun Peng, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 63 - 77 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] image Streetview
[Termes IGN] instance
[Termes IGN] inventaire
[Termes IGN] jeu de données localisées
[Termes IGN] masque
[Termes IGN] mobilier urbain
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] séparateur à vaste marge
[Termes IGN] Shanghai (Chine)
[Termes IGN] Wuhan (Chine)
Résumé : (auteur) An outdated or sketchy inventory of street furniture may misguide planners on the renovation and upgrade of transportation infrastructure, thus posing potential threats to traffic safety. Previous studies have used point clouds or street-view imagery (SVI) for street furniture inventory, but a gap remains in balancing semantic richness, localization accuracy and working efficiency. This paper therefore proposes an effective pipeline that combines SVI and point clouds for the inventory of street furniture. The proposed pipeline encompasses three steps: (1) off-the-shelf street furniture detection models are applied to SVI to generate two-dimensional (2D) proposals, and three-dimensional (3D) point cloud frustums are cropped accordingly; (2) an instance mask and an instance 3D bounding box are predicted for each frustum using a multi-task neural network; (3) frustums from adjacent perspectives are associated and fused via multi-object tracking, after which object-centric instance segmentation outputs the final street furniture with 3D locations and semantic labels. This pipeline was validated on datasets collected in Shanghai and Wuhan, producing a component-level street furniture inventory of nine classes. The instance-level mean recall and precision reach 86.4% and 80.9% in Shanghai and 83.2% and 87.8% in Wuhan, and the point-level mean recall, precision and weighted coverage all exceed 73.7%. Numéro de notice : A2022-403 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1016/j.isprsjprs.2022.04.023 Date de publication en ligne : 12/05/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.04.023 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100711
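Step (1) of the pipeline, cropping a 3D point cloud frustum from a 2D detection box, amounts to keeping the points whose pinhole projection lands inside the box. A minimal sketch under an assumed camera model (the function name, focal length and principal point are illustrative, not from the paper):

```python
def crop_frustum(points, box, focal_px, cx, cy):
    """Return the 3D points whose pinhole projection falls inside a 2D
    detection box (u_min, v_min, u_max, v_max), i.e. the box's frustum.

    Camera frame: x right, y down, z forward; points behind the camera
    (z <= 0) cannot appear in the image and are skipped.
    """
    u_min, v_min, u_max, v_max = box
    frustum = []
    for x, y, z in points:
        if z <= 0:
            continue
        u = focal_px * x / z + cx  # image column of the projected point
        v = focal_px * y / z + cy  # image row of the projected point
        if u_min <= u <= u_max and v_min <= v <= v_max:
            frustum.append((x, y, z))
    return frustum
```

Each cropped frustum is then a small, detection-conditioned point set that the multi-task network of step (2) can segment and box far more cheaply than the full scene.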
in ISPRS Journal of photogrammetry and remote sensing > vol 189 (July 2022) . - pp 63 - 77 [article]

Exemplaires (1) : code-barres 081-2022071, cote SL, support Revue, localisation Centre de documentation, section Revues en salle, disponibilité Disponible

Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)
[article]
Titre : Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area Type de document : Article/Communication Auteurs : Siming Yin, Auteur ; Xian Guo, Auteur ; Jie Jiang, Auteur Année de publication : 2022 Article en page(s) : n° 326 Note générale : résumé Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image Streetview
[Termes IGN] paysage urbain
[Termes IGN] Pékin (Chine)
[Termes IGN] segmentation sémantique
[Termes IGN] site historique
Résumé : (auteur) Accurate extraction of urban landscape features in the historic districts of China is an essential task for the protection of cultural and historical heritage. In recent years, deep learning (DL)-based methods have made substantial progress in landscape feature extraction. However, the lack of annotated data and the complex scenarios inside alleyways limit the performance of the available DL-based methods when extracting landscape features. To deal with this problem, we built a small yet comprehensive history-core street view (HCSV) dataset and propose a polarized attention-based landscape feature segmentation network (PALESNet) in this article. The polarized self-attention block is employed in PALESNet to discriminate each landscape feature in various situations, whereas the atrous spatial pyramid pooling (ASPP) block is utilized to capture multi-scale features. In addition, a transfer learning module was introduced to overcome the shortage of labeled data and improve the network's learning capability in historic districts. Compared to other state-of-the-art methods, our network achieved the highest accuracy in the case study of the Beijing Core Area, with an mIoU of 63.7% on the HCSV dataset, and could thus provide sufficient and accurate data for further protection and renewal in Chinese historic districts. Numéro de notice : A2022-410 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.3390/ijgi11060326 Date de publication en ligne : 28/05/2022 En ligne : https://doi.org/10.3390/ijgi11060326 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100760
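The ASPP block captures multi-scale features by running parallel 3 x 3 convolutions at different dilation rates, which enlarges the effective kernel extent without adding weights. A small sketch of the arithmetic, using the rates popularized by DeepLab (the paper's exact rates are not given in the abstract):

```python
def effective_kernel(k, dilation):
    """Spatial extent covered by a k x k convolution at the given
    dilation rate: k + (k - 1) * (dilation - 1)."""
    return k + (k - 1) * (dilation - 1)

# Parallel ASPP branches: identical 3 x 3 weight counts, growing extents.
aspp_rates = [1, 6, 12, 18]  # illustrative rates, as in DeepLab
branch_extents = [effective_kernel(3, d) for d in aspp_rates]
```

With these rates the four branches cover 3-, 13-, 25- and 37-pixel extents, which is what lets a single block mix fine alleyway detail with street-scale context.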
in ISPRS International journal of geo-information > vol 11 n° 6 (June 2022) . - n° 326 [article]

Large-scale automatic identification of urban vacant land using semantic segmentation of high-resolution remote sensing images / Lingdong Mao in Landscape and Urban Planning, vol 222 (June 2022)
PermalinkAutomatic training data generation in deep learning-aided semantic segmentation of heritage buildings / Arnadi Murtiyoso in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
PermalinkEffect of label noise in semantic segmentation of high resolution aerial images and height data / Arabinda Maiti in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
PermalinkLearning from the past: crowd-driven active transfer learning for semantic segmentation of multi-temporal 3D point clouds / Michael Kölle in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
PermalinkRailway lidar semantic segmentation with axially symmetrical convolutional learning / Antoine Manier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
PermalinkSemantic segmentation of urban textured meshes through point sampling / Grégoire Grzeczkowicz in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
PermalinkWeakly supervised semantic segmentation of airborne laser scanning point clouds / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
PermalinkAssessing surface drainage conditions at the street and neighborhood scale: A computer vision and flow direction method applied to lidar data / Cheng-Chun Lee in Computers, Environment and Urban Systems, vol 93 (April 2022)
PermalinkDetermination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
PermalinkExploring the association between street built environment and street vitality using deep learning methods / Yunqin Li in Sustainable Cities and Society, vol 79 (April 2022)
Permalink