Descriptor
Documents available in this category (502)
An improved multi-task pointwise network for segmentation of building roofs in airborne laser scanning point clouds / Chaoquan Zhang in Photogrammetric record, vol 37 n° 179 (September 2022)
[article]
Title: An improved multi-task pointwise network for segmentation of building roofs in airborne laser scanning point clouds
Document type: Article/Communication
Authors: Chaoquan Zhang, Author; Hongchao Fan, Author
Publication year: 2022
Pagination: pp 260 - 284
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetry
[IGN terms] cluster analysis
[IGN terms] deep learning
[IGN terms] centroid-based classification
[IGN terms] classification by recurrent neural network
[IGN terms] training data (machine learning)
[IGN terms] data fusion
[IGN terms] Norway
[IGN terms] RANSAC (algorithm)
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] roof
Abstract: (author) Roof plane segmentation is an essential step in the process of 3D building reconstruction from airborne laser scanning (ALS) point clouds. Existing approaches either rely on human intervention to select appropriate input parameters for different datasets, or are neither automatic nor efficient. To tackle these issues, an improved multi-task pointwise network is proposed to simultaneously segment instances (that is, individual roof planes) and semantics (that is, groups of roof planes with similar geometric shapes) in point clouds. PointNet++ is used as a backbone network to extract robust features in the first step. Features from the semantic branch are then added to the instance branch to facilitate the learning of instance embeddings. After that, a feature fusion module is added to the semantic branch to acquire more discriminative features from the backbone network. To increase the accuracy of semantic predictions, the fused semantic features of points belonging to the same instance are aggregated together. Finally, a mean-shift clustering algorithm is employed on the instance embeddings to produce the instance predictions. Furthermore, a new roof dataset (called RoofNTNU) is established by taking ALS point clouds as training data for automatic and more general segmentation. Experiments on the new roof dataset show that the method achieves promising segmentation results: a mean precision (mPrec) of 96.2% for the instance segmentation task and a mean accuracy (mAcc) of 94.4% for the semantic segmentation task.
Record number: A2022-936
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12420
Online publication date: 13/07/2022
Online: https://doi.org/10.1111/phor.12420
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102682

in Photogrammetric record > vol 37 n° 179 (September 2022) . - pp 260 - 284 [article]
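A minimal sketch (not the authors' code) of the final step the abstract describes: clustering per-point instance embeddings with mean shift to obtain individual roof planes. The embedding dimensionality and bandwidth below are illustrative assumptions.

```python
# Sketch of mean-shift clustering on learned instance embeddings, assuming
# a (N, D) float array of per-point embeddings; values here are synthetic.
import numpy as np
from sklearn.cluster import MeanShift

def instances_from_embeddings(embeddings: np.ndarray, bandwidth: float = 0.6) -> np.ndarray:
    """Cluster per-point embeddings (N x D) into instance labels (N,)."""
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    return ms.fit_predict(embeddings)

# Toy usage: 1000 points with 5-dimensional embeddings.
emb = np.random.rand(1000, 5).astype(np.float32)
labels = instances_from_embeddings(emb)
print(f"{labels.max() + 1} roof-plane instances found")
```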

Learning indoor point cloud semantic segmentation from image-level labels / Youcheng Song in The Visual Computer, vol 38 n° 9 (September 2022)
[article]
Title: Learning indoor point cloud semantic segmentation from image-level labels
Document type: Article/Communication
Authors: Youcheng Song, Author; Zhengxing Sun, Author; Qian Li, Author; et al.
Publication year: 2022
Pagination: pp 3253 - 3265
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN terms] supervised learning
[IGN terms] unlabelled training data
[IGN terms] RGB image
[IGN terms] indoor scene
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) The data-hungry nature of deep learning and the high cost of annotating point-level labels make it difficult to apply semantic segmentation methods to indoor point cloud scenes. Exploring how to make point cloud segmentation methods rely less on point-level labels is therefore a promising research topic. In this paper, we introduce a weakly supervised framework for semantic segmentation on indoor point clouds. To reduce the labor cost of data annotation, we use image-level weak labels that only indicate the classes appearing in rendered images of the point clouds. The experiments validate the effectiveness and scalability of our framework. Our segmentation results on both the ScanNet and S3DIS datasets outperform the state-of-the-art method using a similar level of weak supervision.
Record number: A2022-793
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
DOI: 10.1007/s00371-022-02569-0
Online publication date: 02/07/2022
Online: https://doi.org/10.1007/s00371-022-02569-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101917

in The Visual Computer > vol 38 n° 9 (September 2022) . - pp 3253 - 3265 [article]
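A minimal sketch of the weak-supervision signal the abstract describes: only image-level class-presence labels supervise the network. Max-pooling point logits over a rendered view and applying a multi-label BCE loss is one common realisation; the paper's exact formulation may differ, and all names below are illustrative.

```python
# Sketch, assuming a PyTorch setup: the points visible in one rendered view
# carry per-class logits, and supervision is only a multi-hot "which classes
# appear in this image" vector.
import torch
import torch.nn.functional as F

def image_level_loss(point_logits: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
    """point_logits: (N, C) logits of the points visible in one rendered view.
    present: (C,) multi-hot vector of classes appearing in that view."""
    view_logits, _ = point_logits.max(dim=0)  # (C,) strongest evidence per class
    return F.binary_cross_entropy_with_logits(view_logits, present)

# Toy usage: 2048 visible points, 13 classes (e.g., the S3DIS label set).
present = torch.zeros(13).scatter_(0, torch.tensor([0, 4, 7]), 1.0)
loss = image_level_loss(torch.randn(2048, 13), present)
```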

Structured binary neural networks for image recognition / Bohan Zhuang in International journal of computer vision, vol 130 n° 9 (September 2022)
[article]
Title: Structured binary neural networks for image recognition
Document type: Article/Communication
Authors: Bohan Zhuang, Author; Chunhua Shen, Author; Mingkui Tan, Author; et al.
Publication year: 2022
Pagination: pp 2081 - 2102
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] decomposition
[IGN terms] object detection
[IGN terms] implementation (computing)
[IGN terms] binary logic
[IGN terms] semantic segmentation
Abstract: (author) In this paper, we propose to train binarized convolutional neural networks (CNNs), which are of significant importance for deploying deep learning on mobile devices with limited power capacity and computing resources. Previous works on quantizing CNNs often seek to approximate the floating-point information of weights and/or activations using a set of discrete values. Such methods, termed value approximation here, are typically built on the same network architecture as the full-precision counterpart. Instead, we take a new "structured approximation" view of network quantization: it is possible and valuable to exploit flexible architecture transformation when learning low-bit networks, which can achieve even better performance than the original networks in some cases. In particular, we propose a "group decomposition" strategy, termed GroupNet, which divides a network into desired groups. Interestingly, with our GroupNet strategy, each full-precision group can be effectively reconstructed by aggregating a set of homogeneous binary branches. We also propose to learn effective connections among groups to improve the representation capability. To improve the model capacity, we propose to dynamically execute sparse binary branches conditioned on input features while preserving the computational cost. More importantly, the proposed GroupNet shows strong flexibility for several vision tasks. For instance, we extend GroupNet to accurate semantic segmentation by embedding rich context into the binary structure. GroupNet also shows strong performance on object detection. Experiments on image classification, semantic segmentation, and object detection tasks demonstrate the superior performance of the proposed methods over various quantized networks in the literature. Moreover, the speedup and runtime memory cost are evaluated against related quantization strategies on GPU platforms, which serves as a strong benchmark for further research.
Record number: A2022-637
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-022-01638-0
Online publication date: 22/06/2022
Online: https://doi.org/10.1007/s11263-022-01638-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101443

in International journal of computer vision > vol 130 n° 9 (September 2022) . - pp 2081 - 2102 [article]
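A minimal sketch of the "group decomposition" idea from the abstract: one full-precision convolutional group is approximated by aggregating several homogeneous binary branches. The straight-through-estimator binarizer, branch count, and learned per-branch scales are standard choices assumed here, not details taken from the paper.

```python
# Sketch of a group reconstructed as a sum of binary-weight branches.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)            # forward: hard weight binarization

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                 # backward: straight-through estimator

class BinaryBranchGroup(nn.Module):
    """Approximate one full-precision conv group with K binary branches."""
    def __init__(self, channels: int, branches: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1, bias=False)
                                   for _ in range(branches))
        self.scales = nn.Parameter(torch.ones(branches))  # learned per-branch scaling

    def forward(self, x):
        out = 0
        for conv, s in zip(self.convs, self.scales):
            w_bin = BinarizeSTE.apply(conv.weight)        # binarized weights
            out = out + s * nn.functional.conv2d(x, w_bin, padding=1)
        return out
```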

3D semantic scene completion: A survey / Luis Roldão in International journal of computer vision, vol 130 n° 8 (August 2022)
[article]
Title: 3D semantic scene completion: A survey
Document type: Article/Communication
Authors: Luis Roldão, Author; Raoul de Charette, Author; Anne Verroust-Blondet, Author
Publication year: 2022
Pagination: pp 1978 - 2005
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] classification by convolutional neural network
[IGN terms] lidar data
[IGN terms] kinetic depth effect
[IGN terms] RGB image
[IGN terms] image reconstruction
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] voxel
Abstract: (author) Semantic scene completion (SSC) aims to jointly estimate the complete geometry and semantics of a scene from partial sparse input. In recent years, following the multiplication of large-scale 3D datasets, SSC has gained significant momentum in the research community because it holds unresolved challenges: specifically, the difficulty of SSC lies in the ambiguous completion of large unobserved areas and the weak supervision signal of the ground truth. This has led to a substantially increasing number of papers on the matter. This survey aims to identify, compare, and analyze existing techniques, providing a critical analysis of the SSC literature on both methods and datasets. Throughout the paper, we provide an in-depth analysis of the existing works, covering all choices made by the authors while highlighting the remaining avenues of research. The SSC performance of the state of the art (SoA) on the most popular datasets is also evaluated and analyzed.
Record number: A2022-593
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-021-01504-5
Online publication date: 06/06/2022
Online: http://dx.doi.org/10.1007/s11263-021-01504-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101296

in International journal of computer vision > vol 130 n° 8 (August 2022) . - pp 1978 - 2005 [article]
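A minimal sketch of the task shape the survey covers: from a partial occupancy voxel grid, predict a label (free space or one of C semantic classes) for every voxel, observed or not. The toy network and class count are purely illustrative; surveyed methods are far deeper.

```python
# Sketch of SSC input/output shapes with a trivially small 3D conv net.
import torch
import torch.nn as nn

C = 12                                             # illustrative class count
net = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, C + 1, 1),                       # C classes + 1 "empty" class
)

partial = (torch.rand(1, 1, 64, 64, 64) > 0.95).float()  # sparse occupancy input
logits = net(partial)                              # (1, C+1, 64, 64, 64) dense output
pred = logits.argmax(dim=1)                        # complete semantic volume
```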

An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images / Kwanghun Choi in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Title: An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images
Document type: Article/Communication
Authors: Kwanghun Choi, Author; Wontaek LIM, Author; Byungwoo Chang, Author; et al.
Publication year: 2022
Pagination: pp 165 - 180
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] urban tree
[IGN terms] automatic detection
[IGN terms] tree detection
[IGN terms] diameter at breast height
[IGN terms] sustainable forest management
[IGN terms] Streetview image
[IGN terms] vegetation inventory
[IGN terms] semantic segmentation
[IGN terms] Seoul
Abstract: (author) Tree species and canopy structural profile ("tree profile") are among the most critical environmental factors in determining urban ecosystem services such as climate and air quality control from urban trees. To accurately characterize a tree profile, the tree diameter, height, crown width, and height to the lowest live branch must all be measured, which is an expensive and time-consuming procedure. Recent advances in artificial intelligence help measure the aforementioned tree profile parameters efficiently and accurately. This can be particularly helpful if spatially extensive and accurate street-level images provided by Google ("streetview") or Kakao ("roadview") are utilized. We focused on street trees in Seoul, the capital city of South Korea, and suggest a novel approach to creating a tree profile and inventory based on deep learning algorithms. We classified urban tree species using YOLO (You Only Look Once), one of the most popular deep learning object detection algorithms, which provides an uncomplicated method of creating datasets with custom classes. We further utilized a semantic segmentation algorithm and graphical analysis to estimate tree profile parameters by determining the relative location of the interface between tree and ground surface. We evaluated the performance of the model by comparing the estimated tree heights, diameters, and locations with field measurements as ground truth. The results are promising and demonstrate the potential of the method for creating urban street tree profile inventories. In terms of tree species classification, the method achieved a mean average precision (mAP) of 0.564. When ideal tree images were used, the method achieved normalized root mean squared errors (NRMSE) of 0.24, 0.44, and 0.41 for tree height, diameter at breast height (DBH), and camera-to-tree distance, respectively.
Record number: A2022-503
Authors' affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.06.004
Online publication date: 22/06/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.06.004
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101001

in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp 165 - 180 [article]
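A minimal sketch of the NRMSE metric the abstract reports for tree height, DBH, and camera-to-tree distance. Normalisation by the range of the field measurements is assumed here; the paper may use a different convention (e.g., the mean).

```python
# Sketch of range-normalised RMSE between estimates and field measurements.
import numpy as np

def nrmse(estimated: np.ndarray, measured: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))
    return float(rmse / (measured.max() - measured.min()))

# Toy usage: estimated vs field-measured tree heights in metres.
print(nrmse(np.array([8.2, 11.5, 6.9]), np.array([8.0, 12.0, 7.4])))
```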

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2022081 | SL | Journal | Documentation centre | Journals room | Available
081-2022083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2022082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Filtering airborne LIDAR data by using fully convolutional networks / Abdullah Varlik in Survey review, vol 55 n° 388 (January 2023) Permalink
STICC: a multivariate spatial clustering method for repeated geographic pattern discovery with consideration of spatial contiguity / Yuhao Kang in International journal of geographical information science IJGIS, vol 36 n° 8 (August 2022) Permalink
Tracking annual dynamics of mangrove forests in mangrove National Nature Reserves of China based on time series Sentinel-2 imagery during 2016–2020 / Rong Zhang in International journal of applied Earth observation and geoinformation, vol 112 (August 2022) Permalink
Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022) Permalink
Segmentation and sampling method for complex polyline generalization based on a generative adversarial network / Jiawei Du in Geocarto international, vol 37 n° 14 ([20/07/2022]) Permalink
GNSSseg, a statistical method for the segmentation of daily GNSS IWV time series / Annarosa Quarello in Remote sensing, vol 14 n° 14 (July-2 2022) Permalink
A lightweight network with attention decoder for real-time semantic segmentation / Kang Wang in The Visual Computer, vol 38 n° 7 (July 2022) Permalink
Street-view imagery guided street furniture inventory from mobile laser scanning point clouds / Yuzhou Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022) Permalink
Context-aware network for semantic segmentation toward large-scale point clouds in urban environments / Chun Liu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022) Permalink
Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022) Permalink