Descripteur
Documents disponibles dans cette catégorie (136)
Utility-pole detection based on interwoven column generation from terrestrial mobile Laser scanner data / Siamak Talebi Nahr in Photogrammetric record, Vol 36 n° 176 (December 2021)
[article]
Titre : Utility-pole detection based on interwoven column generation from terrestrial mobile Laser scanner data
Type de document : Article/Communication
Auteurs : Siamak Talebi Nahr, Auteur ; Mohammad Saadatseresht, Auteur
Année de publication : 2021
Article en page(s) : pp 402 - 424
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] équipement collectif
[Termes IGN] exactitude des données
[Termes IGN] exhaustivité des données
[Termes IGN] lidar mobile
[Termes IGN] mur
[Termes IGN] objet géographique complexe
[Termes IGN] objet géographique urbain
[Termes IGN] partitionnement par bloc
[Termes IGN] qualité des données
[Termes IGN] réseau électrique
[Termes IGN] scène urbaine
Résumé : (Auteur) Mobile lidar scanning is a recent technology used to map street scenes rapidly. Among street objects, utility-poles are among the most critical for energy companies to monitor regularly over time. This paper presents a novel approach to detecting utility-poles from mobile lidar data in complex city scenes. After removing ground points, the scene is gridded into blocks by a shared-partitioning algorithm. Next, an interwoven column generation algorithm is used to create columns. Finally, each of these columns is classified as a utility-pole or not. The proposed algorithm is tested on two test areas, achieving Completeness, Correctness and Quality of 92.8%, 97.5% and 90.6% in Area 1, and 92.8%, 92.2% and 86.1% in Area 2. The total number of utility-poles in both areas was 265. The algorithm shows promising results for utility-pole detection in complex city scenes with attached walls.
Numéro de notice : A2021-916
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/URBANISME
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1111/phor.12394
Date de publication en ligne : 10/12/2021
En ligne : https://doi.org/10.1111/phor.12394
Format de la ressource électronique : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99331
in Photogrammetric record > Vol 36 n° 176 (December 2021) . - pp 402 - 424
[article]
Metaheuristics for the positioning of 3D objects based on image analysis of complementary 2D photographs / Arnaud Flori in Machine Vision and Applications, vol 32 n° 5 (September 2021)
[article]
Titre : Metaheuristics for the positioning of 3D objects based on image analysis of complementary 2D photographs
Type de document : Article/Communication
Auteurs : Arnaud Flori, Auteur ; Hamouche Oulhadj, Auteur ; Patrick Siarry, Auteur
Année de publication : 2021
Article en page(s) : n° 105
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme du recuit simulé
[Termes IGN] analyse d'image orientée objet
[Termes IGN] contour
[Termes IGN] image 2D
[Termes IGN] modélisation 3D
[Termes IGN] optimisation par essaim de particules
[Termes IGN] scène 3D
[Termes IGN] triangulation
Résumé : (auteur) Today, advances in 3D modeling make it possible to reproduce objects, animals, humans and even entire scenes identically. Applications include video games, virtual and augmented reality, and cinema. In this article, we propose a new method to build a 3D scene directly from several complementary photographs. The positions of the objects for which we already have a 3D model are determined by triangulation, using information extracted from the photographs, such as the outlines of the objects in the images. Each pixel of the images is converted into a value giving its distance to the nearest outline. The 3D model of the objects is then projected onto the converted images, and the triangulation is done using a cost function that gives the distance of each projection of the objects to its respective outline. A projection is considered perfect when its distance to its outline is zero, which means the cost function also gives a score of zero. We propose to solve this optimization problem with two algorithms, namely simulated annealing (SA) and quantum particle swarm optimization (QUAPSO).
Numéro de notice : A2021-868
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1007/s00138-021-01229-y
Date de publication en ligne : 03/08/2021
En ligne : https://doi.org/10.1007/s00138-021-01229-y
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99101
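The cost function summarized in this abstract (each pixel holds its distance to the nearest outline; a candidate pose is scored by summing those distances under the projected model contour) can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation; the function names and the 4-connected BFS distance approximation are ours.

```python
from collections import deque

def distance_map(edge_pixels, h, w):
    """Breadth-first chamfer-style map: 4-connected step distance from
    every pixel of an h x w image to the nearest outline pixel."""
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for (y, x) in edge_pixels:
        dist[y][x] = 0
        queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def projection_cost(dist, projected_contour):
    """Score of one candidate pose: summed distance of the projected
    model-contour pixels to the nearest image outline (0 = perfect fit)."""
    return sum(dist[y][x] for (y, x) in projected_contour)
```

A metaheuristic such as SA or a particle swarm would then search the object pose space for the candidate minimizing `projection_cost` over all images.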
in Machine Vision and Applications > vol 32 n° 5 (September 2021) . - n° 105
[article]
Target-based automated matching of multiple terrestrial laser scans for complex forest scenes / Xuming Ge in ISPRS Journal of photogrammetry and remote sensing, vol 179 (September 2021)
[article]
Titre : Target-based automated matching of multiple terrestrial laser scans for complex forest scenes
Type de document : Article/Communication
Auteurs : Xuming Ge, Auteur ; Qing Zhu, Auteur
Année de publication : 2021
Article en page(s) : pp 1 - 13
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] appariement de données localisées
[Termes IGN] biomasse aérienne
[Termes IGN] biomasse forestière
[Termes IGN] densité de la végétation
[Termes IGN] détection d'arbres
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] inventaire forestier (techniques et méthodes)
[Termes IGN] scène forestière
[Termes IGN] semis de points
Résumé : (Auteur) Terrestrial laser scanners are widely used to derive unbiased and non-destructive estimates of the vertical distribution of the plant area index and plant area volume density at plot-level scales, as well as the above-ground biomass, height, and diameter at breast height of individual trees. Multiple scans are often employed to capture and register data so that all of the stems can be detected and their complete forms can be analyzed. Researchers have traditionally preferred target-less strategies to register scans because of their low cost and convenience. However, in complex forest scenes, even state-of-the-art approaches cannot guarantee the success of any pairwise registration. In this study, we present an automated target-based processing approach for the registration of unordered scans in complex forest scenes. In contrast to previous studies, the proposed registration method automatically detects the artificial targets and builds a geometric network to judge their connectivity. A pose graph is then exploited to combine these data with the corresponding pairwise transformation, and then the scans are integrated into a unified coordinate system. This method is more robust and efficient than target-less approaches because it is independent of the characteristics of individual trees and does not require ground information. In an experimental scenario, we use an extremely complex wild bamboo forest scene to evaluate the performance of the proposed approach in terms of robustness, accuracy, and efficiency.
Numéro de notice : A2021-573
Affiliation des auteurs : non IGN
Thématique : FORET/IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2021.06.019
Date de publication en ligne : 15/07/2021
En ligne : https://doi.org/10.1016/j.isprsjprs.2021.06.019
Format de la ressource électronique : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98173
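The pose-graph step in this abstract (combining pairwise transformations so all scans land in one unified coordinate system) can be sketched as follows. This is our own simplified illustration, not the authors' code: it assumes directed edges only and does no loop-closure optimization, and every name is invented for the example.

```python
from collections import deque

def matmul4(a, b):
    """Product of two 4x4 homogeneous transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def chain_poses(pairwise, ref=0):
    """Propagate pairwise rigid transforms over a pose graph.

    pairwise maps (i, j) to T_ij, the 4x4 transform taking scan j's
    coordinates into scan i's frame.  A breadth-first traversal from the
    reference scan composes transforms so that every reachable scan gets
    a transform into the reference frame."""
    identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    global_t = {ref: identity}
    adjacency = {}
    for (i, j), t_ij in pairwise.items():
        adjacency.setdefault(i, []).append((j, t_ij))
    queue = deque([ref])
    while queue:
        i = queue.popleft()
        for j, t_ij in adjacency.get(i, []):
            if j not in global_t:
                global_t[j] = matmul4(global_t[i], t_ij)
                queue.append(j)
    return global_t
```

In practice the pairwise transforms would come from the detected targets, and a proper pose-graph solver would also distribute accumulated error over graph cycles.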
in ISPRS Journal of photogrammetry and remote sensing > vol 179 (September 2021) . - pp 1 - 13
[article]
Exemplaires (3)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2021091 | SL | Revue | Centre de documentation | Revues en salle | Disponible
081-2021093 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2021092 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt
Single annotated pixel based weakly supervised semantic segmentation under driving scenes / Xi Li in Pattern recognition, vol 116 (August 2021)
[article]
Titre : Single annotated pixel based weakly supervised semantic segmentation under driving scenes
Type de document : Article/Communication
Auteurs : Xi Li, Auteur ; Huimin Ma, Auteur ; Sheng Yi, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : n° 107979
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
Résumé : (auteur) Semantic segmentation tasks under weakly supervised conditions have been put forward to achieve a lightweight labeling process. For simple images that include only a few categories, research based on image-level annotations has achieved acceptable performance. However, in complex scenes, where an image contains a large number of classes, it becomes challenging to learn visual appearance from image tags; in this case, image-level annotations are not informative enough. We therefore set up a new task in which a single annotated pixel is provided for each category in the whole dataset. Based on this more lightweight and informative condition, a three-step process is built for pseudo-label generation, which progressively implements each class's optimal feature representation, image inference, and context-location based refinement. In particular, since high-level semantics and low-level imaging features have different discriminative abilities for each class under driving scenes, we divide categories into "object" or "scene" and provide different operations for the two types separately. Further, an alternate iterative structure is established to gradually improve segmentation performance, combining CNN-based inter-image common semantic learning with an imaging-prior based intra-image modification process. Experiments on the Cityscapes dataset demonstrate that the proposed method provides a feasible way to solve weakly supervised semantic segmentation tasks under complex driving scenes.
Numéro de notice : A2021-985
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.patcog.2021.107979
En ligne : https://doi.org/10.1016/j.patcog.2021.107979
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101354
in Pattern recognition > vol 116 (August 2021) . - n° 107979
[article]
Semantic-aware label placement for augmented reality in street view / Jianqing Jia in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Titre : Semantic-aware label placement for augmented reality in street view
Type de document : Article/Communication
Auteurs : Jianqing Jia, Auteur ; Semir Elezovikj, Auteur ; Heng Fan, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : pp 1805 - 1819
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] image Streetview
[Termes IGN] information sémantique
[Termes IGN] optimisation (mathématiques)
[Termes IGN] point d'intérêt
[Termes IGN] réalité augmentée
[Termes IGN] saillance
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
Résumé : (auteur) In an augmented reality (AR) application, placing labels so that they are clear and readable without occluding critical information from the real world can be a challenging problem. This paper introduces a label placement technique for AR used in street view scenarios. We propose a semantic-aware, task-specific label placement method that identifies potentially important image regions through a novel feature map, which we refer to as a guidance map. Given an input image, its saliency information, semantic information and a task-specific importance prior are integrated in the guidance map for our labeling task. To learn the task prior, we created a label placement dataset with users' labeling preferences, which we also use for evaluation. Our solution encodes the constraints for placing labels in an optimization problem to obtain the final label layout, and the labels are placed in appropriate positions to reduce the chances of overlaying important real-world objects in street view AR scenarios. The experimental validation clearly shows the benefits of our method over previous solutions in AR street view navigation and similar applications.
Numéro de notice : A2021-542
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1007/s00371-020-01939-w
Date de publication en ligne : 02/08/2020
En ligne : https://doi.org/10.1007/s00371-020-01939-w
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98022
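The placement problem in this abstract (put each label where the guidance map indicates the least important content, without labels overlapping) can be sketched with a greedy stand-in. This is our own simplified illustration, not the authors' optimization; the function name, the greedy strategy, and the infinite-cost blocking trick are all assumptions made for the example.

```python
def place_labels(guidance, label_sizes):
    """Greedy stand-in for the label-layout optimization: each label
    (height, width) goes to the free rectangle covering the least total
    importance in the guidance map; chosen rectangles are then blocked
    with infinite cost so later labels cannot overlap them."""
    inf = float("inf")
    grid = [row[:] for row in guidance]          # working copy to mark up
    h, w = len(grid), len(grid[0])
    placements = []
    for lh, lw in label_sizes:
        best_cost, best_pos = inf, None
        for y in range(h - lh + 1):
            for x in range(w - lw + 1):
                cost = sum(grid[y + i][x + j]
                           for i in range(lh) for j in range(lw))
                if cost < best_cost:
                    best_cost, best_pos = cost, (y, x)
        placements.append(best_pos)
        if best_pos is not None:
            y, x = best_pos
            for i in range(lh):
                for j in range(lw):
                    grid[y + i][x + j] = inf     # block the occupied area
    return placements
```

A joint optimizer over all labels at once, as the paper describes, would generally beat this one-label-at-a-time heuristic, but the cost structure is the same.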
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1805 - 1819
[article]
Spatio-temporal-spectral observation model for urban remote sensing / Zhenfeng Shao in Geo-spatial Information Science, vol 24 n° 3 (July 2021)
Towards efficient indoor/outdoor registration using planar polygons / Rahima Djahel in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
An automatic workflow for orientation of historical images with large radiometric and geometric differences / Ferdinand Maiwald in Photogrammetric record, vol 36 n° 174 (June 2021)
Scene classification of remotely sensed images via densely connected convolutional neural networks and an ensemble classifier / Qimin Cheng in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
3D change detection using adaptive thresholds based on local point cloud density / Dan Liu in ISPRS International journal of geo-information, vol 10 n° 3 (March 2021)
Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021)
3D urban scene understanding by analysis of LiDAR, color and hyperspectral data / David Duque-Arias (2021)
Assessment of sky diffuse irradiance and building reflected irradiance in cast shadows / Manchun Lei (2021)
Cluttering reduction for interactive navigation and visualization of historical Images / Evelyn Paiz-Reyes (2021)