Descriptor
Documents available in this category (20)



Navigation network derivation for QR code-based indoor pedestrian path planning / Jinjin Yan in Transactions in GIS, vol 26 n° 3 (May 2022)
[article]
Title: Navigation network derivation for QR code-based indoor pedestrian path planning
Document type: Article/Communication
Authors: Jinjin Yan; Jinwoo Lee; Sisi Zlatanova; et al.
Publication year: 2022
Pages: pp 1240 - 1255
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Géomatique
[Termes IGN] batiment commercial
[Termes IGN] bâtiment public
[Termes IGN] navigation pédestre
[Termes IGN] noeud
[Termes IGN] point d'intérêt
[Termes IGN] positionnement en intérieur
[Termes IGN] QR code
[Termes IGN] scène intérieure
[Termes IGN] trajet (mobilité)
Abstract: (author) With the development of cities, the indoor structures of contemporary public and commercial buildings are becoming increasingly complex, and the need for indoor navigation has grown accordingly. Among indoor positioning technologies, the quick response (QR) code is a low-cost, easily deployable, flexible, and efficient approach that has been used for indoor positioning and navigation purposes. A navigation network (model) is a precondition for pedestrian navigation path planning. However, no thorough research has investigated the relationship between navigation networks and the locations of QR codes, which can cause ambiguity when deciding which node of the network is closest and should be used for path computation. Specifically, QR codes are generally placed according to preferences or certain specifications, which current navigation network derivation approaches do not take into account. This article presents a navigation network derivation approach that addresses the issue by integrating QR code locations as nodes in the navigation network. The approach is demonstrated on a shopping mall case. The results show that it overcomes the above-mentioned issue for indoor pedestrian path planning based on QR code localization.
Record number: A2022-476
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1111/tgis.12912
Online publication date: 10/04/2022
Online: https://doi.org/10.1111/tgis.12912
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100823
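The abstract above integrates QR-code locations as nodes of the navigation network, so path planning starts exactly where the user scanned. A minimal sketch of that idea on a hypothetical toy network (node names and edge weights are invented for illustration; this is not the authors' implementation):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra shortest path on a weighted adjacency dict."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Backtrack from goal to start through the predecessor map.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy navigation network: corridor nodes n1..n3 plus a QR-code
# location "qr7" inserted as its own node, so the path starts at
# the scanned code with no nearest-node ambiguity.
graph = {
    "qr7": {"n1": 2.0},
    "n1": {"qr7": 2.0, "n2": 5.0},
    "n2": {"n1": 5.0, "n3": 4.0, "shop": 3.0},
    "n3": {"n2": 4.0},
    "shop": {"n2": 3.0},
}
path, length = shortest_path(graph, "qr7", "shop")
print(path, length)  # ['qr7', 'n1', 'n2', 'shop'] 10.0
```

Because the scanned QR code is itself a node, there is no need to snap the user's position to the closest pre-existing network node before running the search.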
in Transactions in GIS > vol 26 n° 3 (May 2022) . - pp 1240 - 1255
[article]
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes
Document type: Article/Communication
Authors: Linxi Huan; Xianwei Zheng; Jianya Gong
Publication year: 2022
Pages: pp 301 - 314
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] données localisées 3D
[Termes IGN] géométrie
[Termes IGN] image RVB
[Termes IGN] maillage
[Termes IGN] modélisation sémantique
[Termes IGN] objet 3D
[Termes IGN] reconstruction 3D
[Termes IGN] reconstruction d'objet
[Termes IGN] scène intérieure
Abstract: (author) Semantic indoor 3D modeling with multi-task deep neural networks is an efficient and low-cost way to reconstruct an indoor scene with a geometrically complete room structure and semantic 3D objects. Owing to the complexity and clutter of indoor scenarios, the semantic reconstruction quality of current methods is still limited by insufficient exploration and learning of 3D geometry information. To this end, this paper proposes an end-to-end multi-task neural network for geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes (termed GeoRec). In the proposed GeoRec, we build a geometry extractor that can effectively learn geometry-enhanced feature representations from depth data, improving the estimation accuracy of layout, camera pose and 3D object bounding boxes. We also introduce a novel object mesh generator that strengthens the reconstruction robustness of GeoRec to indoor occlusion with geometry-enhanced implicit shape embedding. With the parsed scene semantics and geometries, the proposed GeoRec reconstructs an indoor scene by placing reconstructed object mesh models, using the 3D object detection results, in the estimated layout cuboid. Extensive experiments conducted on two benchmark datasets show that the proposed GeoRec yields outstanding performance in mean chamfer distance error for object reconstruction on the challenging Pix3D dataset, 70.45% mAP for 3D object detection and 77.1% 3D mIoU for layout estimation on the commonly used SUN RGB-D dataset. Notably, the mesh reconstruction sub-network of GeoRec trained on Pix3D can be transferred directly to SUN RGB-D without any fine-tuning, demonstrating high generalization ability.
Record number: A2022-235
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1016/j.isprsjprs.2022.02.014
Online publication date: 03/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.02.014
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100139
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022) . - pp 301 - 314
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2022041 | SL | Journal | Centre de documentation | Reading room journals | Available
081-2022043 | DEP-RECP | Journal | LaSTIG | Unit deposit | Not for loan
081-2022042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Structure-aware indoor scene reconstruction via two levels of abstraction / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Structure-aware indoor scene reconstruction via two levels of abstraction
Document type: Article/Communication
Authors: Hao Fang; Cihui Pan; Hui Huang
Publication year: 2021
Pages: pp 155 - 170
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] champ aléatoire de Markov
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] image optique
[Termes IGN] maillage
[Termes IGN] maille triangulaire
[Termes IGN] niveau d'abstraction
[Termes IGN] polygone
[Termes IGN] reconstruction 3D
[Termes IGN] reconstruction d'objet
[Termes IGN] scène intérieure
Abstract: (author) In this paper, we propose a novel approach that reconstructs indoor scenes in a structure-aware manner and produces two meshes at different levels of abstraction. More precisely, we start from the raw triangular mesh of the indoor scene and decompose it into two parts: structure and non-structure objects. On the one hand, structure objects are significant permanent parts of the indoor environment, such as floors, ceilings and walls. In the proposed algorithm, structure objects are abstracted by planar primitives and assembled into a polygonal structure mesh. This step produces a compact, structure-aware, watertight model that decreases the complexity of the original mesh by three orders of magnitude. On the other hand, non-structure objects are movable objects in the indoor environment, such as furniture and interior decoration. Meshes of these objects are repaired and simplified according to their relationship to the structure primitives. Finally, the union of all the non-structure meshes and the structure mesh comprises the scene mesh. Note that the structure mesh and scene mesh preserve different levels of abstraction and can be used for different applications according to user preference. Our experiments on both LiDAR and RGB-D data, scanned from simple to large-scale indoor scenes, indicate that the proposed framework generates structure-aware results while being robust and scalable. It is also compared qualitatively and quantitatively against popular mesh approximation, floorplan generation and piecewise-planar surface reconstruction methods to demonstrate its performance.
Record number: A2021-561
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.06.007
Online publication date: 23/06/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.06.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98119
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 155 - 170
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021081 | SL | Journal | Centre de documentation | Reading room journals | Available
081-2021083 | DEP-RECP | Journal | LaSTIG | Unit deposit | Not for loan
081-2021082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Towards efficient indoor/outdoor registration using planar polygons / Rahima Djahel in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
[article]
Title: Towards efficient indoor/outdoor registration using planar polygons
Document type: Article/Communication
Authors: Rahima Djahel; Bruno Vallet; Pascal Monasse
Publication year: 2021
Projects: BIOM / Vallet, Bruno
Pages: pp 51 - 58
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] analyse de groupement
[Termes IGN] appariement de primitives
[Termes IGN] bati
[Termes IGN] détection de contours
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de points
[Termes IGN] géométrie euclidienne
[Termes IGN] polygone
[Termes IGN] scène intérieure
[Termes IGN] scène urbaine
[Termes IGN] superposition de données
Abstract: (author) The registration of indoor and outdoor scans with a precision reaching the level of geometric noise is a major challenge for indoor/outdoor building modeling. The basic idea of the contribution presented in this paper is to extract planar polygons from indoor and outdoor LiDAR scans and then match them. To cope with the very small overlap between indoor and outdoor scans of the same building, we propose to start by extracting, from the outdoor scans, points lying in the building's interior, i.e. points where the laser ray crosses detected façades. Since most objects in a building environment are bounded by planar surfaces, we propose a new registration algorithm that matches planar polygons by clustering them according to their normal direction, then by their offset along the normal direction. We use this clustering to find possible polygon correspondences (hypotheses) and estimate the optimal transformation for each hypothesis. Finally, a quality criterion is computed for each hypothesis in order to select the best one. To demonstrate the accuracy of our algorithm, we tested it on real data with a static indoor acquisition and a dynamic (Mobile Laser Scanning) outdoor acquisition.
Record number: A2021-490
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-V-2-2021-51-2021
Online publication date: 17/06/2021
Online: http://dx.doi.org/10.5194/isprs-annals-V-2-2021-51-2021
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97955
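The two-stage grouping described in the abstract (by normal direction, then by offset along the normal) can be sketched as follows, assuming each detected polygon is reduced to its plane parameters (unit normal n, offset d with n·x = d). The thresholds and data are invented for illustration, and this omits the authors' hypothesis generation and quality scoring:

```python
import math

def cluster_polygons(polygons, angle_tol=0.1, offset_tol=0.05):
    """Group planar polygons first by normal direction, then by their
    signed offset along that normal. `polygons` is a list of
    (unit_normal, offset) pairs, a simplified stand-in for the plane
    parameters a polygon detector would output."""
    groups = []
    for n, d in polygons:
        placed = False
        for g in groups:
            n0, d0 = g[0]
            # Angle between unit normals (dot clamped for acos safety).
            dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, n0))))
            if math.acos(dot) < angle_tol and abs(d - d0) < offset_tol:
                g.append((n, d))
                placed = True
                break
        if not placed:
            groups.append([(n, d)])
    return groups

planes = [
    ((0.0, 0.0, 1.0), 0.0),   # floor seen in the indoor scan
    ((0.0, 0.0, 1.0), 0.02),  # same floor in the outdoor scan, small offset
    ((1.0, 0.0, 0.0), 3.0),   # a wall
]
groups = cluster_polygons(planes)
print(len(groups))  # 2
```

Polygons landing in the same group are candidate correspondences between the indoor and outdoor scans; each candidate pairing would then be scored to select the best rigid transformation.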
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2021 (July 2021) . - pp 51 - 58
[article]
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
[article]
Title: Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors
Document type: Article/Communication
Authors: Longyu Zhang; Hao Xia; Qingjun Liu; et al.
Publication year: 2021
Pages: n° 195
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par nuées dynamiques
[Termes IGN] estimation de pose
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image RVB
[Termes IGN] modélisation 3D
[Termes IGN] positionnement en intérieur
[Termes IGN] Ransac (algorithme)
[Termes IGN] scène intérieure
[Termes IGN] SIFT (algorithme)
[Termes IGN] SURF (algorithme)
[Termes IGN] téléphone intelligent
[Termes IGN] vision par ordinateur
Abstract: (author) Positioning information has become one of the most important kinds of information processed and displayed on smart mobile devices. In this paper, we propose a visual positioning method using RGB-D images on smart mobile devices. First, the pose of each image in the training set is calculated through feature extraction and description, image registration, and pose map optimization. Then, in the image retrieval stage, the training set and the query set are clustered to generate vector of local aggregated descriptors (VLAD) description vectors. To overcome the problem that the description vector loses the image's color information, and to improve retrieval accuracy under different lighting conditions, opponent-color information and depth information are added to the description vector for retrieval. Finally, using the point cloud corresponding to the retrieved image and its pose, the pose of the retrieved image is calculated by the perspective-n-point (PnP) method. The results of indoor scene positioning under different illumination conditions show that the proposed method not only improves positioning accuracy compared with the original VLAD and ORB-SLAM2, but also has high computational efficiency.
Record number: A2021-481
Authors' affiliation: non IGN
Theme: IMAGERIE/POSITIONNEMENT
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi10040195
Online publication date: 24/03/2021
Online: https://doi.org/10.3390/ijgi10040195
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97425
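The base VLAD aggregation mentioned in the abstract can be sketched as below. This shows only standard VLAD (nearest-centroid residual accumulation plus L2 normalization), not the paper's opponent-color and depth extensions, and the centroids and descriptors are toy values:

```python
import math

def vlad(descriptors, centroids):
    """Vector of Locally Aggregated Descriptors: assign each local
    descriptor to its nearest centroid, accumulate the residuals
    (descriptor minus centroid) per centroid, then L2-normalize the
    concatenated result."""
    k, dim = len(centroids), len(centroids[0])
    acc = [[0.0] * dim for _ in range(k)]
    for d in descriptors:
        # Nearest centroid by squared Euclidean distance.
        i = min(range(k),
                key=lambda j: sum((a - b) ** 2
                                  for a, b in zip(d, centroids[j])))
        for t in range(dim):
            acc[i][t] += d[t] - centroids[i][t]
    v = [x for row in acc for x in row]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

# Toy example: 2 centroids in 2-D, 2 local descriptors per image,
# giving a k*dim = 4-dimensional global description vector.
centroids = [(0.0, 0.0), (1.0, 1.0)]
descs = [(0.1, 0.0), (0.9, 1.0)]
v = vlad(descs, centroids)
print(len(v))  # 4
```

In practice the local descriptors would come from SIFT/SURF features and the centroids from k-means over the training set; the fixed-length VLAD vector is what makes fast image retrieval possible before the PnP pose computation.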
in ISPRS International journal of geo-information > vol 10 n° 4 (April 2021) . - n° 195
[article]
Perception de scène par un système multi-capteurs, application à la navigation dans des environnements d'intérieur structuré / Marwa Chakroun (2021)
Indoor point cloud segmentation using iterative Gaussian mapping and improved model fitting / Bufan Zhao in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding / Weihua Dong in Cartography and Geographic Information Science, vol 47 n° 3 (May 2020)
Cartographie sémantique hybride de scènes urbaines à partir de données image et Lidar / Mohamed Boussaha (2020)
Point cloud registration and mitigation of refraction effects for geomonitoring using long-range terrestrial laser scanning / Ephraim Friedli (2020)
Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang in International journal of computer vision, vol 126 n° 9 (September 2018)
Géo-référencement précis d'acquisition photogrammétrique de « longues » scènes d'intérieur / Truong Giang Nguyen (2018)