Descripteur
Documents disponibles dans cette catégorie (28)



Railway lidar semantic segmentation with axially symmetrical convolutional learning / Antoine Manier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
[article]
Titre : Railway lidar semantic segmentation with axially symmetrical convolutional learning Type de document : Article/Communication Auteurs : Antoine Manier, Auteur ; Julien Moras, Auteur ; Jean-Christophe Michelin, Auteur ; Hélène Piet-Lahanier, Auteur
Année de publication : 2022 Article en page(s) : pp 135 - 142 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] scène 3D
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voie ferrée
Résumé : (auteur) This paper presents a new deep-learning-based method for 3D Point Cloud Semantic Segmentation specifically designed for processing real-world LIDAR railway scenes. The new approach relies on the use of spatial local point cloud transformations for convolutional learning. These transformations allow increased robustness to varying point cloud densities while preserving metric information and a sufficient descriptive ability. The resulting performance is illustrated with results on railway data from two distinct LIDAR point cloud datasets acquired in industrial settings. The quality of the extraction of useful information for maintenance operations and topological analysis is pointed out, together with a noticeable robustness to point cloud variations in distribution and point redundancy. Numéro de notice : A2022-433 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.5194/isprs-annals-V-2-2022-135-2022 Date de publication en ligne : 17/05/2022 En ligne : https://doi.org/10.5194/isprs-annals-V-2-2022-135-2022 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100739
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2022 (2022 edition) . - pp 135 - 142 [article]
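
The abstract above centres on spatial, axially symmetrical local transformations that make convolutional learning robust to uneven LiDAR density. As an illustration only, not the authors' implementation, the sketch below shows one way such a transformation can be realised: each point's neighbourhood is re-expressed in cylindrical coordinates (radius, height) around a local vertical axis, discarding the azimuth. The neighbourhood size, the choice of a vertical axis, and the function name are assumptions.

```python
# Hypothetical sketch: axially symmetric (cylindrical) re-encoding of local
# LiDAR neighbourhoods, loosely inspired by the abstract above. Not the
# authors' code; neighbourhood size and axis choice are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def cylindrical_neighbourhoods(points, k=16):
    """For each point, return its k neighbours expressed as (radius, height)
    features around a vertical axis through the point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)       # +1: first neighbour is the point itself
    feats = []
    for i, neigh in enumerate(idx):
        local = points[neigh[1:]] - points[i]  # centre the neighbourhood on the point
        r = np.hypot(local[:, 0], local[:, 1]) # radial distance to the vertical axis
        z = local[:, 2]                        # height along the axis
        # Dropping the azimuth makes the descriptor symmetric about the axis,
        # which also reduces sensitivity to azimuthal sampling density.
        feats.append(np.stack([r, z], axis=1))
    return np.asarray(feats)                   # shape (N, k, 2)

if __name__ == "__main__":
    pts = np.random.rand(1000, 3)              # stand-in for a railway scan
    print(cylindrical_neighbourhoods(pts, k=8).shape)
```
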
Virtual laser scanning of dynamic scenes created from real 4D topographic point cloud data / Lukas Winiwarter in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
[article]
Titre : Virtual laser scanning of dynamic scenes created from real 4D topographic point cloud data Type de document : Article/Communication Auteurs : Lukas Winiwarter, Auteur ; Katharina Anders, Auteur ; Daniel Schröder, Auteur ; Bernhard Höfle, Auteur Année de publication : 2022 Article en page(s) : pp 79 - 86 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] détection de changement
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] filtre de Kalman
[Termes IGN] modèle de simulation
[Termes IGN] scène 3D
[Termes IGN] scène virtuelle
[Termes IGN] semis de points
[Termes IGN] Tyrol (Autriche)
Résumé : (auteur) Virtual laser scanning (VLS) allows the generation of realistic point cloud data at a fraction of the costs required for real acquisitions. It also allows carrying out experiments that would not be feasible, or would even be impossible, in the real world, e.g., due to time constraints or when hardware does not exist. A critical part of a simulation is an adequate substitution of reality. In the case of VLS, this concerns the scanner, the laser-object interaction, and the scene. In this contribution, we present a method to recreate a realistic dynamic scene, where the surface changes over time. We first apply change detection and quantification on a real dataset of an erosion-affected high-mountain slope in Tyrol, Austria, acquired with permanent terrestrial laser scanning (TLS). Then, we model and extract the time series of a single change form, and transfer it to a virtual model scene. The benefit of such a transfer is that no physical modelling of the change processes is required. In our example, we use a Kalman filter with subsequent clustering to extract a set of erosion rills from a time series of high-resolution TLS data. The change magnitudes quantified at the locations of these rills are then transferred to a triangular mesh, representing the virtual scene. Subsequently, we apply VLS to investigate the detectability of such erosion rills from airborne laser scanning at multiple subsequent points in time. This enables us to test whether, e.g., a certain flying altitude is appropriate in a disaster response setting for the detection of areas exposed to immediate danger. To ensure a successful transfer, the spatial resolution and the accuracy of the input dataset are much higher than the accuracy and resolution that are being simulated. Furthermore, the investigated change form is detected as significant in the input data. We therefore conclude that the model of the dynamic scene derived from real TLS data is an appropriate substitution for reality. Numéro de notice : A2022-437 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article DOI : 10.5194/isprs-annals-V-2-2022-79-2022 Date de publication en ligne : 17/05/2022 En ligne : https://doi.org/10.5194/isprs-annals-V-2-2022-79-2022 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100746
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2022 (2022 edition) . - pp 79 - 86 [article]
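
The erosion-rill extraction described above combines per-location Kalman filtering of the TLS change time series with subsequent clustering. The following minimal sketch illustrates only the Kalman-smoothing step on a synthetic change series, under an assumed constant-velocity state model and noise settings that the abstract does not specify; the function name is hypothetical.

```python
# Hypothetical sketch: smoothing a per-location surface-change time series
# with a simple 1D constant-velocity Kalman filter, as a stand-in for the
# Kalman-filter step mentioned in the abstract. Noise settings are assumptions.
import numpy as np

def kalman_smooth_changes(z, dt=1.0, q=1e-3, r=1e-2):
    """z: observed change magnitudes over time; returns filtered magnitudes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (magnitude, rate)
    H = np.array([[1.0, 0.0]])              # we only observe the change magnitude
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([[z[0]], [0.0]])           # initial state
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new observation
        y = np.array([[zk]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return np.array(out)

if __name__ == "__main__":
    t = np.linspace(0, 10, 100)
    noisy = 0.05 * t + 0.01 * np.random.randn(100)   # slow erosion plus noise
    print(kalman_smooth_changes(noisy)[-5:])
```
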
Metaheuristics for the positioning of 3D objects based on image analysis of complementary 2D photographs / Arnaud Flori in Machine Vision and Applications, vol 32 n° 5 (September 2021)
[article]
Titre : Metaheuristics for the positioning of 3D objects based on image analysis of complementary 2D photographs Type de document : Article/Communication Auteurs : Arnaud Flori, Auteur ; Hamouche Oulhadj, Auteur ; Patrick Siarry, Auteur Année de publication : 2021 Article en page(s) : n° 105 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme du recuit simulé
[Termes IGN] analyse d'image orientée objet
[Termes IGN] contour
[Termes IGN] image 2D
[Termes IGN] modélisation 3D
[Termes IGN] optimisation par essaim de particules
[Termes IGN] scène 3D
[Termes IGN] triangulation
Résumé : (auteur) Today, advances in 3D modeling make it possible to identically reproduce objects, animals, humans and even entire scenes. Broad applications include video games, virtual or augmented reality, and cinema, for example. In this article, we propose a new method to build a 3D scene directly from several complementary photographs. The positions of the objects for which we already have a 3D model are determined by triangulation, using information extracted from the photographs, such as the outlines of the objects in the images. Each pixel of the images is converted into a value that gives its distance to the nearest outline. The 3D model of the objects is then projected onto the converted images, and the triangulation is done using a cost function that gives the distance of each projection of the objects to their respective outlines. A projection is considered perfect when its distance to its outlines is zero, which means that the cost function gives a score of zero as well. We propose to solve this optimization problem by means of two algorithms, namely Simulated Annealing (SA) and quantum particle swarm optimization (QUAPSO). Numéro de notice : A2021-868 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1007/s00138-021-01229-y Date de publication en ligne : 03/08/2021 En ligne : https://doi.org/10.1007/s00138-021-01229-y Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99101
in Machine Vision and Applications > vol 32 n° 5 (September 2021) . - n° 105 [article]
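
The cost function described in this abstract scores a projected 3D model by the distance of its projected points to the nearest image outline, and is minimised with Simulated Annealing or QUAPSO. Below is a hedged, toy-scale sketch of that idea: a distance transform turns an outline image into a distance-to-nearest-outline map, and a simple simulated-annealing loop optimises a 2D image-plane translation. The toy image, cooling schedule, and function names are illustrative assumptions, not the authors' setup.

```python
# Hypothetical sketch of the outline-distance cost described in the abstract:
# pixels are converted to distance-to-nearest-outline, and a projected model
# is scored by summing that distance at its projected points. The simulated
# annealing loop and its parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def outline_cost(distance_map, projected_pts):
    """Sum of distances to the nearest outline for the projected 2D points."""
    px = np.clip(projected_pts.astype(int), 0, np.array(distance_map.shape) - 1)
    return float(distance_map[px[:, 0], px[:, 1]].sum())

def anneal_translation(distance_map, model_pts, steps=2000, t0=5.0):
    """Toy simulated annealing over a 2D image-plane translation only."""
    rng = np.random.default_rng(0)
    offset = np.zeros(2)
    best = cur = outline_cost(distance_map, model_pts + offset)
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-6
        cand_offset = offset + rng.normal(scale=2.0, size=2)
        cand = outline_cost(distance_map, model_pts + cand_offset)
        # accept improvements always, worse moves with a temperature-dependent probability
        if cand < cur or rng.random() < np.exp((cur - cand) / temp):
            offset, cur = cand_offset, cand
            best = min(best, cur)
    return offset, best

if __name__ == "__main__":
    outline = np.zeros((200, 200), bool)
    outline[50, 40:160] = outline[150, 40:160] = True   # a toy rectangular outline
    outline[50:150, 40] = outline[50:150, 160] = True
    dmap = distance_transform_edt(~outline)             # distance to nearest outline pixel
    corners = np.array([[60, 50], [60, 150], [140, 50], [140, 150]], float)
    print(anneal_translation(dmap, corners))
```
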
Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021)
[article]
Titre : Activity recognition in residential spaces with Internet of things devices and thermal imaging Type de document : Article/Communication Auteurs : Kshirasagar Naik, Auteur ; Tejas Pandit, Auteur ; Nitin Naik, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : n° 988 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] compréhension de l'image
[Termes IGN] contrôle par télédétection
[Termes IGN] détection d'événement
[Termes IGN] espace intérieur
[Termes IGN] image RVB
[Termes IGN] image thermique
[Termes IGN] intelligence artificielle
[Termes IGN] internet des objets
[Termes IGN] itération
[Termes IGN] modèle stéréoscopique
[Termes IGN] objet mobile
[Termes IGN] reconnaissance automatique
[Termes IGN] reconnaissance d'objets
[Termes IGN] scène 3D
Résumé : (auteur) In this paper, we design algorithms for indoor activity recognition and 3D thermal model generation using thermal and RGB images captured from external sensors and an Internet of Things setup. Indoor activity recognition deals with two sub-problems: human activity recognition and household activity recognition. Household activity recognition includes the recognition of electrical appliances and their heat radiation with the help of thermal images. A FLIR ONE PRO camera is used to capture RGB-thermal image pairs for a scene. Duration and pattern of activities are also determined using an iterative algorithm, to explore kitchen safety situations. For more accurate monitoring of hazardous events such as stove gas leakage, a 3D reconstruction approach is proposed to determine the temperature of all points in the 3D space of a scene. The 3D thermal model is obtained using the stereo RGB and thermal images for a particular scene. Accurate results are observed for activity detection, and a significant improvement in the temperature estimation is recorded in the 3D thermal model compared to the 2D thermal image. Results from this research can find applications in home automation, heat automation in smart homes, and energy management in residential spaces. Numéro de notice : A2021-159 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.3390/s21030988 Date de publication en ligne : 02/02/2021 En ligne : https://doi.org/10.3390/s21030988 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97075
in Sensors > vol 21 n° 3 (February 2021) . - n° 988 [article]
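
The 3D thermal model described above assigns a temperature to points reconstructed from stereo RGB and thermal images. As a rough, assumption-laden sketch (a depth map already aligned with the thermal image, known pinhole intrinsics, a hypothetical function name), the following code lifts thermal pixels into a small "thermal point cloud".

```python
# Hypothetical sketch: lifting an (already depth-aligned) thermal image into a
# 3D thermal point cloud with a pinhole camera model, in the spirit of the
# 3D thermal model described above. Intrinsics and alignment are assumptions.
import numpy as np

def thermal_point_cloud(depth, thermal, fx, fy, cx, cy):
    """depth, thermal: HxW arrays (metres, degrees C); returns (N, 4) rows x, y, z, T."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                  # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                          # ignore pixels without depth
    return np.stack([x[valid], y[valid], z[valid], thermal[valid]], axis=1)

if __name__ == "__main__":
    depth = np.full((120, 160), 2.0)       # toy flat wall at 2 m
    thermal = 20.0 + 5.0 * np.random.rand(120, 160)
    cloud = thermal_point_cloud(depth, thermal, fx=200, fy=200, cx=80, cy=60)
    print(cloud.shape, cloud[:, 3].mean())
```
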
Titre : 3D point cloud compression Type de document : Thèse/HDR Auteurs : Chao Cao, Auteur ; Titus Zaharia, Directeur de thèse ; Marius Preda, Directeur de thèse Editeur : Paris : Institut Polytechnique de Paris Année de publication : 2021 Importance : 165 p. Format : 21 x 30 cm Note générale : Bibliographie
Thèse de doctorat de l’Institut polytechnique de Paris, Spécialité Informatique Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] compression d'image
[Termes IGN] corrélation automatique de points homologues
[Termes IGN] couleur (variable spectrale)
[Termes IGN] état de l'art
[Termes IGN] objet 3D
[Termes IGN] précision géométrique (imagerie)
[Termes IGN] scène 3D
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
Index. décimale : THESE Thèses et HDR Résumé : (Auteur) With the rapid growth of multimedia content, 3D objects are becoming more and more popular. Most of the time, they are modeled as complex polygonal meshes or dense point clouds, providing immersive experiences in different industrial and consumer multimedia applications. The point cloud, which is easier to acquire than a mesh and is widely applicable, has raised much interest in both the academic and commercial worlds. A point cloud is a set of points with different properties such as their geometrical locations and the associated attributes (e.g., color, material properties, etc.). The number of points within a point cloud can range from a thousand, for simple 3D objects, up to billions, to realistically represent complex 3D scenes. Such huge amounts of data bring great technological challenges in terms of transmission, processing, and storage of point clouds. In recent years, numerous research works have focused on the compression of meshes, while point clouds have received less attention. We have identified two main approaches in the literature: a purely geometric one based on octree decomposition, and a hybrid one based on both geometry and video coding. The first approach can provide accurate 3D geometry information but offers weak temporal consistency. The second one can efficiently remove temporal redundancy, yet a decrease in geometric precision can be observed after the projection. Thus, the tradeoff between compression efficiency and accurate prediction needs to be optimized. We focused on exploring the temporal correlations between dynamic dense point clouds. We proposed different approaches to improve the compression performance of the MPEG (Moving Picture Experts Group) V-PCC (Video-based Point Cloud Compression) test model, which provides state-of-the-art compression of dynamic dense point clouds. First, an octree-based adaptive segmentation is proposed to cluster the points with different motion amplitudes into 3D cubes. Then, motion estimation is applied to these cubes using affine transformations. Gains in terms of rate-distortion (RD) performance have been observed in sequences with relatively low motion amplitudes. However, the cost of building an octree for a dense point cloud remains high, while the resulting octree structures show poor temporal consistency for sequences with higher motion amplitudes. An anatomical structure is then proposed to model the motion of point clouds representing humanoids more naturally. With the help of 2D pose estimation tools, the motion is estimated from 14 anatomical segments using affine transformations. Moreover, we propose a novel solution for color prediction and discuss the coding of the prediction residuals. It is shown that instead of encoding redundant texture information, it is more valuable to code the residuals, which leads to better RD performance. Although our contributions have improved the performance of the V-PCC test model, the temporal compression of dynamic point clouds remains a highly challenging task. Due to the limitations of current acquisition technology, the acquired point clouds can be noisy in both the geometry and attribute domains, which makes it challenging to achieve accurate motion estimation. In future studies, the technologies used for 3D meshes may be exploited and adapted to provide temporally consistent connectivity information between dynamic 3D point clouds. Note de contenu :
Chapter 1 - Introduction
1.1. Background and motivation
1.2. Outline of the thesis and contributions
Chapter 2 - 3D Point Cloud Compression: State of the art
2.1. The 3D PCC “Universe Map” for methods
2.2. 1D methods: geometry traversal
2.3. 2D methods: Projection and mapping onto 2D planar domains
2.4. 3D methods: Direct exploitation of 3D correlations
2.5. DL-based methods
2.6. 3D PCC: What is missing?
2.7. MPEG 3D PCC standards
Chapter 3 - Extended Study of MPEG V-PCC and G-PCC Approaches
3.1. V-PCC methodology
3.2. Experimental evaluation of V-PCC
3.3. G-PCC methodology
3.4. Experimental evaluation of G-PCC
3.5. Experiments on the V-PCC inter-coding mode
3.6. Conclusion
Chapter 4 - Octree-based RDO segmentation
4.1. Pipeline
4.2. RDO-based octree segmentation
4.3. Prediction modes
4.4. Experimental results
4.5. Conclusion
Chapter 5 - Skeleton-based motion estimation and compensation
5.1. Introduction
5.2. 3D Skeleton Generation
5.3. Motion estimation and compression
5.4. Experimental results
5.5. Conclusion
Chapter 6 - Temporal prediction using anatomical segmentation
6.1. Introduction
6.2. A novel dynamic 3D point cloud dataset
6.3. Prediction structure
6.4. Improved anatomy segmentation
6.5. Experimental results
6.6. Conclusion
Chapter 7 - A novel color compression for point clouds using affine transformation
7.1. Introduction
7.2. The residuals from both geometry and color
7.3. The prediction structure
7.4. Compression of the color residuals
7.5. Experimental results
7.6. Conclusion
Chapter 8 - Conclusion and future work
8.1. Conclusion
8.2. Future work
Numéro de notice : 26821 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Thèse française Note de thèse : Thèse de Doctorat : informatique : Paris : 2021 Organisme de stage : Telecom SudParis nature-HAL : Thèse DOI : sans Date de publication en ligne : 13/04/2022 En ligne : https://tel.archives-ouvertes.fr/tel-03524521/document Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100476
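
Two recurring operations in the thesis summarised above are the clustering of points into 3D cubes (or anatomical segments) and the estimation of an affine motion transform per cluster. The sketch below shows only the second, generic step, a least-squares 3D affine fit between corresponding point sets; the correspondence assumption and the function name are illustrative, not the V-PCC test-model code.

```python
# Hypothetical sketch: least-squares estimation of a 3D affine transform
# between two sets of corresponding points, the basic operation behind the
# per-cube / per-segment affine motion estimation described above.
import numpy as np

def fit_affine_3d(src, dst):
    """Return A (3x4) minimising ||[src, 1] @ A.T - dst||^2 for corresponding points."""
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                 # (N, 4) homogeneous source points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                 # (3, 4): linear part and translation

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.random((500, 3))
    true_A = np.array([[1.0, 0.1, 0.0, 0.05],
                       [0.0, 0.9, 0.0, -0.02],
                       [0.0, 0.0, 1.1, 0.10]])
    dst = src @ true_A[:, :3].T + true_A[:, 3]  # synthetic "next frame" positions
    est = fit_affine_3d(src, dst)
    print(np.allclose(est, true_A, atol=1e-6))
```

Autres documents de cette catégorie (titres seuls) :
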
Cluttering reduction for interactive navigation and visualization of historical Images / Evelyn Paiz-Reyes (2021)
Holographic SAR tomography 3-D reconstruction based on iterative adaptive approach and generalized likelihood ratio test / Dong Feng in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
Visualization of 3D property data and assessment of the impact of rendering attributes / Stefan Seipel in Journal of Geovisualization and Spatial Analysis, vol 4 n° 2 (December 2020)
Geometric distortion of historical images for 3D visualization / Evelyn Paiz-Reyes in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
Creation of inspirational Web Apps that demonstrate the functionalities offered by the ArcGIS API for JavaScript / Arthur Genet (2020)
A building label placement method for 3D visualizations based on candidate label evaluation and selection / Jiangfeng She in International journal of geographical information science IJGIS, vol 33 n° 10 (October 2019)
Preparing the HoloLens for user studies : an augmented reality interface for the spatial adjustment of holographic objects in 3D indoor environments / Julian Keil in KN, Journal of Cartography and Geographic Information, vol 69 n° 3 (September 2019)