Descriptor: pipeline architecture
Rasterisation-based progressive photon mapping / Iordanis Evangelou in The Visual Computer, vol 36 n° 10 - 12 (October 2020)
[article]
Title: Rasterisation-based progressive photon mapping
Document type: Article/Communication
Authors: Iordanis Evangelou, Author; Georgios Papaioannou, Author; Konstantinos Vardis, Author; et al.
Publication year: 2020
Pages: pp 1993 - 2004
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Geomatics
[IGN descriptor terms] pipeline architecture
[IGN descriptor terms] cartography
[IGN descriptor terms] implementation (computing)
[IGN descriptor terms] ray tracing
[IGN descriptor terms] photon
[IGN descriptor terms] graphics processing unit
[IGN descriptor terms] rasterisation
Abstract: (author) Ray tracing on the GPU has been operating synergistically alongside rasterisation in interactive rendering engines for some time now, in order to accurately capture certain illumination effects. In the same spirit, in this paper we propose an implementation of progressive photon mapping entirely on the rasterisation pipeline, agnostic to the specific GPU architecture, in order to synthesise images at interactive rates. While any GPU ray tracing architecture can be used for photon mapping, performing ray traversal in image space minimises acceleration data structure construction time and supports arbitrarily complex and fully dynamic geometry. Furthermore, this strategy maximises data structure reuse by encompassing the rasterisation, ray tracing and photon gathering tasks in a single data structure. Both eye and light paths of arbitrary depth are traced on multi-view deep G-buffers, and photon flux is gathered by a suitably adapted multi-view photon splatting. In contrast to previous methods that exploit rasterisation to some extent, our novel indirect photon splatting approach captures every event combination present in photon mapping. We evaluate our method on typical test scenes and scenarios for photon mapping methods and show how it outperforms typical GPU-based progressive photon mapping.
Record number: A2020-412
Authors' affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1007/s00371-020-01897-3
Online publication date: 14/07/2020
Online: https://doi.org/10.1007/s00371-020-01897-3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95935
in The Visual Computer > vol 36 n° 10 - 12 (October 2020) . - pp 1993 - 2004 [article]
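The progressive estimator in the abstract above refines a per-hit-point photon statistic over many passes. As a point of reference, here is a minimal Python sketch of the standard progressive photon mapping update (Hachisuka et al., 2008) that such methods build on; the function names and the default alpha = 0.7 are illustrative, and the paper's rasterisation-based gathering is not reproduced here.

```python
import math

def ppm_pass(n, radius2, tau, m_new, phi_new, alpha=0.7):
    """One progressive pass for a single measurement (hit) point.

    n        -- photons accumulated over previous passes (N_i)
    radius2  -- current squared gather radius (R_i^2)
    tau      -- accumulated, unnormalised flux
    m_new    -- photons that fell inside the radius this pass (M_i)
    phi_new  -- their summed flux contribution
    alpha    -- fraction of new photons kept; lower alpha shrinks faster
    """
    if m_new == 0:
        return n, radius2, tau          # nothing gathered: statistics unchanged
    ratio = (n + alpha * m_new) / (n + m_new)
    return (n + alpha * m_new,          # N_{i+1}
            radius2 * ratio,            # R_{i+1}^2 = R_i^2 * (N + aM)/(N + M)
            (tau + phi_new) * ratio)    # flux rescaled to the shrunken radius

def radiance(tau, radius2, photons_emitted):
    """Density estimate once all passes are done."""
    return tau / (math.pi * radius2 * photons_emitted)
```

Each pass shrinks the gather radius, so the estimate converges to the exact solution while variance stays bounded; alpha trades convergence speed against noise.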
Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars
Document type: Article/Communication
Authors: Chenfanfu Jiang, Author; Shuyao Qi, Author; Yixin Zhu, Author; et al.
Publication year: 2018
Pages: pp 920 - 941
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] machine learning
[IGN descriptor terms] pipeline architecture
[IGN descriptor terms] image understanding
[IGN descriptor terms] RGB image
[IGN descriptor terms] automatic reconstruction
[IGN descriptor terms] realistic rendering
[IGN descriptor terms] indoor scene
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] image synthesis
Abstract: (author) We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms. In particular, we devise a learning-based pipeline of algorithms capable of automatically generating and rendering a potentially infinite variety of indoor scenes by using a stochastic grammar, represented as an attributed Spatial And-Or Graph, in conjunction with state-of-the-art physics-based rendering. Our pipeline synthesizes scene layouts with high diversity, and it is configurable in that it enables precise customization and control of important attributes of the generated scenes. It renders photorealistic RGB images of the generated scenes while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity, and material information (down to object parts), as well as environments (e.g., illumination and camera viewpoints). We demonstrate the value of our synthesized dataset by improving performance in certain machine-learning-based scene understanding tasks (depth and surface normal prediction, semantic segmentation, reconstruction, etc.) and by providing benchmarks for and diagnostics of trained models by modifying object attributes and scene properties in a controllable manner.
Record number: A2018-416
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1103-5
Online publication date: 30/06/2018
Online: https://doi.org/10.1007/s11263-018-1103-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90899
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 920 - 941 [article]
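The core generative device in this abstract is a stochastic grammar over scene structure. The toy Python sketch below shows the basic sampling rule for an And-Or graph: And-nodes expand all of their children, Or-nodes pick one child according to a branching probability. The grammar symbols are invented for illustration; the paper's attributed graph additionally carries geometry and contextual relations that are omitted here.

```python
import random

# Hypothetical toy grammar. Terminals (symbols absent from the table)
# stand for objects to instantiate in the scene.
GRAMMAR = {
    "room":            ("and", ["floor", "furniture_group"]),
    "furniture_group": ("or",  [("desk_set", 0.5), ("bed_set", 0.5)]),
    "desk_set":        ("and", ["desk", "chair"]),
    "bed_set":         ("and", ["bed", "nightstand"]),
}

def sample(symbol):
    """Recursively sample one scene layout from the And-Or graph."""
    if symbol not in GRAMMAR:               # terminal: emit the object
        return [symbol]
    kind, children = GRAMMAR[symbol]
    if kind == "and":                       # And-node: all children appear
        return [obj for c in children for obj in sample(c)]
    labels, probs = zip(*children)          # Or-node: choose one child
    return sample(random.choices(labels, weights=probs)[0])

print(sample("room"))  # e.g. ['floor', 'bed', 'nightstand']
```

Repeated calls yield diverse layouts, and configurability comes from editing the branching probabilities or pinning an Or-node to a fixed child.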
Large scale textured mesh reconstruction from mobile mapping images and LIDAR scans / Mohamed Boussaha in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2 (28/05/2018)
[article]
Title: Large scale textured mesh reconstruction from mobile mapping images and LIDAR scans
Document type: Article/Communication
Authors: Mohamed Boussaha, Author; Bruno Vallet, Author; Patrick Rives, Author
Publication year: 2018
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ISPRS 2018, TC II Mid-term Symposium "Towards Photogrammetry 2020", 04/06/2018 - 07/06/2018, Riva del Garda, Italy, open access Annals
Pages: pp 49 - 56
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN descriptor terms] pipeline architecture
[IGN descriptor terms] processing chain
[IGN descriptor terms] lidar data
[IGN descriptor terms] 3D geolocated data
[IGN descriptor terms] large scale
[IGN descriptor terms] mesh
[IGN descriptor terms] orthoimage
[IGN descriptor terms] object reconstruction
[IGN descriptor terms] Rouen
[IGN descriptor terms] image texture
Abstract: (author) The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for high quality, large scale urban texture mapping in 3D using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks, ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France, comprising nearly 2 billion points and 40,000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
Record number: A2018-329
Authors' affiliation: LaSTIG MATIS+Ext (2012-2019)
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-IV-2-49-2018
Online publication date: 28/05/2018
Online: http://dx.doi.org/10.5194/isprs-annals-IV-2-49-2018
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90471
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > IV-2 (28/05/2018) . - pp 49 - 56 [article]
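Two pipeline choices in this abstract lend themselves to a small sketch: slicing the acquisition into temporal chunks that keep points and images time-consistent, and processing chunks independently in parallel. The Python below assumes hypothetical (timestamp, payload) records and a placeholder per-chunk job; the actual resampling, reconstruction and texturing stages are of course far more involved.

```python
from concurrent.futures import ProcessPoolExecutor

def split_into_chunks(points, images, chunk_seconds=60.0):
    """Group time-stamped LiDAR points and images into temporal chunks so
    each chunk's geometry and photometry stay consistent in time.
    Inputs are (timestamp, payload) pairs; this data model is illustrative."""
    chunks = {}
    for t, pt in points:
        chunks.setdefault(int(t // chunk_seconds), ([], []))[0].append(pt)
    for t, img in images:
        chunks.setdefault(int(t // chunk_seconds), ([], []))[1].append(img)
    return [chunks[k] for k in sorted(chunks)]

def reconstruct_and_texture(chunk):
    """Placeholder for the per-chunk work: isotropic resampling of the points,
    sensor-topology surface reconstruction, then seam-free texturing."""
    pts, imgs = chunk
    return len(pts), len(imgs)

def process_acquisition(points, images):
    # Chunks are independent, so the whole acquisition parallelises trivially.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(reconstruct_and_texture,
                             split_into_chunks(points, images)))
```

This independence between chunks is what lets a 17 km acquisition be distributed across threads or machines with no shared state.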
From Google Maps to a fine-grained catalog of street trees / Steve Branson in ISPRS Journal of photogrammetry and remote sensing, vol 135 (January 2018)
[article]
Title: From Google Maps to a fine-grained catalog of street trees
Document type: Article/Communication
Authors: Steve Branson, Author; Jan Dirk Wegner, Author; David Hall, Author; Nico Lang, Author; Konrad Schindler, Author; Pietro Perona, Author
Publication year: 2018
Pages: pp 13 - 30
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] urban tree
[IGN descriptor terms] pipeline architecture
[IGN descriptor terms] supervised classification
[IGN descriptor terms] change detection
[IGN descriptor terms] Google Maps
[IGN descriptor terms] vegetation inventory
[IGN descriptor terms] Pasadena
[IGN descriptor terms] computer-assisted photo-interpretation
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] city
Abstract: (author) Up-to-date catalogs of the urban tree population are important for municipalities to monitor and improve quality of life in cities. Despite much research on automating tree mapping, mainly relying on dedicated airborne LiDAR or hyperspectral campaigns, tree detection and species recognition are in practice still mostly done manually. We present a fully automated tree detection and species recognition pipeline that can process thousands of trees within a few hours using publicly available aerial and street view images of Google Maps™. These data provide rich information from different viewpoints and at different scales, from global tree shapes to bark textures. Our workflow is built around a supervised classification that automatically learns the most discriminative features from thousands of trees and corresponding, publicly available tree inventory data. In addition, we introduce a change tracker that recognizes changes of individual trees at city scale, which is essential to keep an urban tree inventory up to date. The system takes street-level images of the same tree location at two different times and classifies the type of change (e.g., tree has been removed). Drawing on recent advances in computer vision and machine learning, we apply convolutional neural networks (CNN) to all classification tasks. We propose the following pipeline: download all available panoramas and overhead images of an area of interest; detect trees per image and combine multi-view detections in a probabilistic framework, adding prior knowledge; recognize fine-grained species of detected trees. In a later, separate module, track trees over time, detect significant changes and classify the type of change. We believe this is the first work to exploit publicly available image data for city-scale street tree detection, species recognition and change tracking, exhaustively over several square kilometers and many thousands of trees. Experiments in the city of Pasadena, California, USA show that we can detect more than 70% of the street trees, assign the correct species to more than 80% across 40 different species, and correctly detect and classify changes in more than 90% of the cases.
Record number: A2018-068
Authors' affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.11.008
Online publication date: 20/11/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.11.008
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89426
in ISPRS Journal of photogrammetry and remote sensing > vol 135 (January 2018) . - pp 13 - 30 [article]

Copies (3)

| Barcode | Call number | Medium | Location | Section | Availability |
|---|---|---|---|---|---|
| 081-2018011 | RAB | Journal | Centre de documentation | En réserve 3L | Available |
| 081-2018012 | DEP-EAF | Journal | Nancy | Dépôt en unité | Not for loan |
| 081-2018013 | DEP-EXM | Journal | Saint-Mandé | Dépôt en unité | Not for loan |
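The step "combine multi-view detections in a probabilistic framework, adding prior knowledge" from the record above invites a small worked example. Below is one standard way to fuse per-view CNN species posteriors for a single tree under a conditional-independence (naive Bayes) assumption; the numbers and the exact fusion rule are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_views(view_posteriors, prior):
    """Fuse per-view softmax outputs for one tree, assuming views are
    conditionally independent given the species. The combined posterior is
    proportional to prior^(1 - n) * product over views of P(species | view).

    view_posteriors -- array of shape (n_views, n_species)
    prior           -- shape (n_species,), e.g. from inventory statistics
    """
    view_posteriors = np.asarray(view_posteriors)
    n = view_posteriors.shape[0]
    log_post = np.log(view_posteriors).sum(axis=0) + (1 - n) * np.log(prior)
    log_post -= log_post.max()            # numerical stability before exp
    post = np.exp(log_post)
    return post / post.sum()

# Two street-view images of the same tree, three candidate species:
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.5, 0.4, 0.1])
prior = np.array([0.5, 0.3, 0.2])
print(fuse_views([p1, p2], prior))        # agreement sharpens the posterior
```

Dividing the prior back out once per extra view keeps it from being counted n times, which is the usual pitfall when multiplying calibrated posteriors.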
Fully automatic analysis of archival aerial images : Current status and challenges / Sébastien Giordano (2017)
Title: Fully automatic analysis of archival aerial images: Current status and challenges
Document type: Article/Communication
Authors: Sébastien Giordano, Author; Arnaud Le Bris, Author; Clément Mallet, Author
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2017
Projects: 1-No project
Conference: JURSE 2017, Joint urban remote sensing event, 06/03/2017 - 08/03/2017, Dubai, United Arab Emirates, IEEE proceedings
Extent: 4 p.
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] digital image analysis
[IGN descriptor terms] pipeline architecture
[IGN descriptor terms] process automation
[IGN descriptor terms] digital surface model
[IGN descriptor terms] digital orthophoto
[IGN descriptor terms] image processing
Abstract: (author) Archival aerial images are a unique and relatively unexplored means to generate detailed land-cover information in 3D over the past 100 years. Many long-term environmental monitoring studies can be based on this type of image series. Such data provide a relatively dense temporal sampling of the territories with very high spatial resolution. Furthermore, photogrammetric workflows exist to produce both orthoimages and Digital Surface Models with a reasonable amount of interactive work. Today, however, there is no fully automatic pipeline for generating this kind of data. This paper presents the main avenues of research towards such a workflow, from registration and radiometric issues up to land-cover classification challenges.
Record number: C2017-030
Authors' affiliation: LaSTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/JURSE.2017.7924620
Online publication date: 11/05/2017
Online: https://doi.org/10.1109/JURSE.2017.7924620
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89294
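Among the "radiometric issues" this abstract names, a common first step when scanned film differs in tone from flight to flight is histogram matching against a reference image. A minimal pure-NumPy sketch follows (equivalent in spirit to scikit-image's match_histograms); real archival workflows would also have to deal with vignetting, film grain and scratches.

```python
import numpy as np

def match_histogram(source, reference):
    """Map the grey levels of an archival image onto the reference image's
    distribution by aligning their cumulative histograms."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source grey level, pick the reference level of the same rank.
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    return matched[src_idx].reshape(source.shape)
```

Applied scene by scene against a radiometrically stable reference, this removes most of the global tone drift before registration and classification are attempted.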
Extracting mobile objects in images using a Velodyne lidar point cloud / Bruno Vallet in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, II-3 W4 (March 2015)
Provenance capture and use in a satellite data processing pipeline / Scott Jensen in IEEE Transactions on geoscience and remote sensing, vol 51 n° 11 (November 2013)
One billion points in the cloud – an octree for efficient processing of 3D laser scans / Jan Elseberg in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013)