Descriptor: architecture pipeline (processeur)
Documents available in this category (7)
Rasterisation-based progressive photon mapping / Iordanis Evangelou in The Visual Computer, vol 36 n° 10 - 12 (October 2020)
[article]
Title: Rasterisation-based progressive photon mapping
Document type: Article/Communication
Authors: Iordanis Evangelou; Georgios Papaioannou; Konstantinos Vardis; et al.
Year of publication: 2020
Pages: pp 1993 - 2004
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] cartographie
[Termes IGN] implémentation (informatique)
[Termes IGN] lancer de rayons
[Termes IGN] photon
[Termes IGN] processeur graphique
[Termes IGN] rastérisation
Abstract: (author) Ray tracing on the GPU has been synergistically operating alongside rasterisation in interactive rendering engines for some time now, in order to accurately capture certain illumination effects. In the same spirit, in this paper, we propose an implementation of progressive photon mapping entirely on the rasterisation pipeline, which is agnostic to the specific GPU architecture, in order to synthesise images at interactive rates. While any GPU ray tracing architecture can be used for photon mapping, performing ray traversal in image space minimises acceleration data structure construction time and supports arbitrarily complex and fully dynamic geometry. Furthermore, this strategy maximises data structure reuse by encompassing rasterisation, ray tracing and photon gathering tasks in a single data structure. Both eye and light paths of arbitrary depth are traced on multi-view deep G-buffers, and photon flux is gathered by a properly adapted multi-view photon splatting. In contrast to previous methods exploiting rasterisation to some extent, due to our novel indirect photon splatting approach, any event combination present in photon mapping is captured. We evaluate our method using typical test scenes and scenarios for photon mapping methods and show how our approach outperforms typical GPU-based progressive photon mapping.
Record number: A2020-412
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1007/s00371-020-01897-3
Online publication date: 14/07/2020
Online: https://doi.org/10.1007/s00371-020-01897-3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95935
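Progressive photon mapping, which the method above implements on the rasterisation pipeline, rests on a per-pixel statistics update that shrinks the gather radius as photons accumulate. A minimal Python sketch of that standard update; the α value and function shape are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of the progressive per-pixel update used by progressive
# photon mapping variants: each pass shrinks the squared gather radius
# and rescales the accumulated flux accordingly.
ALPHA = 0.7  # fraction of newly gathered photons kept (assumed value)

def ppm_update(radius2, n_acc, tau, m_new, flux_new):
    """One progressive pass for one pixel.

    radius2 : current squared gather radius
    n_acc   : accumulated (fractional) photon count
    tau     : accumulated, radius-scaled flux
    m_new   : photons gathered this pass
    flux_new: their total flux
    """
    if m_new == 0:
        return radius2, n_acc, tau
    n_next = n_acc + ALPHA * m_new
    ratio = n_next / (n_acc + m_new)
    radius2_next = radius2 * ratio          # radius shrinks every pass
    tau_next = (tau + flux_new) * ratio     # flux rescaled to new radius
    return radius2_next, n_next, tau_next
```

Repeating this update over many passes drives the radius toward zero while the estimate converges, which is what lets the method refine the image at interactive rates.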
in The Visual Computer > vol 36 n° 10 - 12 (October 2020) . - pp 1993 - 2004

Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars
Document type: Article/Communication
Authors: Chenfanfu Jiang; Shuyao Qi; Yixin Zhu; et al.
Year of publication: 2018
Pages: pp 920 - 941
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] compréhension de l'image
[Termes IGN] image RVB
[Termes IGN] rendu réaliste
[Termes IGN] scène intérieure
[Termes IGN] segmentation sémantique
[Termes IGN] synthèse d'image
Abstract: (author) We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms. In particular, we devise a learning-based pipeline of algorithms capable of automatically generating and rendering a potentially infinite variety of indoor scenes by using a stochastic grammar, represented as an attributed Spatial And-Or Graph, in conjunction with state-of-the-art physics-based rendering. Our pipeline is capable of synthesizing scene layouts with high diversity, and it is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. It renders photorealistic RGB images of the generated scenes while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity, and material information (detailed to object parts), as well as environments (e.g., illuminations and camera viewpoints). We demonstrate the value of our synthesized dataset, by improving performance in certain machine-learning-based scene understanding tasks (depth and surface normal prediction, semantic segmentation, reconstruction, etc.) and by providing benchmarks for and diagnostics of trained models by modifying object attributes and scene properties in a controllable manner.
Record number: A2018-416
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1103-5
Online publication date: 30/06/2018
Online: https://doi.org/10.1007/s11263-018-1103-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90899
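The attributed Spatial And-Or Graph described in the abstract alternates mandatory decompositions (And nodes, expand all children) with probabilistic choices (Or nodes, pick one child). A toy Python sketch of sampling a scene layout from such a grammar; the grammar contents, names, and probabilities here are invented for illustration, not taken from the paper:

```python
import random

# Illustrative stochastic And-Or grammar: AND nodes expand every child,
# OR nodes choose one child according to branching probabilities.
GRAMMAR = {
    "room":      ("AND", ["floor", "furniture"]),
    "furniture": ("OR",  [("desk_set", 0.5), ("bed_set", 0.5)]),
    "desk_set":  ("AND", ["desk", "chair"]),
    "bed_set":   ("AND", ["bed", "nightstand"]),
}

def sample(symbol, rng=random):
    """Recursively sample one scene: a flat list of terminal objects."""
    rule = GRAMMAR.get(symbol)
    if rule is None:                       # terminal: an object to place
        return [symbol]
    kind, children = rule
    if kind == "AND":                      # expand all children
        return [obj for c in children for obj in sample(c, rng)]
    names = [c for c, _ in children]       # OR: weighted random choice
    probs = [p for _, p in children]
    return sample(rng.choices(names, weights=probs)[0], rng)
```

Repeated calls to `sample("room")` yield diverse but always structurally valid layouts, which is the property the paper exploits to generate scenes at scale.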
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 920 - 941

Large scale textured mesh reconstruction from mobile mapping images and LIDAR scans / Mohamed Boussaha in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-2 (June 2018)
[article]
Title: Large scale textured mesh reconstruction from mobile mapping images and LIDAR scans
Document type: Article/Communication
Authors: Mohamed Boussaha; Bruno Vallet; Patrick Rives
Year of publication: 2018
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ISPRS 2018, TC II Mid-term Symposium, Towards Photogrammetry 2020, 04/06/2018 - 07/06/2018, Riva del Garda, Italy, ISPRS OA Annals
Pages: pp 49 - 56
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] chaîne de traitement
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] grande échelle
[Termes IGN] maillage
[Termes IGN] orthoimage
[Termes IGN] reconstruction d'objet
[Termes IGN] Rouen
[Termes IGN] semis de points
[Termes IGN] texture d'image
Abstract: (author) The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France, resulting in nearly 2 billion points and 40,000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
Record number: A2018-329
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-IV-2-49-2018
Online publication date: 28/05/2018
Online: http://dx.doi.org/10.5194/isprs-annals-IV-2-49-2018
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90471
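The first stage the abstract describes, slicing the acquisition into temporal chunks so that points and images of the same time span are processed together, can be sketched as below. The function name and the fixed-duration chunking criterion are assumptions for illustration; the paper only states that chunks ensure reasonable size and time consistency:

```python
def slice_into_chunks(items, chunk_seconds):
    """Group (timestamp, payload) pairs into consecutive temporal chunks.

    Each chunk spans at most `chunk_seconds`, so LiDAR points and images
    acquired in the same interval land in the same chunk and can be
    reconstructed and textured independently (hence in parallel).
    """
    chunks, current, t0 = [], [], None
    for t, payload in sorted(items):
        if t0 is None:
            t0 = t
        if t - t0 >= chunk_seconds:     # current chunk is full: start a new one
            chunks.append(current)
            current, t0 = [], t
        current.append((t, payload))
    if current:
        chunks.append(current)
    return chunks
```

Because every chunk is self-contained, the per-chunk reconstruction and texturing steps can run in separate threads or on separate machines, which is the parallelism the authors report.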
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol IV-2 (June 2018) . - pp 49 - 56

From Google Maps to a fine-grained catalog of street trees / Steve Branson in ISPRS Journal of photogrammetry and remote sensing, vol 135 (January 2018)
[article]
Title: From Google Maps to a fine-grained catalog of street trees
Document type: Article/Communication
Authors: Steve Branson; Jan Dirk Wegner; David Hall; Nico Lang; Konrad Schindler; Pietro Perona
Year of publication: 2018
Pages: pp 13 - 30
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] arbre urbain
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] classification dirigée
[Termes IGN] détection de changement
[Termes IGN] Google Maps
[Termes IGN] inventaire de la végétation
[Termes IGN] photo-interprétation assistée par ordinateur
[Termes IGN] réseau neuronal convolutif
[Termes IGN] ville
Abstract: (author) Up-to-date catalogs of the urban tree population are of importance for municipalities to monitor and improve quality of life in cities. Despite much research on automation of tree mapping, mainly relying on dedicated airborne LiDAR or hyperspectral campaigns, tree detection and species recognition is still mostly done manually in practice. We present a fully automated tree detection and species recognition pipeline that can process thousands of trees within a few hours using publicly available aerial and street view images of Google Maps™. These data provide rich information from different viewpoints and at different scales, from global tree shapes to bark textures. Our work-flow is built around a supervised classification that automatically learns the most discriminative features from thousands of trees and corresponding, publicly available tree inventory data. In addition, we introduce a change tracker that recognizes changes of individual trees at city-scale, which is essential to keep an urban tree inventory up-to-date. The system takes street-level images of the same tree location at two different times and classifies the type of change (e.g., tree has been removed). Drawing on recent advances in computer vision and machine learning, we apply convolutional neural networks (CNN) for all classification tasks. We propose the following pipeline: download all available panoramas and overhead images of an area of interest; detect trees per image and combine multi-view detections in a probabilistic framework, adding prior knowledge; recognize fine-grained species of detected trees. In a later, separate module, track trees over time, detect significant changes and classify the type of change. We believe this is the first work to exploit publicly available image data for city-scale street tree detection, species recognition and change tracking, exhaustively over several square kilometers, respectively many thousands of trees. Experiments in the city of Pasadena, California, USA show that we can detect >70% of the street trees, assign correct species to >80% for 40 different species, and correctly detect and classify changes in >90% of the cases.
Record number: A2018-068
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.11.008
Online publication date: 20/11/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.11.008
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89426
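One common way to "combine multi-view detections in a probabilistic framework, adding prior knowledge", as the abstract puts it, is naive log-odds fusion of per-view detection scores with a location prior. The sketch below is an illustrative stand-in under that assumption, not the paper's exact model:

```python
import math

def fuse_multiview(scores, prior=0.1):
    """Fuse per-view detection probabilities for one candidate tree location.

    Treats the views as independent evidence: their log-odds are summed on
    top of the prior log-odds, then mapped back to a probability.
    (Illustrative naive-Bayes fusion; prior value is an assumption.)
    """
    logit = math.log(prior / (1.0 - prior))
    for p in scores:
        p = min(max(p, 1e-6), 1.0 - 1e-6)   # clamp to avoid log(0)
        logit += math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-logit))
```

With no views, the fused score falls back to the prior; several confident views push the posterior toward 1, while conflicting weak views leave it near the prior.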
in ISPRS Journal of photogrammetry and remote sensing > vol 135 (January 2018) . - pp 13 - 30
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018011 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2018012 | DEP-EAF | Journal | Nancy | Dépôt en unité | Not for loan
081-2018013 | DEP-EXM | Journal | Saint-Mandé | Dépôt en unité | Not for loan

Extracting mobile objects in images using a Velodyne lidar point cloud / Bruno Vallet in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol II-3 W4 (March 2015)
[article]
Title: Extracting mobile objects in images using a Velodyne lidar point cloud
Document type: Article/Communication
Authors: Bruno Vallet; Wen Xiao; Mathieu Brédif
Year of publication: 2015
Conference: ISPRS 2015, PIA 2015 - HRIGI 2015 Joint ISPRS conference, 25/03/2015 - 27/03/2015, Munich, Germany, ISPRS OA Annals
Pages: pp 247 - 253
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] algorithme Graph-Cut
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] classification de Dempster-Shafer
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] image terrestre
[Termes IGN] objet mobile
[Termes IGN] semis de points
[Termes IGN] théorie de Dempster-Shafer
Abstract: (author) This paper presents a full pipeline to extract mobile objects in images based on a simultaneous laser acquisition with a Velodyne scanner. The point cloud is first analysed to extract mobile objects in 3D. This is done using Dempster-Shafer theory, and it results in weights telling for each point whether it corresponds to a mobile object, a fixed object, or whether no decision can be made from the data (unknown). These weights are projected into an image acquired simultaneously and used to segment the image into the mobile and static parts of the scene.
Record number: A2015-757
Author affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprsannals-II-3-W4-247-2015
Online publication date: 11/03/2015
Online: http://dx.doi.org/10.5194/isprsannals-II-3-W4-247-2015
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=78753
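The Dempster-Shafer step the abstract describes assigns each point belief masses over the hypotheses mobile, fixed, and unknown (the whole frame), and fuses evidence from successive observations. A minimal Python sketch of Dempster's rule of combination on that three-element mass representation; variable names and example masses are illustrative:

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {mobile, fixed}.

    Masses are dicts over 'mobile', 'fixed' and 'unknown' (the full frame).
    Intersections with 'unknown' keep the other hypothesis; mobile ∩ fixed
    is empty and contributes to the conflict, which is normalised out.
    """
    def meet(a, b):
        if a == "unknown":
            return b
        if b == "unknown":
            return a
        return a if a == b else None          # None = empty set (conflict)

    combined = {"mobile": 0.0, "fixed": 0.0, "unknown": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = meet(a, b)
            if inter is None:
                conflict += pa * pb
            else:
                combined[inter] += pa * pb
    norm = 1.0 - conflict                     # assumes sources not fully conflicting
    return {k: v / norm for k, v in combined.items()}
```

Two weakly "mobile" observations reinforce each other: fusing them yields a higher mobile mass than either source alone, which is how per-point evidence accumulates before being projected into the image.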
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol II-3 W4 (March 2015) . - pp 247 - 253

Provenance capture and use in a satellite data processing pipeline / Scott Jensen in IEEE Transactions on geoscience and remote sensing, vol 51 n° 11 (November 2013) [Permalink]
One billion points in the cloud – an octree for efficient processing of 3D laser scans / Jan Elseberg in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013) [Permalink]