Descriptor
Termes IGN > imagerie > image numérique
image numérique. Synonym(s): image en mode maillé
Documents available in this category (2121)
The design and testing of 3DmoveR: an experimental tool for usability studies of interactive 3D maps / Lukas Herman in Cartographic perspectives, n° 90 ([01/10/2018])
[article]
Title: The design and testing of 3DmoveR: an experimental tool for usability studies of interactive 3D maps
Document type: Article/Communication
Authors: Lukas Herman; Tomas Řezník; Zdenek Stachoň; Jan Russnák
Year of publication: 2018
Pages: pp 31 - 63
General note: bibliography
Language: English (eng)
Descriptors: [Termes IGN] bibliothèque logicielle
[Termes IGN] carte en 3D
[Termes IGN] convivialité
[Termes IGN] Javascript (langage de script)
[Termes IGN] PHP
[Termes IGN] scène 3D
[Termes IGN] test de performance
[Vedettes matières IGN] Géovisualisation
Keywords: 3D Movement and Interaction Recorder (3DmoveR)
Abstract: (author) Various widely available applications such as Google Earth have made interactive 3D visualizations of spatial data popular. While several studies have focused on how users perform when interacting with these 3D visualizations, it has not been common to record their virtual movements in 3D environments or interactions with 3D maps. We therefore created and tested a new web-based research tool: a 3D Movement and Interaction Recorder (3DmoveR). Its design incorporates findings from the latest 3D visualization research, and is built upon an iterative requirements analysis. It is implemented using open web technologies such as PHP, JavaScript, and the X3DOM library. The main goal of the tool is to record camera position and orientation during a user's movement within a virtual 3D scene, together with other aspects of their interaction. After building the tool, we performed an experiment to demonstrate its capabilities. This experiment revealed differences between laypersons and experts (cartographers) when working with interactive 3D maps. For example, experts achieved higher numbers of correct answers in some tasks, had shorter response times, followed shorter virtual trajectories, and moved through the environment more smoothly. Interaction-based clustering as well as other ways of visualizing and qualitatively analyzing user interaction were explored.
Record number: A2018-610
Author affiliation: non IGN
Subject area: GEOMATIQUE
Nature: Article
DOI: 10.14714/CP90.1411
Online publication date: 09/09/2018
Online: https://doi.org/10.14714/CP90.1411
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92837
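The abstract above describes recording camera position over time; one quantity later compared between users is virtual trajectory length. A minimal Python sketch of that computation, assuming a hypothetical log of timestamped camera positions (3DmoveR's actual log format is not specified in this notice):

```python
import math

# Hypothetical log entries: (time_s, x, y, z) camera positions sampled
# while a user moves through a virtual 3D scene. The tuple layout is an
# illustrative assumption, not 3DmoveR's actual record format.
log = [
    (0.0, 0.0, 0.0, 5.0),
    (0.5, 0.4, 0.0, 4.8),
    (1.0, 1.0, 0.1, 4.5),
]

def trajectory_length(entries):
    """Sum of Euclidean distances between consecutive camera positions."""
    total = 0.0
    for (_, *p0), (_, *p1) in zip(entries, entries[1:]):
        total += math.dist(p0, p1)
    return total

print(round(trajectory_length(log), 3))  # total virtual distance travelled
```

On such logs, the shorter totals reported for experts would correspond to more direct movement through the scene.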
in Cartographic perspectives > n° 90 [01/10/2018] . - pp 31 - 63 [article]

Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Augmented reality meets computer vision : efficient data generation for urban driving scenes
Document type: Article/Communication
Authors: Hassan Abu Alhaija; Siva Karthik Mustikovela; Lars Mescheder; Andreas Geiger; Carsten Rother
Year of publication: 2018
Pages: pp 961 - 972
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage automatique
[Termes IGN] détection d'objet
[Termes IGN] réalité augmentée
[Termes IGN] scène urbaine
[Termes IGN] vision par ordinateur
Abstract: (author) The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360-degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach by training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on fully synthetic data or on limited amounts of annotated real data.
Record number: A2018-417
Author affiliation: non IGN
Subject area: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1070-x
Online publication date: 07/03/2018
Online: https://doi.org/10.1007/s11263-018-1070-x
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90900
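The augmentation step described in this abstract amounts to compositing a rendered object, with its coverage mask, over a real photograph at a chosen placement. A minimal sketch, with illustrative array shapes and a hypothetical `composite` helper (not the authors' actual pipeline):

```python
import numpy as np

def composite(background, render, alpha, top, left):
    """Alpha-blend `render` (h, w, 3) over `background` at (top, left).

    `alpha` is the render's (h, w) coverage mask in [0, 1]; pixels with
    alpha 0 keep the real background, preserving its appearance.
    """
    out = background.astype(float)
    h, w = alpha.shape
    region = out[top:top + h, left:left + w]
    a = alpha[..., None]
    out[top:top + h, left:left + w] = a * render + (1.0 - a) * region
    return out.astype(background.dtype)

bg = np.zeros((4, 4, 3), dtype=np.uint8)       # stand-in for a real image
obj = np.full((2, 2, 3), 255, dtype=np.uint8)  # stand-in for a rendered car
mask = np.ones((2, 2))                         # fully opaque render
aug = composite(bg, obj, mask, 1, 1)           # object pasted at row 1, col 1
```

In the paper's setting, the placement (`top`, `left`) is exactly what the manual and automatic positioning strategies decide; everything outside the mask stays real.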
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 961 - 972 [article]

La cartographie mobile et le géoréférencement précis de réseaux souterrains / Garance Weller in XYZ, n° 156 (septembre - novembre 2018)
[article]
Title: La cartographie mobile et le géoréférencement précis de réseaux souterrains
Document type: Article/Communication
Authors: Garance Weller; Quentin Dartiailh
Year of publication: 2018
Pages: pp 17 - 20
Language: French (fre)
Descriptors: [Vedettes matières IGN] Applications SIG
[Termes IGN] caméra numérique
[Termes IGN] canalisation
[Termes IGN] image optique
[Termes IGN] réseau technique souterrain
[Termes IGN] système d'information géographique
[Termes IGN] système de numérisation mobile
Abstract: (author) Terrestrial mobile mapping systems have been developing for some fifteen years, offering ways to collect large-scale, high-accuracy geolocated information. Many solutions now exist, most integrating complementary sensors or services: lidar and optical cameras, multi-constellation global navigation satellite systems, and dense networks of fixed or mobile ground base stations, operated either directly by the end user or through service providers. We present and evaluate an integrated mobile mapping solution based on optical imagery alone for field surveys, together with an associated geographic information system, for locating and mapping buried linear networks in 3D within the framework of the French anti-damage reform for sensitive networks.
Record number: A2018-393
Author affiliation: non IGN
Subject area: GEOMATIQUE
Nature: Article
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90827
in XYZ > n° 156 (septembre - novembre 2018) . - pp 17 - 20 [article]
Copies (1)
Barcode: 112-2018031 | Call number: RAB | Support: Revue | Location: Centre de documentation | Section: En réserve L003 | Availability: Disponible

Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars
Document type: Article/Communication
Authors: Chenfanfu Jiang; Shuyao Qi; Yixin Zhu; et al.
Year of publication: 2018
Pages: pp 920 - 941
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] compréhension de l'image
[Termes IGN] image RVB
[Termes IGN] rendu réaliste
[Termes IGN] scène intérieure
[Termes IGN] segmentation sémantique
[Termes IGN] synthèse d'image
Abstract: (author) We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms. In particular, we devise a learning-based pipeline of algorithms capable of automatically generating and rendering a potentially infinite variety of indoor scenes by using a stochastic grammar, represented as an attributed Spatial And-Or Graph, in conjunction with state-of-the-art physics-based rendering. Our pipeline is capable of synthesizing scene layouts with high diversity, and it is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. It renders photorealistic RGB images of the generated scenes while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity, and material information (detailed to object parts), as well as environments (e.g., illuminations and camera viewpoints). We demonstrate the value of our synthesized dataset by improving performance in certain machine-learning-based scene understanding tasks (depth and surface normal prediction, semantic segmentation, reconstruction, etc.) and by providing benchmarks for and diagnostics of trained models by modifying object attributes and scene properties in a controllable manner.
Record number: A2018-416
Author affiliation: non IGN
Subject area: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1103-5
Online publication date: 30/06/2018
Online: https://doi.org/10.1007/s11263-018-1103-5
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90899
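A stochastic grammar of the kind named in this abstract can be sampled top-down: an And-node expands all of its children, while an Or-node picks one child according to its probability. A toy sketch with an illustrative grammar (not the paper's actual Spatial And-Or Graph, which also carries spatial attributes):

```python
import random

# Toy grammar: ("and", [children]) expands everything;
# ("or", [(child, prob), ...]) samples one branch. Symbols absent from
# the table are terminals, i.e. objects to place in the scene.
GRAMMAR = {
    "scene": ("and", ["room", "furniture"]),
    "furniture": ("or", [("desk_set", 0.5), ("bed_set", 0.5)]),
    "desk_set": ("and", ["desk", "chair"]),
    "bed_set": ("and", ["bed", "nightstand"]),
}

def sample(symbol, rng):
    """Recursively expand `symbol` into a list of terminal objects."""
    if symbol not in GRAMMAR:
        return [symbol]
    kind, children = GRAMMAR[symbol]
    if kind == "and":
        return [obj for c in children for obj in sample(c, rng)]
    choices, probs = zip(*children)
    return sample(rng.choices(choices, weights=probs)[0], rng)

layout = sample("scene", random.Random(0))
print(layout)  # e.g. a room plus one sampled furniture set
```

Repeated sampling yields the layout diversity the abstract describes; the paper additionally attaches spatial attributes and physics-based rendering, which this sketch omits.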
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 920 - 941 [article]

Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning / Rui Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 143 (September 2018)
[article]
Title: Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning
Document type: Article/Communication
Authors: Rui Zhang; Guangyun Li; Minglei Li; Li Wang
Year of publication: 2018
Pages: pp 85 - 96
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] détection du bâti
[Termes IGN] fusion de données
[Termes IGN] réseau neuronal convolutif
[Termes IGN] scène 3D
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (author) We address the issue of the semantic segmentation of large-scale 3D scenes by fusing 2D images and 3D point clouds. First, a Deeplab-Vgg16 based Large-Scale and High-Resolution model (DVLSHR), built on the deep Visual Geometry Group (VGG16) network, is created and fine-tuned by training seven deep convolutional neural networks on four benchmark datasets. On the CityScapes val set, DVLSHR achieves a 74.98% mean Pixel Accuracy (mPA) and a 64.17% mean Intersection over Union (mIoU), and can be adapted to segment the captured images (image resolution 2832 × 4256 pixels). Second, the preliminary segmentation results for the 2D images are mapped to the 3D point clouds according to the coordinate relationships between the images and the point clouds. Third, based on the mapping results, fine features of buildings are further extracted directly from the 3D point clouds. Our experiments show that the proposed fusion method can segment local and global features efficiently and effectively.
Record number: A2018-356
Author affiliation: non IGN
Subject area: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.022
Online publication date: 11/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.022
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90590
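The second step in this abstract, mapping 2D segmentation results onto the point cloud, can be sketched as a pinhole projection of each 3D point into the labeled image, copying the hit pixel's class to the point. The camera parameters, label map, and function name below are illustrative assumptions, not the authors' data or code:

```python
import numpy as np

def transfer_labels(points, labels_2d, K, R, t):
    """Project `points` (N, 3) with intrinsics K and pose (R, t), and
    read each point's class from the (H, W) integer label map."""
    cam = R @ points.T + t[:, None]          # world -> camera frame
    uv = K @ cam                             # homogeneous pixel coords
    uv = (uv[:2] / uv[2]).round().astype(int)
    H, W = labels_2d.shape
    out = np.full(len(points), -1)           # -1 = not visible in image
    inside = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (cam[2] > 0)
    out[inside] = labels_2d[uv[1, inside], uv[0, inside]]
    return out

# Tiny example: identity pose, focal length 1, principal point (1, 1).
labels = np.array([[0, 0], [0, 7]])
K = np.array([[1., 0., 1.], [0., 1., 1.], [0., 0., 1.]])
pts = np.array([[0., 0., 1.],   # projects onto the pixel labeled 7
                [5., 5., 1.]])  # projects outside the image
print(transfer_labels(pts, labels, K, np.eye(3), np.zeros(3)))
```

A real pipeline would also handle occlusion (several points per pixel), which this sketch ignores.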
in ISPRS Journal of photogrammetry and remote sensing > vol 143 (September 2018) . - pp 85 - 96 [article]
Copies (3)
Barcode: 081-2018091 | Call number: RAB | Support: Livre | Location: Centre de documentation | Section: En réserve L003 | Availability: Disponible
Barcode: 081-2018093 | Call number: DEP-EXM | Support: Livre | Location: LASTIG | Section: Dépôt en unité | Availability: Exclu du prêt
Barcode: 081-2018092 | Call number: DEP-EAF | Support: Livre | Location: Nancy | Section: Dépôt en unité | Availability: Exclu du prêt

3-D deep learning approach for remote sensing image classification / Amina Ben Hamida in IEEE Transactions on geoscience and remote sensing, vol 56 n° 8 (August 2018)
Comparison of high-density LiDAR and satellite photogrammetry for forest inventory / Grant D. Pearse in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
A deep neural network with spatial pooling (DNNSP) for 3-D point cloud classification / Zhen Wang in IEEE Transactions on geoscience and remote sensing, vol 56 n° 8 (August 2018)
Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques / Chinsu Lin in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
ICARE-VEG: A 3D physics-based atmospheric correction method for tree shadows in urban areas / Karine R.M. Adeline in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
Robust detection and affine rectification of planar homogeneous texture for scene understanding / Shahzor Ahmad in International journal of computer vision, vol 126 n° 8 (August 2018)
Spectral-spatial classification of hyperspectral images using wavelet transform and hidden Markov random fields / Elham Kordi Ghasrodashti in Geocarto international, vol 33 n° 8 (August 2018)
Evolutionary approach for detection of buried remains using hyperspectral images / Leon Dozal in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 7 (juillet 2018)
Exploring geo-tagged photos for land cover validation with deep learning / Hanfa Xing in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography / Lukas Roth in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)