Descriptor
Documents available in this category (17)
Semantic-aware label placement for augmented reality in street view / Jianqing Jia in The Visual Computer, vol 37 n° 7 (July 2021)
[article]
Title: Semantic-aware label placement for augmented reality in street view
Document type: Article/Communication
Authors: Jianqing Jia, Author; Semir Elezovikj, Author; Heng Fan, Author; et al., Author
Publication year: 2021
Pagination: pp 1805 - 1819
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] Street View imagery
[IGN terms] semantic information
[IGN terms] optimization (mathematics)
[IGN terms] point of interest
[IGN terms] augmented reality
[IGN terms] saliency
[IGN terms] urban scene
[IGN terms] semantic segmentation
Abstract: (author) In an augmented reality (AR) application, placing labels in a manner that is clear and readable without occluding critical information from the real world is a challenging problem. This paper introduces a label placement technique for AR in street view scenarios. We propose a semantic-aware, task-specific label placement method that identifies potentially important image regions through a novel feature map, which we refer to as the guidance map. Given an input image, its saliency information, its semantic information and a task-specific importance prior are integrated into the guidance map for our labeling task. To learn the task prior, we created a label placement dataset capturing users' labeling preferences, which we also use for evaluation. Our solution encodes the constraints for placing labels in an optimization problem to obtain the final label layout, so that labels are placed in appropriate positions and the chance of overlaying important real-world objects in street view AR scenarios is reduced. The experimental validation clearly shows the benefits of our method over previous solutions in AR street view navigation and similar applications.
Record number: A2021-542
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01939-w
Online publication date: 02/08/2020
Online: https://doi.org/10.1007/s00371-020-01939-w
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98022
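The abstract above casts label placement as an optimization over a guidance map: a label should cover low-importance pixels while staying near its anchor point. A minimal sketch of that idea follows; the brute-force search, function name, search radius and distance weight are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def place_label(guidance, anchor, label_size, radius=30, dist_weight=0.5):
    """Choose a label's top-left corner so the label box covers
    low-importance pixels of the guidance map while staying close
    to its anchor point.

    guidance   : 2D array, higher = more important (should not be occluded)
    anchor     : (row, col) of the point of interest
    label_size : (height, width) of the label box
    """
    h, w = guidance.shape
    lh, lw = label_size
    # Integral image: the occlusion cost of any box becomes an O(1) lookup.
    integral = guidance.cumsum(0).cumsum(1)

    def box_cost(r, c):
        total = integral[r + lh - 1, c + lw - 1]
        if r > 0:
            total -= integral[r - 1, c + lw - 1]
        if c > 0:
            total -= integral[r + lh - 1, c - 1]
        if r > 0 and c > 0:
            total += integral[r - 1, c - 1]
        return total

    best, best_cost = None, float("inf")
    ar, ac = anchor
    for r in range(max(0, ar - radius), min(h - lh, ar + radius) + 1):
        for c in range(max(0, ac - radius), min(w - lw, ac + radius) + 1):
            # Occlusion cost plus a leader-line penalty for drifting away.
            d = np.hypot(r - ar, c - ac)
            cost = box_cost(r, c) + dist_weight * d
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best
```

The integral image keeps each candidate evaluation constant-time, so even this naive grid search is tractable for a single label; the paper's joint layout over many labels requires a real optimizer.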
in The Visual Computer > vol 37 n° 7 (July 2021) . - pp 1805 - 1819
[article]
Geometric computer vision: omnidirectional visual and remotely sensed data analysis / Pouria Babahajiani (2021)
Title: Geometric computer vision: omnidirectional visual and remotely sensed data analysis
Document type: Thesis/HDR
Authors: Pouria Babahajiani, Author; Moncef Gabbouj, Thesis supervisor
Publisher: Tampere [Finland]: Tampere University
Publication year: 2021
Extent: 147 p.
Format: 21 x 30 cm
ISBN/ISSN/EAN: 978-952-03-1979-3
General note: bibliography
Academic dissertation, Tampere University, Faculty of Information Technology and Communication Sciences, Finland
Language: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] machine learning
[IGN terms] processing pipeline
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] kinetic depth effect
[IGN terms] public space
[IGN terms] feature extraction
[IGN terms] panoramic image
[IGN terms] Street View imagery
[IGN terms] terrestrial image
[IGN terms] 3D urban space model
[IGN terms] semantic data model
[IGN terms] virtual reality
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] computer vision
[IGN terms] urban area
Decimal index: THESE Theses and HDR
Abstract: (author) Information about the surrounding environment perceived by the human eye is one of the most important cues enabled by sight. The scientific community has put great effort over time into developing methods for scene acquisition and scene understanding using computer vision techniques. The goal of this thesis is to study geometry in computer vision and its applications. In computer vision, geometry describes the topological structure of the environment. Specifically, it concerns measures such as shape, volume, depth, pose, disparity, motion, and optical flow, all of which are essential cues in scene acquisition and understanding.
This thesis focuses on two primary objectives. The first is to assess the feasibility of creating semantic models of urban areas and public spaces using geometrical features coming from LiDAR sensors. The second is to develop a practical Virtual Reality (VR) video representation that supports 6-Degrees-of-Freedom (DoF) head motion parallax using geometric computer vision and machine learning. The thesis's first contribution is the proposal of semantic segmentation of 3D LiDAR point clouds and its applications. The ever-growing demand for reliable mapping data, especially in urban environments, has motivated the development of mobile mapping systems. These systems acquire high-precision data, in particular 3D LiDAR point clouds and optical images. The large amount of data and its diversity make data processing a complex task. A complete urban map data processing pipeline has been developed, which annotates 3D LiDAR points with semantic labels. The proposed method is made efficient by combining fast rule-based processing for building and street surface segmentation with supervoxel-based feature extraction and classification for the remaining map elements (cars, pedestrians, trees, and traffic signs). Based on the experiments, the rule-based processing stage provides substantial improvement not only in computational time but also in classification accuracy. Furthermore, two back ends are developed for the semantically labeled data that exemplify two important applications: (1) a high-definition 3D urban map that reconstructs a realistic 3D model from the input labeled point cloud, and (2) semantic segmentation of 2D street view images. The second contribution of the thesis is the development of a practical, fast, and robust method to create high-resolution Depth-Augmented Stereo Panoramas (DASP) from a 360-degree VR camera. A novel and complete optical-flow-based pipeline is developed, which provides stereo 360° views of a real-world scene with DASP.
The system consists of a texture and a depth panorama for each eye. A bi-directional flow estimation network is explicitly designed for stitching and stereo depth estimation, and yields state-of-the-art results within a limited run-time budget. The proposed architecture explicitly leverages geometry through optical-flow ground truths for both tasks; building architectures that use this knowledge simplifies the learning problem. Moreover, a 6-DoF testbed for immersive content quality assessment is proposed. Modern machine learning techniques have been used to design the proposed architectures, addressing many core computer vision problems by exploiting the enriched information coming from 3D scene structures. The architectures proposed in this thesis are practical systems that impact today's technologies, including autonomous vehicles, virtual reality, augmented reality, robots, and smart-city infrastructures.
Contents: 1- Introduction
2- Geometry in Computer Vision
3- Contributions
4- Conclusion
Record number: 28323
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Foreign thesis
Thesis note: PhD Thesis: Computing and Electrical Engineering: Tampere, Finland: 2021
DOI: none
Online: https://trepo.tuni.fi/handle/10024/131379
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98342
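The thesis abstract above describes a two-stage LiDAR labeling pipeline: fast rule-based segmentation of street surfaces and buildings first, with the remaining points deferred to a feature-based classifier. A rough sketch of such a rule-based first stage follows; the grid size, thresholds and function name are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

def rule_based_labels(points, cell=1.0, ground_tol=0.2, building_height=4.0):
    """Fast first stage of a LiDAR labeling pipeline: bin points into a
    2D grid, take each cell's lowest point as local ground, label points
    near that ground as 'street' and points in cells with a tall vertical
    extent as 'building'. Everything else is left for a later classifier.

    points : (N, 3) array of x, y, z coordinates
    returns: list of N labels from {'street', 'building', 'other'}
    """
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]
    zmin, zmax = {}, {}
    for k, z in zip(keys, points[:, 2]):
        zmin[k] = min(zmin.get(k, z), z)
        zmax[k] = max(zmax.get(k, z), z)

    labels = []
    for k, z in zip(keys, points[:, 2]):
        if z - zmin[k] < ground_tol:
            labels.append("street")    # close to the local ground level
        elif zmax[k] - zmin[k] > building_height:
            labels.append("building")  # tall vertical extent in this cell
        else:
            labels.append("other")
    return labels
```

The point of such rules is exactly what the abstract reports: they are cheap, and removing ground and facade points early shrinks the workload for the slower supervoxel classification stage.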
Title: Learning to map street-side objects using multiple views
Document type: Thesis/HDR
Authors: Ahmed Samy Nassar, Author; Sébastien Lefèvre, Thesis supervisor; Jan Dirk Wegner, Thesis supervisor
Publisher: Vannes: Université de Bretagne Sud
Publication year: 2021
Extent: 139 p.
Format: 21 x 30 cm
General note: bibliography
Doctoral thesis of the Université de Bretagne Sud, specialty Computer Science
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] urban tree
[IGN terms] web mapping
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] multi-source data
[IGN terms] pose estimation
[IGN terms] geolocation
[IGN terms] graph
[IGN terms] Street View imagery
[IGN terms] inventory
[IGN terms] street furniture
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) Creating inventories of street-side objects and monitoring them in cities is a labor-intensive and costly process. Field workers typically conduct it on-site, recording properties of each object, for example the location, species, height, and health of a tree. Gathering such information at city scale is challenging. With the abundance of imagery, adequate coverage of a city is achieved from the different views provided by online mapping services (e.g., Google Maps and Street View, Mapillary). The availability of such imagery allows street-side object inventories to be created and updated efficiently using computer vision methods such as object detection and multiple object tracking. This thesis aims at detecting and geo-localizing street-side objects, especially trees and street signs, from multiple views using novel deep learning methods.
Contents: 1- Introduction
2- Background
3- Multi-view instance matching with learned geometric soft-constraints
4- Simultaneous multi-view instance detection with learned geometric soft-constraints
5- GeoGraphV2: Graph-based aerial & street view multi-view object detection with geometric cues end-to-end
6- Conclusion
Record number: 28674
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: French thesis
Thesis note: Doctoral thesis: Computer Science: Université de Bretagne Sud: 2021
Host institution: IRISA
DOI: none
Online: https://hal.science/tel-03523658
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99920
A graph convolutional network model for evaluating potential congestion spots based on local urban built environments / Kun Qin in Transactions in GIS, Vol 24 n° 5 (October 2020)
[article]
Title: A graph convolutional network model for evaluating potential congestion spots based on local urban built environments
Document type: Article/Communication
Authors: Kun Qin, Author; Yuanquan Xu, Author; Chaogui Kang, Author; Mei-Po Kwan, Author
Publication year: 2020
Pagination: pp 1382-1401
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Spatial analysis
[IGN terms] spatio-temporal analysis
[IGN terms] convolutional neural network classification
[IGN terms] building detection
[IGN terms] GPS data
[IGN terms] graph
[IGN terms] Street View imagery
[IGN terms] urban planning
[IGN terms] point of interest
[IGN terms] taxi
[IGN terms] road traffic
[IGN terms] Wuhan (China)
[IGN terms] dense urban area
Abstract: (author) Automatically identifying potential congestion spots in cities has significant practical implications for efficient urban development and management. It requires the ability to examine the relationships between urban built environment features and traffic congestion situations. This article presents a novel and effective approach for achieving this task based on a machine-learning technique and publicly available street-view imagery and point-of-interest (POI) data. The proposed multiple-graph-based convolutional network architecture can: (a) extract essential urban built environment features from street-view imagery and neighboring POIs; (b) model the spatial dependencies between traffic congestion on road networks via graph convolution; and (c) evaluate the risk level of road intersections with respect to emerging congestion situations based on local built environment features. We apply the model to Wuhan, China, and predict potential congestion spots across the city. The results confirm that the model prediction is highly consistent (about 85.5%) with ground-truth data based on traffic indices derived from a large taxi GPS trajectory dataset. This research enhances the understanding of traffic congestion situations under various geographic, societal, and economic contexts based on easily accessible road, street-view, and POI datasets at large spatiotemporal scales.
Record number: A2020-702
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1111/tgis.12641
Online publication date: 04/06/2020
Online: https://doi.org/10.1111/tgis.12641
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96225
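Step (b) of the abstract, propagating features between neighboring road intersections via graph convolution, can be sketched as a single propagation layer. This follows the standard symmetric normalization used in many GCNs (add self-loops, normalize by degree, project, apply a nonlinearity) and is not the paper's exact multiple-graph architecture:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: each road intersection aggregates
    the built-environment features of its neighbors through a
    symmetrically normalized adjacency matrix.

    adj    : (N, N) 0/1 adjacency of the intersection graph
    feats  : (N, F) node features (e.g. street-view + POI descriptors)
    weight : (F, F') learned projection matrix
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))     # D^{-1/2}
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0)  # ReLU activation
```

Stacking a few such layers lets an intersection's congestion-risk score depend on the built environment several hops away along the road network, which is the spatial-dependency effect the abstract describes.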
in Transactions in GIS > Vol 24 n° 5 (October 2020) . - pp 1382-1401
[article]
Local color and morphological image feature based vegetation identification and its application to human environment street view vegetation mapping, or how green is our county? / Istvan G. Lauko in Geo-spatial Information Science, vol 23 n° 3 (September 2020)
[article]
Title: Local color and morphological image feature based vegetation identification and its application to human environment street view vegetation mapping, or how green is our county?
Document type: Article/Communication
Authors: Istvan G. Lauko, Author; Adam Honts, Author; Jacob Beihoff, Author; et al., Author
Publication year: 2020
Pagination: pp 222 - 236
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] vegetation map
[IGN terms] urban mapping
[IGN terms] color (spectral variable)
[IGN terms] vegetation density
[IGN terms] vegetation extraction
[IGN terms] urban management
[IGN terms] panoramic image
[IGN terms] Street View imagery
[IGN terms] environmental indicator
[IGN terms] vegetation index
[IGN terms] Milwaukee
[IGN terms] urban landscape
[IGN terms] near-infrared radiation
Abstract: (author) Measuring the amount of vegetation in a given area on a large scale has long been accomplished using satellite and aerial imaging systems. These methods measure vegetation coverage reliably at the top of the canopy, but their capabilities are limited when it comes to identifying green vegetation located beneath the canopy cover. Measuring the amount of urban and suburban vegetation along a street network that is partially beneath the canopy has recently been introduced with the use of Google Street View (GSV) images, made accessible by the Google Street View Image API. Analyzing green vegetation through GSV images can provide a comprehensive representation of the amount of green vegetation found within geographical regions of higher population density, and it facilitates analysis performed at street level. In this paper we propose a fine-tuned color-based image filtering and segmentation technique and use it to define and map an urban green environment index. We deployed this image processing method and, using GSV images as a high-resolution GIS data source, computed and mapped the green index of Milwaukee County, a 3,082 km² urban/suburban county in Wisconsin. This approach generates a high-resolution street-level vegetation estimate that may prove valuable in urban planning and management, as well as for researchers investigating the correlation between environmental factors and human health outcomes.
Record number: A2020-563
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10095020.2020.1805367
Online publication date: 24/08/2020
Online: https://doi.org/10.1080/10095020.2020.1805367
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95880
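The color-based filtering behind a street-level green index can be illustrated with a deliberately simplified rule: count a pixel as vegetation when its green channel dominates red and blue, then report the green fraction of the image. The paper's fine-tuned filter is more elaborate, and the margin parameter here is an assumption:

```python
import numpy as np

def green_index(rgb, margin=10):
    """Fraction of pixels judged to be green vegetation by a simple
    color rule: the green channel must exceed both the red and the
    blue channel by a fixed margin.

    rgb : (H, W, 3) uint8 image array
    """
    r = rgb[..., 0].astype(int)   # widen to int to avoid uint8 overflow
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > r + margin) & (g > b + margin)
    return mask.mean()
```

Averaging this per-image fraction over all GSV panoramas sampled along a street network yields a street-level greenness map of the kind the article builds for Milwaukee County.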
in Geo-spatial Information Science > vol 23 n° 3 (September 2020) . - pp 222 - 236
[article]
Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data / Shivangi Srivastava in International journal of geographical information science IJGIS, vol 34 n° 6 (June 2020)
Geocoding of trees from street addresses and street-level images / Daniel Laumer in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
Street-Frontage-Net: urban image classification using deep convolutional neural networks / Stephen Law in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
Detecting and mapping traffic signs from Google Street View images using deep learning and GIS / Andrew Campbell in Computers, Environment and Urban Systems, vol 77 (September 2019)
Deep mapping gentrification in a large Canadian city using deep learning and Google Street View / Lazar Ilic in Plos one, vol 14 n° 3 (March 2019)
SVM et réseaux neuronaux convolutifs pour la classification de scènes urbaines / Amaury Zarzelli (2017)