Documents available in this category (530)
Title: Where do people look at during multi-scale map tasks?
Document type: Article/Communication
Authors: Laura Wenclik; Guillaume Touya
Publisher: Göttingen: Copernicus Publications
Publication year: 2023
Series: AGILE GIScience Series, vol 4
Projects: LostInZoom / Touya, Guillaume
Conference: AGILE 2023, 26th international AGILE Conference on Geographic Information Science, "Spatial data for design", 13/06/2023-16/06/2023, Delft, Netherlands (OA proceedings)
Extent: n° 51; 7 p.
General note: Bibliography
Languages: English (eng)
Descriptors: [Termes IGN] carte interactive
[Termes IGN] oculométrie
[Termes IGN] point de repère
[Termes IGN] translation
[Termes IGN] visualisation multiéchelle
[Termes IGN] zoom
[Vedettes matières IGN] Géovisualisation
Abstract: (author) In order to design better pan-scalar maps, i.e. interactive, zoomable, multi-scale maps, we need to understand how they are perceived, understood, processed, and manipulated by users. This paper reports an experiment that uses an eye tracker to analyse the gaze behaviour of users zooming and panning in a pan-scalar map. The gaze data from the experiment show how people look at landmarks to locate the new map view after a zoom. We also identified distinct behaviours during a zoom, when people stare at the mouse cursor, and during a pan, when the gaze follows a landmark while the map translates.
Record number: C2023-009
Author affiliation: UGE-LASTIG (2020- )
Subject area: GEOMATIQUE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.5194/agile-giss-4-51-2023
Online publication date: 06/06/2023
Online: https://doi.org/10.5194/agile-giss-4-51-2023
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103303

Improving deep learning on point cloud by maximizing mutual information across layers / Di Wang in Pattern recognition, vol 131 (November 2022)
[article]
Title: Improving deep learning on point cloud by maximizing mutual information across layers
Document type: Article/Communication
Authors: Di Wang; Lulu Tang; Xu Wang; et al.
Publication year: 2022
Pagination: n° 108892
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] entropie de Shannon
[Termes IGN] information sémantique
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] transformation géométrique
[Termes IGN] vision par ordinateur
[Termes IGN] visualisation 3D
Abstract: (author) It is a fundamental and vital task to enhance the perception capability of point cloud learning networks in 3D machine vision applications. Most existing methods use feature fusion and geometric transformation to improve point cloud learning, without paying enough attention to mining further intrinsic information across multiple network layers. Motivated to improve consistency between hierarchical features and to strengthen the perception capability of the point cloud network, we explore whether maximizing the mutual information (MI) across shallow and deep layers is beneficial to representation learning on point clouds. We propose a novel Maximizing Mutual Information (MMI) module, which assists the training process of the main network in capturing discriminative features of the input point clouds. Specifically, an MI-based loss function is employed to constrain the differences in semantic information between two hierarchical features extracted from the shallow and deep layers of the network. Extensive experiments show that our method is generally applicable to point cloud tasks, including classification, shape retrieval, indoor scene segmentation, 3D object detection, and completion, and they illustrate the efficacy of our proposed method and its advantages over existing ones. Our source code is available at https://github.com/wendydidi/MMI.git.
Record number: A2022-780
Author affiliation: non IGN
Subject area: IMAGERIE
Nature: Article
DOI: 10.1016/j.patcog.2022.108892
Online publication date: 08/07/2022
Online: https://doi.org/10.1016/j.patcog.2022.108892
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101859
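The record does not spell out the MMI-based loss; the paper's own implementation is at the linked repository. As a rough illustration of the general idea, a common way to maximize mutual information between matched shallow and deep feature vectors is an InfoNCE-style contrastive lower bound, sketched below in NumPy. All names and the temperature value are hypothetical; this is not the authors' code.

```python
import numpy as np

def infonce_mi_loss(shallow, deep, temperature=0.1):
    """InfoNCE-style loss: minimizing it maximizes a lower bound on the
    mutual information between matched rows of `shallow` and `deep`
    (per-sample feature vectors from a shallow and a deep layer)."""
    # L2-normalise so the dot product is a cosine similarity
    s = shallow / np.linalg.norm(shallow, axis=1, keepdims=True)
    d = deep / np.linalg.norm(deep, axis=1, keepdims=True)
    logits = s @ d.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # Log-softmax over each row; the matched pair sits on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the deep features of a sample are predictable from its shallow features (high MI), the diagonal dominates each row and the loss is small; for mismatched pairs it approaches log N.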
in Pattern recognition > vol 131 (November 2022) . - n° 108892 [article]

Augmented reality for scene text recognition, visualization and reading to assist visually impaired people / Imene Ouali in Procedia Computer Science, vol 207 (2022)
[article]
Title: Augmented reality for scene text recognition, visualization and reading to assist visually impaired people
Document type: Article/Communication
Authors: Imene Ouali; Mohamed Ben Halima; Ali Wali
Publication year: 2022
Pagination: pp 158 - 167
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Intelligence artificielle
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] enquête
[Termes IGN] personne malvoyante
[Termes IGN] réalité augmentée
[Termes IGN] reconnaissance de caractères
[Termes IGN] signalisation routière
[Termes IGN] visualisation
Abstract: (author) Reading traffic signs while driving is a very difficult everyday task for visually impaired people and people with visual problems, and misreading a traffic sign can have very serious consequences. Arabic script is particularly hard to recognize and display. In this context, we look for an effective solution that eliminates errors whose consequences can sometimes be fatal. This article aims to read traffic signs with Arabic text correctly using augmented reality technology. Our system is composed of three modules: the first is text detection and recognition; the second is text visualization; the third is text-to-speech conversion. The system gives the user two outputs: a visual one, with much-improved and enhanced text, and an audio one, in which the text is read aloud. The system is practical and effective for daily life. To assess the effectiveness of our work, we conducted a survey among a group of visually impaired people, asking their opinion on the use of our application. The results were good for most participants.
Record number: A2023-010
Author affiliation: non IGN
Subject area: INFORMATIQUE
Nature: Article
DOI: 10.1016/j.procs.2022.09.048
Online publication date: 19/10/2022
Online: https://doi.org/10.1016/j.procs.2022.09.048
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102119
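The abstract describes a three-module architecture: recognition, visualization, and text-to-speech, feeding two output channels. A minimal sketch of how such stages might be chained is shown below; the class and callable names are hypothetical placeholders, not the paper's API, and real detectors or TTS engines would be plugged in as the callables.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SignReadingPipeline:
    """Chains the three stages described in the abstract:
    text recognition -> text visualization -> text-to-speech."""
    recognize: Callable[[bytes], List[str]]      # camera frame -> text lines
    visualize: Callable[[List[str]], List[str]]  # enhanced, overlay-ready text
    speak: Callable[[str], None]                 # audio output channel

    def process_frame(self, frame: bytes) -> List[str]:
        lines = self.recognize(frame)
        enhanced = self.visualize(lines)
        for line in enhanced:
            self.speak(line)      # second output: the text read aloud
        return enhanced           # first output: the visual overlay
```

With stub callables (e.g. a lambda returning recognized lines and a list-appending `speak`), the same frame yields both the enhanced overlay text and the spoken lines.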
in Procedia Computer Science > vol 207 (2022) . - pp 158 - 167 [article]

Experiencing virtual geographic environment in urban 3D participatory e-planning: A user perspective / Thibaud Chassin in Landscape and Urban Planning, vol 224 (August 2022)
[article]
Title: Experiencing virtual geographic environment in urban 3D participatory e-planning: A user perspective
Document type: Article/Communication
Authors: Thibaud Chassin; Jens Ingensand; Sidonie Christophe; Guillaume Touya
Publication year: 2022
Projects: 3-projet - voir note / Touya, Guillaume
Pagination: n° 104432
General note: Bibliography. This study was partly funded by the Computers & Geosciences Research Scholarships co-sponsored by Elsevier and the International Association for Mathematical Geosciences (IAMG).
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique
[Termes IGN] approche participative
[Termes IGN] cognition
[Termes IGN] environnement géographique virtuel
[Termes IGN] projet urbain
[Termes IGN] urbanisme
[Termes IGN] utilisateur
[Termes IGN] visualisation 3D
Abstract: (author) The adoption of technology in urban participatory planning, with tools such as Virtual Geographic Environments (VGE), promises broader engagement of urban dwellers, which should ultimately lead to the creation of better cities. However, authorities and urban experts are hesitant to endorse these tools in their practices. Indeed, several parameters must be considered carefully in the design of a VGE; if misjudged, their impact can damage the participatory approach and the related urban project. The objective of this study is to engage participants (N = 107) in common tasks conducted in participatory sessions, in order to evaluate users' performance when manipulating a VGE. We aimed to assess three crucial parameters: (1) the VGE representation, (2) the participants' idiosyncrasies, and (3) the nature of the VGE format. The results demonstrate that these parameters do not affect the same aspects of users' performance in terms of time, inputs, and correctness. The VGE representation affects only the time needed to fulfil a task. The participants' idiosyncrasies, namely age, gender, and frequency of 3D use, also alter time, but spatial abilities seem to affect all characteristics of users' performance, including correctness. Lastly, the nature of the VGE format significantly alters the time and correctness of users' interactions. The results of this study highlight concerns about inadequacies in current VGE practices in participatory sessions. Moreover, we suggest guidelines to improve the design of VGE, which could enhance urban participatory planning processes, in order to create better cities.
Record number: A2022-439
Author affiliation: UGE-LASTIG+Ext (2020- )
Subject area: GEOMATIQUE/INFORMATIQUE/URBANISME
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.landurbplan.2022.104432
Online publication date: 18/04/2022
Online: https://doi.org/10.1016/j.landurbplan.2022.104432
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100758
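The study above measures users' performance along three dimensions (time, inputs, correctness) across experimental conditions. A minimal sketch of that kind of per-condition aggregation is shown below; the field names and conditions are hypothetical examples, not the study's actual variables.

```python
from collections import defaultdict
from statistics import mean

def performance_by_condition(trials):
    """Aggregate the three performance measures used in the study
    (time, inputs, correctness) per experimental condition.
    Each trial is a dict: condition, time, inputs, correct (0/1)."""
    groups = defaultdict(list)
    for t in trials:
        groups[t["condition"]].append(t)
    return {
        cond: {
            "time": mean(t["time"] for t in ts),
            "inputs": mean(t["inputs"] for t in ts),
            "correctness": mean(t["correct"] for t in ts),
        }
        for cond, ts in groups.items()
    }
```

Comparing the resulting per-condition means is the first step before significance testing of the kind the study reports.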
in Landscape and Urban Planning > vol 224 (August 2022) . - n° 104432 [article]

Building Information Modelling (BIM) for property valuation: A new approach for Turkish Condominium Ownership / Nida Celik Simsek in Survey review, vol 54 n° 384 (May 2022)
[article]
Title: Building Information Modelling (BIM) for property valuation: A new approach for Turkish Condominium Ownership
Document type: Article/Communication
Authors: Nida Celik Simsek; Bayram Uzun
Publication year: 2022
Pagination: pp 187 - 208
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Cadastre étranger
[Termes IGN] cadastre étranger
[Termes IGN] évaluation foncière
[Termes IGN] lever cadastral
[Termes IGN] modélisation 3D du bâti BIM
[Termes IGN] propriété foncière
[Termes IGN] Turquie
[Termes IGN] valeur économique
[Termes IGN] visualisation 3D
Abstract: (author) In Turkey, calculating the factors that affect the value of a building's condominium units from 2D architectural project data leads to problems, one of the biggest being the land-share calculation. The aim of this study was to establish a mechanism by which the properties of the value-affecting factors can be determined mathematically, leading to a value-based land share. For this purpose, the study used a 3D virtual Building Information Modelling (BIM) model. The value factors and their weights were determined via a questionnaire, a 3D BIM model of the structure was created, the metric values of the factors were computed, and the nominal values of the condominium units were calculated. This study demonstrates that a building that does not yet exist in the real world can be represented in a virtual environment, providing a comparable information source for the expert who carries out the valuation process.
Record number: A2022-354
Author affiliation: non IGN
Subject area: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/00396265.2021.1905251
Online publication date: 02/04/2021
Online: https://doi.org/10.1080/00396265.2021.1905251
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100552
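The abstract describes computing each unit's nominal value from questionnaire-derived factor weights and BIM-measured metrics, then deriving a value-based land share. The arithmetic reduces to a weighted sum followed by normalization, sketched below; the factor names and weights are illustrative assumptions, not the paper's actual questionnaire results.

```python
def nominal_value(metrics, weights):
    """Weighted sum of a unit's measured value factors
    (metrics from the BIM model, weights from the questionnaire)."""
    return sum(weights[factor] * value for factor, value in metrics.items())

def land_shares(units, weights):
    """Value-based land share: each unit's nominal value divided by
    the building total, so the shares sum to 1."""
    values = {name: nominal_value(m, weights) for name, m in units.items()}
    total = sum(values.values())
    return {name: v / total for name, v in values.items()}
```

Two units with identical factor metrics therefore get equal land shares, which is the fairness property the value-based approach is after.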
in Survey review > vol 54 n° 384 (May 2022) . - pp 187 - 208 [article]

- High-performance adaptive texture streaming and rendering of large 3D cities / Alex Zhang in The Visual Computer, vol 38 n° 4 (April 2022)
- A l'aide ! Je me suis perdu en zoomant / Guillaume Touya in Cartes & Géomatique, n° 247-248 (mars-juin 2022)
- 3D geovisualization for visual analysis of urban climate / Sidonie Christophe in Cybergeo, European journal of geography, vol 2022 ([01/01/2022])
- ALEGORIA: Joint multimodal search and spatial navigation into the geographic iconographic heritage / Florent Geniet (2022)
- Explorer la théorie des ancres et les espaces cognitifs dans la cartographie multi-échelle / Maieul Gruget (2022)
- Digitizing and visualizing sketch map data: A semi-structured approach to qualitative GIS / Christopher Prener in Cartographica, vol 56 n° 4 (Winter 2021)
- A web GIS-based integration of 3D digital models with linked open data for cultural heritage exploration / Ikrom Nishanbaev in ISPRS International journal of geo-information, vol 10 n° 10 (October 2021)
- Mise en place d'un dispositif expérimental numérique pour l'enseignement des risques naturels avec le jeu vidéo Minetest / Jérôme Staub in Cartes & Géomatique, n° 245-246 (septembre - décembre 2021)
- Digital camera calibration for cultural heritage documentation: the case study of a mass digitization project of religious monuments in Cyprus / Evagoras Evagorou in European journal of remote sensing, vol 54 sup 1 (2021)