Descriptor
Documents available in this category (81)



Foreground-aware refinement network for building extraction from remote sensing images / Zhang Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 11 (November 2022)
Title: Foreground-aware refinement network for building extraction from remote sensing images
Document type: Article/Communication
Authors: Zhang Yan; Wang Xiangyu; Zhang Zhongwei; et al.
Publication year: 2022
Pages: pp 731 - 738
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Optical image processing
[IGN terms] visual analysis
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] region detection
[IGN terms] building detection
[IGN terms] feature extraction
[IGN terms] RGB image
[IGN terms] dataset
Abstract: (author) To extract buildings accurately, we propose a foreground-aware refinement network for building extraction. In particular, to reduce false positives for buildings, we design the foreground-aware module using the attention gate block, which effectively suppresses non-building features and enhances the sensitivity of the model to buildings. In addition, we introduce the reverse attention mechanism in the detail refinement module. Specifically, this module guides the network to learn to supplement the missing details of the buildings by erasing the currently predicted building regions, and achieves more accurate and complete building extraction. To further optimize the network, we design a hybrid loss, which combines BCE loss and SSIM loss, to supervise network learning at both the pixel and structure levels. Experimental results demonstrate the superiority of our network over state-of-the-art methods in terms of both quantitative metrics and visual quality.
Record number: A2022-842
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.21-00081R2
Online publication date: 01/11/2022
Online: https://doi.org/10.14358/PERS.21-00081R2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102055
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 11 (November 2022) . - pp 731 - 738
Copies (1): barcode 105-2022111, call number SL, support Journal, location Centre de documentation, section Revues en salle, available
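
The hybrid loss mentioned in the abstract above (BCE combined with SSIM, supervising both pixel-level and structural agreement) can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the equal weighting alpha, the uniform 11x11 window (instead of the Gaussian window of standard SSIM), and the PyTorch framing are assumptions.

```python
import torch.nn.functional as F

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified SSIM over local statistics from a uniform window
    # (the canonical formulation uses a Gaussian window).
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_p = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim_map.mean()  # 1 - SSIM, so lower is better

def hybrid_loss(pred, target, alpha=0.5):
    # pred, target: (N, 1, H, W) building probability maps in [0, 1].
    bce = F.binary_cross_entropy(pred, target)                    # pixel-level term
    return alpha * bce + (1.0 - alpha) * ssim_loss(pred, target)  # structural term
```
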

Interactive visual analytics of moving passenger flocks using massive smart card data / Tong Zhang in Cartography and Geographic Information Science, Vol 49 n° 4 (July 2022)
Title: Interactive visual analytics of moving passenger flocks using massive smart card data
Document type: Article/Communication
Authors: Tong Zhang; Wei He; Jing Huang; et al.
Publication year: 2022
Pages: pp 354 - 369
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN terms] spatial analysis
[IGN terms] visual analysis
[IGN terms] smart card
[IGN terms] big data
[IGN terms] urban mobility
[IGN terms] moving object
[IGN terms] Shenzhen
[IGN terms] trip (mobility)
[IGN subject headings] Geovisualization
Abstract: (author) Understanding urban mobility patterns is constrained by our limited capabilities to extract and visualize spatio-temporal regularities from large amounts of mobility data. Moving flocks, defined as groups of people traveling together over a pre-defined time duration, can reveal collective moving patterns at aggregated spatio-temporal scales, thereby facilitating the discovery of urban mobility structure and travel demand patterns. In this study, we extend classical trajectory-oriented flock mining algorithms to discover moving flocks of transit passengers, accounting for the constraints of multi-modal transit networks. We develop a map-centered visual analytics approach by integrating the flock mining algorithm with interactive visualization designs of discovered flocks. Novel interactive visualizations are designed and implemented to support the exploration and analyses of discovered moving flocks at different spatial and temporal scales. The visual analytics approach is evaluated using a real-world smart card dataset collected in Shenzhen City, China, validating its applicability in capturing and mapping dynamic mobility patterns over a large metropolitan area.
Record number: A2022-480
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1080/15230406.2022.2039775
Online publication date: 09/03/2022
Online: https://doi.org/10.1080/15230406.2022.2039775
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100886
in Cartography and Geographic Information Science > Vol 49 n° 4 (July 2022) . - pp 354 - 369
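
The flock mining described in the abstract above (groups of passengers travelling together through the transit network) can be illustrated with a deliberately naive sketch. The input format (tap records as (passenger_id, stop_id, minute-of-day) tuples) and all thresholds are assumptions, and the multi-modal network constraints handled by the paper are ignored here.

```python
from collections import defaultdict
from typing import Dict, FrozenSet, List, Tuple

Tap = Tuple[str, str, int]  # (passenger_id, stop_id, minute of day) -- assumed format

def mine_flocks(taps: List[Tap], min_size: int = 3, min_bins: int = 3,
                bin_minutes: int = 10) -> List[FrozenSet[str]]:
    """Naive flock detection: passengers observed at the same stop in the same
    time bin are co-moving candidates; a flock is a group that co-occurs in at
    least `min_bins` bins, i.e. travels together long enough."""
    buckets: Dict[Tuple[str, int], set] = defaultdict(set)
    for pid, stop, minute in taps:
        buckets[(stop, minute // bin_minutes)].add(pid)

    co_occurrences: Dict[FrozenSet[str], int] = defaultdict(int)
    for members in buckets.values():
        if len(members) >= min_size:
            co_occurrences[frozenset(members)] += 1

    return [group for group, count in co_occurrences.items() if count >= min_bins]
```
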

Investigating the role of image retrieval for visual localization / Martin Humenberger in International journal of computer vision, vol 130 n° 7 (July 2022)
Title: Investigating the role of image retrieval for visual localization
Document type: Article/Communication
Authors: Martin Humenberger; Yohann Cabon; Noé Pion; et al.
Publication year: 2022
Pages: 1811 - 1836
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Optical image processing
[IGN terms] visual analysis
[IGN terms] image database
[IGN terms] pose estimation
[IGN terms] blur
[IGN terms] image-based localization
[IGN terms] vision-based localization
[IGN terms] landmark
[IGN terms] localization accuracy
[IGN terms] RANSAC (algorithm)
[IGN terms] ground truth
[IGN terms] structure-from-motion
[IGN terms] computer vision
Abstract: (author) Visual localization, i.e., camera pose estimation in a known scene, is a core component of technologies such as autonomous driving and augmented reality. State-of-the-art localization approaches often rely on image retrieval techniques for one of two purposes: (1) provide an approximate pose estimate or (2) determine which parts of the scene are potentially visible in a given query image. It is common practice to use state-of-the-art image retrieval algorithms for both of them. These algorithms are often trained to retrieve the same landmark under a large range of viewpoint changes, a goal that often differs from the requirements of visual localization. In order to investigate the consequences for visual localization, this paper focuses on understanding the role of image retrieval for multiple visual localization paradigms. First, we introduce a novel benchmark setup and compare state-of-the-art retrieval representations on multiple datasets using localization performance as the metric. Second, we investigate several definitions of “ground truth” for image retrieval. Using these definitions as upper bounds for the visual localization paradigms, we show that there is still significant room for improvement. Third, using these tools and in-depth analysis, we show that retrieval performance on classical landmark retrieval or place recognition tasks correlates with localization performance only for some but not all paradigms. Finally, we analyze the effects of blur and dynamic scenes in the images. We conclude that there is a need for retrieval approaches specifically designed for localization paradigms. Our benchmark and evaluation protocols are available at https://github.com/naver/kapture-localization.
Record number: A2022-538
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s11263-022-01615-7
Online publication date: 25/05/2022
Online: https://doi.org/10.1007/s11263-022-01615-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101070
in International journal of computer vision > vol 130 n° 7 (July 2022) . - 1811 - 1836
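
Retrieval purpose (1) from the abstract above, using image retrieval to obtain an approximate pose estimate, reduces in its simplest form to a nearest-neighbour lookup over global descriptors. The sketch below is schematic only (orientations are ignored and descriptors are assumed L2-normalised) and is not the paper's benchmark code.

```python
import numpy as np

def coarse_pose_from_retrieval(query_desc: np.ndarray, db_descs: np.ndarray,
                               db_positions: np.ndarray, k: int = 5):
    """Approximate the query camera position from the top-k retrieved images.

    query_desc:   (D,) global descriptor of the query image, L2-normalised.
    db_descs:     (N, D) descriptors of the database images.
    db_positions: (N, 3) camera positions of the database images.
    """
    similarities = db_descs @ query_desc   # cosine similarity for unit vectors
    top_k = np.argsort(-similarities)[:k]  # indices of the best matches
    return db_positions[top_k].mean(axis=0), top_k
```
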

An empirical study on the effects of temporal trends in spatial patterns on animated choropleth maps / Paweł Cybulski in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
Title: An empirical study on the effects of temporal trends in spatial patterns on animated choropleth maps
Document type: Article/Communication
Authors: Paweł Cybulski
Publication year: 2022
Pages: n° 273
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN terms] cluster analysis
[IGN terms] visual analysis
[IGN terms] choropleth map
[IGN terms] animated cartography
[IGN terms] map reading
[IGN terms] eye tracking
[IGN terms] pattern recognition
[IGN terms] cartographic visualization
[IGN subject headings] Cartology
Abstract: (author) Animated cartographic visualization incorporates the concept of geomedia presented in this Special Issue. The presented study aims to examine the effectiveness of spatial pattern and temporal trend recognition on animated choropleth maps. In a controlled laboratory experiment with participants and eye tracking, fifteen animated maps were used to show different spatial patterns and temporal trends. The participants’ task was to correctly detect the patterns and trends on a choropleth map. The study results show that effective spatial pattern and temporal trend recognition on a choropleth map is related to participants’ visual behavior. Visual attention clustered in the central part of the choropleth map supports effective spatio-temporal relationship recognition. The larger the area covered by the fixation cluster, the higher the probability of correct temporal trend and spatial pattern recognition. However, animated choropleth maps are more suitable for presenting temporal trends than spatial patterns. Understanding the difficulty in the correct recognition of spatio-temporal relationships might be a reason for implementing techniques that support effective visual search, such as highlighting, cartographic redundancy, or interactive tools. For end users, the presented study reveals the necessity of applying a specific visual strategy. Focusing on the central part of the map is the most effective strategy for the recognition of spatio-temporal relationships.
Record number: A2022-358
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi11050273
Online publication date: 20/04/2022
Online: https://doi.org/10.3390/ijgi11050273
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100571
in ISPRS International journal of geo-information > vol 11 n° 5 (May 2022) . - n° 273
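
The key quantity in the abstract above, the area covered by a fixation cluster, has to be computed from raw fixation coordinates before it can be related to recognition success. One plausible way to measure it, shown purely as an illustration (the paper does not specify this exact computation), is the convex-hull area of the cluster's fixation points.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fixation_cluster_area(fixations: np.ndarray) -> float:
    """Area covered by a cluster of fixations, taken here as the area of their
    convex hull, in the units of the coordinates (e.g. pixels^2).

    fixations: (N, 2) array of fixation x/y positions with N >= 3.
    """
    hull = ConvexHull(fixations)
    return hull.volume  # for 2-D input, ConvexHull.volume is the enclosed area
```
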

Detecting individuals' spatial familiarity with urban environments using eye movement data / Hua Liao in Computers, Environment and Urban Systems, vol 93 (April 2022)
Title: Detecting individuals' spatial familiarity with urban environments using eye movement data
Document type: Article/Communication
Authors: Hua Liao; Wendi Zhao; Changbo Zhang; et al.
Publication year: 2022
Pages: n° 101758
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN terms] visual analysis
[IGN terms] machine learning
[IGN terms] random forest classification
[IGN terms] pedestrian navigation
[IGN terms] eye tracking
[IGN terms] location-based service
[IGN terms] urban area
[IGN subject headings] Geovisualization
Abstract: (author) The spatial familiarity of environments is an important high-level user context for location-based services (LBS). Knowing users' level of familiarity with the environment is helpful for enabling context-aware LBS that can automatically adapt information services according to users' familiarity with the environment. Unlike state-of-the-art studies that used questionnaires, sketch maps, mobile phone positioning (GPS) data, and social media data to measure spatial familiarity, this study explored the potential of a new type of sensory data - eye movement data - to infer users' spatial familiarity of environments using a machine learning approach. We collected 38 participants' eye movement data when they were performing map-based navigation tasks in familiar and unfamiliar urban environments. We trained and cross-validated a random forest classifier to infer whether the users were familiar or unfamiliar with the environments (i.e., binary classification). By combining basic statistical features and fixation semantic features, we achieved a best accuracy of 81% in a 10-fold classification and 70% in the leave-one-task-out (LOTO) classification. We found that the pupil diameter, fixation dispersion, saccade duration, and fixation count and duration on the map were the most important features for detecting users' spatial familiarity. Our results indicate that detecting users' spatial familiarity from eye tracking data is feasible in map-based navigation and only a few seconds (e.g., 5 s) of eye movement data is sufficient for such detection. These results could be used to develop context-aware LBS that adapt their services to users' familiarity with the environments.
Record number: A2022-121
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1016/j.compenvurbsys.2022.101758
Online publication date: 21/01/2022
Online: https://doi.org/10.1016/j.compenvurbsys.2022.101758
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99663
in Computers, Environment and Urban Systems > vol 93 (April 2022) . - n° 101758
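
The classification setup described in the abstract above, a 10-fold cross-validated random forest over gaze features such as pupil diameter, fixation dispersion, saccade duration, and fixation count and duration, can be sketched with scikit-learn. Feature extraction from the raw eye movement data is assumed to have been done already, and the arrays below are placeholders, not data from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# X: one row per trial; columns stand in for features such as mean pupil
# diameter, fixation dispersion, saccade duration, fixation count and
# fixation duration on the map (placeholder values only).
X = rng.random((200, 5))
# y: 1 = familiar environment, 0 = unfamiliar environment (placeholder labels).
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean 10-fold accuracy: {scores.mean():.2f}")
```
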

Exploring scientific literature by textual and image content using DRIFT / Ximena Pocco in Computers and graphics, vol 103 (April 2022)
LiDAR-based method for analysing landmark visibility to pedestrians in cities: case study in Kraków, Poland / Krystian Pyka in International journal of geographical information science IJGIS, vol 36 n° 3 (March 2022)
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
3D geovisualization for visual analysis of urban climate / Sidonie Christophe in Cybergeo, European journal of geography, vol 2022 ([01/01/2022])
Effective triplet mining improves training of multi-scale pooled CNN for image retrieval / Federico Vaccaro in Machine Vision and Applications, vol 33 n° 1 (January 2022)
Disaster Image Classification by Fusing Multimodal Social Media Data / Zhiqiang Zou in ISPRS International journal of geo-information, vol 10 n° 10 (October 2021)
Mapping trajectories and flows: facilitating a human-centered approach to movement data analytics / Somayeh Dodge in Cartography and Geographic Information Science, vol 48 n° 4 (July 2021)
Eye tracking research in cartography: Looking into the future / Vassilios Krassanakis in ISPRS International journal of geo-information, vol 10 n° 6 (June 2021)
Semantic hierarchy emerges in deep generative representations for scene synthesis / Ceyuan Yang in International journal of computer vision, vol 129 n° 5 (May 2021)
Accurate assessment of protected area boundaries for land use planning using 3D GIS / Dilek Tezel in Geocarto international, vol 36 n° 1 ([01/01/2021])