Visualization of 3D property data and assessment of the impact of rendering attributes / Stefan Seipel in Journal of Geovisualization and Spatial Analysis, vol 4 n° 2 (December 2020)
Title: Visualization of 3D property data and assessment of the impact of rendering attributes
Document type: Article/Communication
Authors: Stefan Seipel; Martin Andrée; Karolina Larsson; et al.
Publication year: 2020
Article pages: n° 23
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN descriptor terms] non-spatial attribute
[IGN descriptor terms] 3D cadastre
[IGN descriptor terms] foreign cadastre
[IGN descriptor terms] barycentric classification
[IGN descriptor terms] colour (map design)
[IGN descriptor terms] similarity measure
[IGN descriptor terms] land ownership
[IGN descriptor terms] map design
[IGN descriptor terms] rendering (geovisualisation)
[IGN descriptor terms] salience
[IGN descriptor terms] 3D scene
[IGN descriptor terms] Stockholm (Sweden)
[IGN descriptor terms] cartographic visualisation
[IGN subject headings] Geovisualisation
Abstract: (author) Visualizations of 3D cadastral information incorporating both intrinsically spatial and non-spatial information are examined here. The design of a visualization prototype is linked to real-case 3D property information. In an interview with domain experts, the functional and visual features of the prototype are assessed. The choice of rendering attributes was identified as an important aspect for further analysis. A computational approach to the systematic assessment of the consequences of different graphical design choices is proposed. This approach incorporates a colour similarity metric, visual saliency maps, and k-nearest-neighbour (kNN) classification to estimate the risk of confusing or overlooking relevant elements in a visualization. The results indicate that transparency is not an independent visual variable, as it affects the apparent colour of 3D objects and makes them inherently more difficult to distinguish. Transparency also influences the visual saliency of objects in a scene. The proposed analytic approach was useful for visualization design and revealed that the conscious use of graphical attributes, such as combinations of colour, transparency, and line styles, can improve the saliency of objects in a 3D scene.
Record number: A2020-796
Authors' affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s41651-020-00063-6
Online publication date: 26/10/2020
Online: https://doi.org/10.1007/s41651-020-00063-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96612
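The risk-estimation idea in this abstract (a colour similarity metric plus kNN classification, with transparency altering the apparent colour of 3D objects) can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the palette, the alpha value, the class names, and the use of plain Euclidean RGB distance are all assumptions made here.

```python
import math

def composite(fg, alpha, bg):
    """Apparent colour of a semi-transparent object: alpha-blend fg over bg."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

def colour_distance(c1, c2):
    """Euclidean colour difference (CIE76-style if inputs were Lab values)."""
    return math.dist(c1, c2)

def knn_confusion(sample, labelled, k=3):
    """Class assigned to `sample` by a k-nearest-neighbour vote over
    (colour, class) reference pairs; a mismatch with the object's true
    class signals a risk of confusing elements in the visualization."""
    nearest = sorted(labelled, key=lambda lc: colour_distance(sample, lc[0]))[:k]
    votes = {}
    for _, cls in nearest:
        votes[cls] = votes.get(cls, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical legend colours for two property classes
palette = [((200, 40, 40), "parcel"), ((210, 60, 50), "parcel"),
           ((40, 40, 200), "easement"), ((50, 60, 210), "easement")]

# A semi-transparent "easement" volume seen over a light background:
# transparency washes out the blue towards the background colour
apparent = composite((40, 40, 200), alpha=0.3, bg=(240, 240, 240))
print(knn_confusion(apparent, palette))  # which class the apparent colour reads as
```

Raising the transparency (lowering `alpha`) pulls the apparent colour towards the background, shrinking the distances between classes — which is the effect the abstract describes.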
in Journal of Geovisualization and Spatial Analysis > vol 4 n° 2 (December 2020) . - n° 23

A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
Title: A novel deep network and aggregation model for saliency detection
Document type: Article/Communication
Authors: Ye Liang; Hongzhe Liu; Nan Ma
Publication year: 2020
Article pages: pp 1883 - 1895
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] network architecture
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] deconvolution
[IGN descriptor terms] feature extraction
[IGN descriptor terms] salience
Abstract: (author) Recent deep learning-based methods for saliency detection have proved the effectiveness of integrating features at different scales. They usually design complex network architectures, e.g., multiple networks, to explore the multi-scale information of images, which is expensive in computation and memory. Feature maps produced by different subsampling convolutional layers have different spatial resolutions; therefore, they can be used as multi-scale features to reduce these costs. In this paper, by exploiting the in-network feature hierarchy of convolutional networks, we propose a novel multi-scale network for saliency detection (MSNSD) consisting of three modules, i.e., bottom-up feature extraction, top-down feature connection, and multi-scale saliency prediction. Moreover, to further boost the performance of MSNSD, an input-image-aware saliency aggregation method is proposed based on ridge regression, which combines MSNSD with some well-performing handcrafted shallow models. Extensive experiments on several benchmarks show that the proposed MSNSD outperforms state-of-the-art saliency methods with lower computational and memory complexity. Meanwhile, our aggregation method is an effective and efficient way to combine deep and shallow models and make them complementary to each other.
Record number: A2020-601
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-019-01781-9
Online publication date: 09/12/2019
Online: https://doi.org/10.1007/s00371-019-01781-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95952
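The ridge-regression aggregation step from this abstract — learning weights that combine a deep saliency map with a handcrafted shallow one — can be illustrated in miniature. The scores below are invented, and the closed-form 2×2 solve of (XᵀX + λI)w = Xᵀy stands in for a general ridge solver; it is a sketch under those assumptions, not the paper's method.

```python
def ridge_weights_2(x1, x2, y, lam=0.1):
    """Ridge-regression weights for fusing two saliency predictors:
    solve (X^T X + lam*I) w = X^T y in closed form for the 2x2 case."""
    a = sum(v * v for v in x1) + lam          # x1.x1 + lam
    b = sum(u * v for u, v in zip(x1, x2))    # x1.x2
    d = sum(v * v for v in x2) + lam          # x2.x2 + lam
    p = sum(u * t for u, t in zip(x1, y))     # x1.y
    q = sum(v * t for v, t in zip(x2, y))     # x2.y
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

# Hypothetical per-pixel scores from a deep model and a handcrafted
# shallow model, with ground-truth saliency `truth`
deep = [0.9, 0.1, 0.8, 0.2]
shallow = [0.7, 0.3, 0.6, 0.1]
truth = [1.0, 0.0, 1.0, 0.0]

w_deep, w_shallow = ridge_weights_2(deep, shallow, truth)
fused = [w_deep * u + w_shallow * v for u, v in zip(deep, shallow)]
```

The regularizer `lam` keeps the solve stable when the two models' scores are strongly correlated, which is exactly the situation when fusing predictors of the same quantity.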
in The Visual Computer > vol 36 n° 9 (September 2020) . - pp 1883 - 1895

Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding / Weihua Dong in Cartography and Geographic Information Science, vol 47 n° 3 (May 2020)
Title: Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding
Document type: Article/Communication
Authors: Weihua Dong; Tong Qin; Hua Liao
Publication year: 2020
Article pages: pp 229 - 243
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN descriptor terms] visual analysis
[IGN descriptor terms] interpretation (psychology)
[IGN descriptor terms] eye tracking
[IGN descriptor terms] landmark
[IGN descriptor terms] questionnaire
[IGN descriptor terms] salience
[IGN descriptor terms] indoor scene
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] statistical test
[IGN descriptor terms] vision
[IGN descriptor terms] computer vision
[IGN subject headings] Geovisualisation
Abstract: (author) Landmark visual salience (characterized by features that contrast with their surroundings and visual peculiarities) and semantic salience (characterized by features with unusual or important meaning and content in the environment) are two important factors that affect an individual's visual attention during wayfinding. However, empirical evidence regarding which factor dominates visual guidance during indoor wayfinding is rare, especially in real-world environments. In this study, we assumed that semantic salience dominates the guidance of visual attention, meaning that semantic salience would correlate with participants' fixations more significantly than visual salience. Notably, in previous studies, semantic salience was shown to guide visual attention in static images or familiar scenes in a laboratory environment. To validate this assumption, we first collected the eye movement data of 22 participants as they found their way through a building. We then computed landmark visual and semantic salience using computer vision models and questionnaires, respectively. Finally, we conducted correlation tests to verify our assumption. The results failed to validate our assumption and show that the role of salience in visual guidance during a real-world wayfinding process differs from its role in perceiving static images or scenes in a laboratory. Visual salience dominates visual attention during indoor wayfinding, but the roles of salience in visual guidance are mixed across different landmark classes and tasks. The results provide new evidence for understanding how pedestrians visually interpret landmark information during real-world indoor wayfinding.
Record number: A2020-169
Authors' affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/15230406.2019.1697965
Online publication date: 18/12/2019
Online: https://doi.org/10.1080/15230406.2019.1697965
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94841
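The correlation tests this abstract describes — comparing how each salience type correlates with fixation behaviour — reduce to computing per-landmark correlation coefficients. A minimal sketch follows; all numbers are invented for illustration and are not the study's data, and a plain Pearson coefficient stands in for whichever statistical test the authors used.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between per-landmark salience scores and
    a fixation measure (e.g. fixation counts per landmark)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five landmarks
visual_salience   = [0.8, 0.2, 0.6, 0.4, 0.9]   # from a computer vision model
semantic_salience = [0.3, 0.7, 0.5, 0.6, 0.2]   # from questionnaires
fixation_counts   = [12, 3, 9, 5, 14]           # from eye tracking

r_visual = pearson(visual_salience, fixation_counts)
r_semantic = pearson(semantic_salience, fixation_counts)
```

Comparing `r_visual` against `r_semantic` is the study's core question: whichever salience type correlates more strongly with fixations is the one guiding visual attention.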
in Cartography and Geographic Information Science > vol 47 n° 3 (May 2020) . - pp 229 - 243

Copies (1):
Barcode: 032-2020031 — Shelfmark: SL — Medium: journal — Location: Centre de documentation — Section: reading-room journals — Availability: available

Saliency-guided single shot multibox detector for target detection in SAR images / Lan Du in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)
Title: Saliency-guided single shot multibox detector for target detection in SAR images
Document type: Article/Communication
Authors: Lan Du; Lu Li; Di Wei; et al.
Publication year: 2020
Article pages: pp 3366 - 3376
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Radar image processing and applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] target detection
[IGN descriptor terms] data fusion
[IGN descriptor terms] speckled radar image
[IGN descriptor terms] salience
Abstract: (author) The single shot multibox detector (SSD), a proposal-free method based on a convolutional neural network (CNN), has recently been proposed for target detection and has found applications in synthetic aperture radar (SAR) images. Moreover, the saliency information reflected in a saliency map can highlight the target of interest while suppressing clutter, which benefits scene understanding. Therefore, in this article, we propose a saliency-guided SSD (S-SSD) for target detection in SAR images, in which we integrate saliency into the SSD network, not only to suggest where to focus but also to improve the representation capability in complex scenes. The proposed S-SSD contains two separate convolutional backbone subnetworks: one takes the original SAR image as input to extract features, and the other takes the corresponding saliency map, obtained with the modified Itti method, as input to acquire refined saliency information under supervision. In addition, a dense connection structure, instead of the plain structure of the original SSD, is applied in the two convolutional backbones to exploit multiscale information with fewer parameters. Then, to integrate saliency information and guide the network to emphasize informative regions, multilevel fusion modules merge the two streams into a unified framework, so that the whole network is trained jointly end to end. Finally, convolutional predictors are used to predict targets. Experimental results on the miniSAR real data demonstrate that the proposed S-SSD achieves better detection performance than state-of-the-art methods.
Record number: A2020-237
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2953936
Online publication date: 11/12/2019
Online: https://doi.org/10.1109/TGRS.2019.2953936
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94983
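The saliency map that guides the S-SSD network is, at its core, a centre–surround contrast map in the Itti tradition. The sketch below is a drastically simplified stand-in for the modified Itti method named in the abstract — a single centre–surround difference on one intensity channel, with an invented 4×4 "SAR patch" — not the paper's pipeline.

```python
def center_surround(img, r=1):
    """Itti-style centre-surround contrast on a 2-D intensity grid:
    each pixel's saliency is |value - mean of its local neighbourhood|."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [img[y][x]
                     for y in range(max(0, i - r), min(h, i + r + 1))
                     for x in range(max(0, j - r), min(w, j + r + 1))
                     if (y, x) != (i, j)]
            sal[i][j] = abs(img[i][j] - sum(neigh) / len(neigh))
    return sal

# A single bright "target" pixel in homogeneous clutter (hypothetical patch)
patch = [[0.1, 0.1, 0.1, 0.1],
         [0.1, 0.9, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1]]
sal = center_surround(patch)
```

The bright pixel contrasts sharply with its surround and gets the highest saliency, which is the "highlight the target, suppress clutter" behaviour the abstract exploits.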
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 5 (May 2020) . - pp 3366 - 3376

Title: Saliency and Burstiness for Feature Selection in CBIR
Document type: Article/Communication
Authors: Kamel Guissous; Valérie Gouet-Brunet
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2020
Projects: 2-No accessible information - article not open access
Conference: EUVIP 2019, 8th European Workshop on Visual Information Processing, 28/10/2019-31/10/2019, Rome, Italy, IEEE Proceedings
Extent: pp 111 - 116
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Image processing
[IGN descriptor terms] object-based image analysis
[IGN descriptor terms] visual analysis
[IGN descriptor terms] feature extraction
[IGN descriptor terms] content-based image retrieval
[IGN descriptor terms] salience
[IGN descriptor terms] 3D salient area
Abstract: (author) The paper addresses the problem of visual feature selection in content-based image retrieval (CBIR). We study two strategies: the first uses visual saliency, selecting the most salient features of the image; the second exploits burstiness, detecting and processing the repeated visual elements in the image. To detect and describe the visual features in images, we rely on a deep local-features approach based on a convolutional neural network. The two strategies are evaluated for image retrieval on different datasets according to two criteria: retrieval quality and the volume of manipulated features.
Record number: C2019-027
Authors' affiliation: LaSTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/EUVIP47703.2019.8946126
Online publication date: 02/01/2020
Online: https://ieeexplore.ieee.org/document/8946126
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94521

Extracting urban landmarks from geographical datasets using a random forests classifier / Yue Lin in International journal of geographical information science IJGIS, vol 33 n° 12 (December 2019)
Saliency-guided deep neural networks for SAR image change detection / Jie Geng in IEEE Transactions on geoscience and remote sensing, vol 57 n° 10 (October 2019)
Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
PermalinkPermalinkPermalink