Author detail
Documents available written by this author (5)
Semantic signatures for large-scale visual localization / Li Weng, in Multimedia tools and applications, vol. 80 no. 15 (June 2021)
Title: Semantic signatures for large-scale visual localization
Document type: Article/Communication
Authors: Li Weng, author; Valérie Gouet-Brunet, author; Bahman Soheilian, author
Publication year: 2021
Projects: THINGS2D0 / Gouet-Brunet, Valérie
Pages: pp. 22347-22372
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] semantic matching
[IGN terms] feature extraction
[IGN terms] digital image
[IGN terms] semantic information
[IGN terms] content-based image retrieval
[IGN terms] semantic segmentation
[IGN terms] urban area
Abstract: (authors) Visual localization is a useful alternative to standard localization techniques. It works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. These approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called "semantic signature" is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper in CBMI'18. A more efficient retrieval protocol is presented with additional experiment results.
Record number: A2021-787
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to ArXiv
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11042-020-08992-6
Online publication date: 07/05/2020
Online: https://doi.org/10.1007/s11042-020-08992-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95407
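The abstract above describes a semantic signature as the type and angle information of objects visible from a spatial location. A minimal sketch of one plausible encoding is given below; the object vocabulary, coordinates, and angular rounding are illustrative assumptions, not the paper's actual design:

```python
import math

# Hypothetical objects in a street scene, with planar positions
# relative to some map origin (illustrative values only).
objects = [
    {"type": "tree",         "x": 10.0, "y": 5.0},
    {"type": "streetlamp",   "x": -3.0, "y": 8.0},
    {"type": "traffic_sign", "x": 6.0,  "y": -4.0},
]

def semantic_signature(viewpoint, objects):
    """Summarize visible objects as (type, bearing-angle) pairs,
    sorted by angle -- one plausible reading of a 'semantic signature'."""
    vx, vy = viewpoint
    sig = []
    for obj in objects:
        # Bearing of the object as seen from the viewpoint, in [0, 360).
        angle = math.degrees(math.atan2(obj["y"] - vy, obj["x"] - vx)) % 360.0
        sig.append((obj["type"], round(angle, 1)))
    return sorted(sig, key=lambda t: t[1])

print(semantic_signature((0.0, 0.0), objects))
```

Sorting by angle makes the signature independent of the order in which objects are stored, so two signatures of the same place can be compared position by position.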
in Multimedia tools and applications > vol. 80 no. 15 (June 2021), pp. 22347-22372

SUMAC'21: Proceedings of the 3rd Workshop on Structuring and Understanding of Multimedia heritAge Contents / Valérie Gouet-Brunet (2021)
Title: SUMAC'21: Proceedings of the 3rd Workshop on Structuring and Understanding of Multimedia heritAge Contents
Document type: Conference proceedings
Authors: Valérie Gouet-Brunet, scientific editor; Margarita Khokhlova, scientific editor; Ronak Kosti, scientific editor; Li Weng, scientific editor
Publisher: New York [United States]: Association for Computing Machinery ACM
Publication year: 2021
Conference: SUMAC 2021, 3rd workshop on Structuring and Understanding of Multimedia heritAge Contents, 20/10/2021-24/10/2021, Chengdu, China
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image mining
[IGN terms] digital image
[IGN terms] digitized image
[IGN terms] cultural heritage
[IGN terms] content-based image retrieval
Record number: 13912
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Proceedings
nature-HAL: DirectOuvrColl/Actes
DOI: 10.1145/3475720
Online: https://doi.org/10.1145/3475720
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99053
Title: Semantic signatures for urban visual localization
Document type: Article/Communication
Authors: Li Weng, author; Bahman Soheilian, author; Valérie Gouet-Brunet, author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2018
Projects: 2-No accessible information - article not open access / Gouet-Brunet, Valérie
Conference: CBMI 2018, 16th International Conference on Content-Based Multimedia Indexing, 04/09/2018-06/09/2018, La Rochelle, France, Proceedings IEEE
Extent: 6 p.
Format: 21 x 30 cm
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] comparative analysis
[IGN terms] geolocated database
[IGN terms] feature extraction
[IGN terms] semantic information
[IGN terms] urban area
Abstract: (authors) Visual localization is a useful alternative to standard localization techniques. In a typical scenario, features are extracted from images captured by cameras and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. They offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called "semantic signature" is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications.
Record number: C2018-064
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/CBMI.2018.8516492
Online publication date: 15/11/2018
Online: https://doi.org/10.1109/CBMI.2018.8516492
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91430
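The abstract mentions several metrics for signature comparison and retrieval without detailing them in this record. A toy similarity measure over assumed (type, angle) signatures, matching same-type objects within an angular tolerance with circular wrap-around, could look like this (the tolerance and matching rule are invented for illustration):

```python
def signature_similarity(query, reference, tol=15.0):
    """Toy comparison metric (not the paper's actual metric): the
    fraction of query objects matched by a same-type reference object
    within an angular tolerance, accounting for circular wrap-around."""
    matched = 0
    used = set()  # each reference object may be matched at most once
    for qtype, qang in query:
        for i, (rtype, rang) in enumerate(reference):
            # Circular angular difference in [0, 180].
            diff = abs(qang - rang) % 360.0
            diff = min(diff, 360.0 - diff)
            if i not in used and qtype == rtype and diff <= tol:
                matched += 1
                used.add(i)
                break
    return matched / len(query) if query else 0.0
```

A coarser tolerance tolerates viewpoint noise at the cost of more false matches, which mirrors the accuracy/complexity trade-offs the abstract alludes to.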
Title: Cross-domain image localization by adaptive feature fusion
Document type: Article/Communication
Authors: Neelanjan Bhowmik, author; Li Weng, author; Valérie Gouet-Brunet, author; Bahman Soheilian, author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2017
Projects: POEME / Da Silva, Jean-Claude
Conference: JURSE 2017, Joint Urban Remote Sensing Event, 06/03/2017-08/03/2017, Lausanne, Switzerland, Proceedings IEEE
Extent: 4 p.
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] image matching
[IGN terms] development environment
[IGN terms] pose estimation
[IGN terms] geopositioning
[IGN terms] regression model
[IGN terms] content-based image retrieval
[IGN terms] geographic information retrieval
[IGN terms] similarity
Abstract: (authors) We address the problem of cross-domain image localization, i.e., the ability of estimating the pose of a landmark from visual content acquired under various conditions, such as old photographs, paintings, photos taken at a particular season, etc. We explore a 2D approach where the pose is estimated from geo-localized reference images that visually match the query image. This work focuses on the retrieval of similar images, which is a challenging task for images across different domains. We propose a Content-Based Image Retrieval (CBIR) framework that adaptively combines multiple image descriptions. A regression model is used to select the best feature combinations according to their spatial complementarity, globally for a whole dataset as well as adaptively for each given image. The framework is evaluated on different datasets and the experiments prove its advantage over classical retrieval approaches.
Record number: C2017-028
Author affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/JURSE.2017.7924572
Online publication date: 11/05/2017
Online: https://doi.org/10.1109/JURSE.2017.7924572
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89292
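The abstract describes a CBIR framework that adaptively combines multiple image descriptions, with a regression model selecting the best feature combinations. A hedged sketch of the late-fusion step follows; the feature names, similarity scores, and weights are invented for illustration, and in the actual framework the per-query weights would come from the trained regression model:

```python
# Hypothetical per-descriptor similarity scores between a query image
# and candidate geo-localized reference images.
candidates = {
    "img_001": {"sift_like": 0.82, "color_hist": 0.40, "cnn_global": 0.75},
    "img_002": {"sift_like": 0.30, "color_hist": 0.90, "cnn_global": 0.35},
}

# Weights a regression model might predict for this query, based on the
# features' spatial complementarity; fixed constants here for illustration.
weights = {"sift_like": 0.5, "color_hist": 0.1, "cnn_global": 0.4}

def fused_score(scores, weights):
    """Late fusion: weighted sum of per-feature similarity scores."""
    return sum(weights[f] * s for f, s in scores.items())

# Rank candidates by their fused score, best first.
ranking = sorted(candidates, key=lambda k: fused_score(candidates[k], weights),
                 reverse=True)
```

Because the weights can differ per query, a query from an atypical domain (say, a painting) can down-weight descriptors that are unreliable for that domain.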
Title: A feature fusion framework for hashing
Document type: Article/Communication
Authors: I-Hong Jhuo, author; Li Weng, author; Wen-Huang Cheng, author; D.T. Lee, author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2016
Conference: ICPR 2016, 23rd International Conference on Pattern Recognition, 04/12/2016-08/12/2016, Cancun, Mexico, Proceedings IEEE
Pages: pp. 2289-2294
General note: bibliography
Languages: English (eng)
Descriptors: [IGN terms] data fusion
[IGN terms] graph
[IGN terms] similarity measure
Abstract: (authors) A hash algorithm converts data into compact strings. In the multimedia domain, effective hashing is the key to large-scale similarity search in high-dimensional feature space. A limit of existing hashing techniques is that they typically use single features. In order to improve search performance, it is necessary to utilize multiple features. Due to the compactness requirement, concatenation of hash values from different features is not an optimal solution. Thus a fusion process is desired. In this paper, we solve the multiple feature fusion problem by a hash bit selection framework. Given multiple features, we derive an n-bit hash value of improved performance compared with hash values of the same length computed from each individual feature. The framework utilizes a feature-independent hash algorithm to generate a sufficient number of bits from each feature, and selects n bits from the hash bit pool by leveraging pair-wise label information. The metric bit reliability is used for ranking the bits. It is estimated by bit-level hypothesis testing. In addition, we also take into account the dependence among bits. A weighted graph is constructed for refined bit selection, where the bit reliability is used as vertex weights and the mutual information among hash bits is used as edge weights. We demonstrate our framework with LSH. Extensive experiments confirm that our method is effective, and outperforms several state-of-the-art methods. Record number: C2016-042
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICPR.2016.7899977
Online publication date: 24/04/2017
Online: https://doi.org/10.1109/ICPR.2016.7899977
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91854
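The abstract outlines a hash bit selection pipeline: generate a pool of bits from each feature, rank them by a bit-reliability metric estimated from pair-wise labels, then refine the selection with a weighted graph. A simplified sketch of the reliability-ranking step is given below; the graph-based refinement using mutual information is omitted, the reliability estimate is a plain agreement rate rather than the paper's hypothesis test, and the data is a toy example:

```python
# Toy pool: 3 items, 2 candidate hash bits each (rows = items, columns = bits).
hash_matrix = [[0, 1], [0, 0], [1, 1]]
# Pair-wise labels: (item_i, item_j, is_similar).
pairs = [(0, 1, True), (0, 2, False)]

def bit_reliability(bit_values, pairs):
    """Fraction of labeled pairs on which the bit behaves as desired:
    equal values for similar pairs, different values for dissimilar pairs."""
    correct = 0
    for i, j, similar in pairs:
        agree = bit_values[i] == bit_values[j]
        correct += agree == similar
    return correct / len(pairs)

def select_bits(hash_matrix, pairs, n):
    """Greedy top-n bit selection by reliability alone (the paper further
    refines this with a graph whose edges weigh inter-bit dependence)."""
    num_bits = len(hash_matrix[0])
    scored = []
    for b in range(num_bits):
        column = [row[b] for row in hash_matrix]
        scored.append((bit_reliability(column, pairs), b))
    scored.sort(reverse=True)
    return [b for _, b in scored[:n]]

best = select_bits(hash_matrix, pairs, 1)
```

Ranking by reliability alone can pick several highly correlated bits, which is exactly the redundancy the paper's mutual-information edge weights are meant to penalize.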