Author details
Documents available written by this author



Semantic signatures for large-scale visual localization / Li Weng in Multimedia tools and applications, volume unknown (2020)
Title: Semantic signatures for large-scale visual localization
Document type: Article/Communication
Authors: Li Weng, author; Valérie Gouet-Brunet, author; Bahman Soheilian, author
Year of publication: 2020
Projects: THINGS2D0 / Gouet-Brunet, Valérie
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] optical image processing
[IGN descriptor terms] semantic matching
[IGN descriptor terms] semantic labelling
[IGN descriptor terms] feature extraction
[IGN descriptor terms] digital image
[IGN descriptor terms] semantic information
[IGN descriptor terms] content-based image retrieval
[IGN descriptor terms] urban area
Abstract: (author) Visual localization is a useful alternative to standard localization techniques. It works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. These approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called “semantic signature” is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper at CBMI’18. A more efficient retrieval protocol is presented with additional experimental results.
Record number: A2020-367
Author affiliation: LaSTIG MATIS+Ext (2012-2019)
Topic: IMAGERY / COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11042-020-08992-6
Online publication date: 07/05/2020
Online: https://doi.org/10.1007/s11042-020-08992-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95407
In: Multimedia tools and applications > volume unknown (2020)
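The abstract above defines a semantic signature as the type and angle information of objects visible from a spatial location, compared with several metrics of varying cost. The paper's exact data structure and metrics are not reproduced here; the following is a minimal sketch, assuming a signature is a list of (object type, angle) pairs and using a hypothetical angle-tolerant matching distance.

```python
# Minimal sketch of a "semantic signature": type and angle information of
# visible objects at a spatial location. The distance below is a
# hypothetical illustration, not the paper's actual metric.
from dataclasses import dataclass
from math import pi

@dataclass
class SemanticSignature:
    location_id: str
    objects: list  # list of (object_type, angle_rad) pairs, angles in [0, 2*pi)

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles, in radians."""
    d = abs(a - b) % (2 * pi)
    return min(d, 2 * pi - d)

def signature_distance(query: SemanticSignature, reference: SemanticSignature,
                       angle_tolerance: float = pi / 18) -> float:
    """Hypothetical distance: fraction of query objects with no same-type
    reference object within the angle tolerance (0 = perfect match)."""
    if not query.objects:
        return 1.0
    unmatched = 0
    for obj_type, angle in query.objects:
        candidates = [a for t, a in reference.objects if t == obj_type]
        if not any(angular_distance(angle, a) <= angle_tolerance for a in candidates):
            unmatched += 1
    return unmatched / len(query.objects)

# Example: a query view seeing a lamp post at 0.1 rad and a traffic sign at 2.0 rad.
q = SemanticSignature("query", [("lamp_post", 0.1), ("traffic_sign", 2.0)])
r = SemanticSignature("loc_42", [("lamp_post", 0.15), ("traffic_sign", 2.05), ("tree", 4.0)])
print(signature_distance(q, r))  # 0.0: both query objects are matched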
Title: Semantic signatures for urban visual localization
Document type: Article/Communication
Authors: Li Weng, author; Bahman Soheilian, author; Valérie Gouet-Brunet, author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Year of publication: 2018
Projects: 2 - no information accessible - article not open
Conference: CBMI 2018, 16th International Conference on Content-Based Multimedia Indexing, 04/09/2018 - 06/09/2018, La Rochelle, France, Proceedings IEEE
Extent: 6 p.
Format: 21 x 30 cm
Languages: English (eng)
Descriptors: [IGN subject headings] image processing
[IGN descriptor terms] comparative analysis
[IGN descriptor terms] geolocated database
[IGN descriptor terms] feature extraction
[IGN descriptor terms] semantic information
[IGN descriptor terms] urban area
Abstract: (author) Visual localization is a useful alternative to standard localization techniques. In a typical scenario, features are extracted from images captured by cameras and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. They offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called “semantic signature” is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications.
Record number: C2018-064
Author affiliation: LaSTIG MATIS+Ext (2012-2019)
Topic: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/CBMI.2018.8516492
Online publication date: 15/11/2018
Online: https://doi.org/10.1109/CBMI.2018.8516492
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91430
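Since this conference version also stresses retrieval protocols with different accuracy/complexity trade-offs, the sketch below shows one hypothetical two-stage protocol: a cheap object-type histogram filter followed by a finer angle-aware ranking. It is not the protocol proposed in the paper, only an illustration of the kind of trade-off the abstract describes; all function names are assumptions.

```python
# Hypothetical two-stage retrieval sketch over semantic signatures:
# stage 1 filters candidates by object-type histogram (cheap),
# stage 2 re-ranks the survivors with an angle-aware score (finer).
from collections import Counter
from math import pi

def type_histogram(signature):
    """signature: list of (object_type, angle_rad) pairs."""
    return Counter(t for t, _ in signature)

def histogram_similarity(h1, h2):
    """Intersection-over-union of two type histograms."""
    inter = sum((h1 & h2).values())
    union = sum((h1 | h2).values())
    return inter / union if union else 0.0

def angle_score(query, reference, tol=pi / 18):
    """Fraction of query objects matched by type within an angle tolerance."""
    if not query:
        return 0.0
    matched = 0
    for t, a in query:
        diffs = [min(abs(a - b) % (2 * pi), 2 * pi - abs(a - b) % (2 * pi))
                 for tt, b in reference if tt == t]
        if diffs and min(diffs) <= tol:
            matched += 1
    return matched / len(query)

def retrieve(query, database, k=5, candidate_pool=50):
    """database: dict mapping location_id -> signature."""
    qh = type_histogram(query)
    # Stage 1: keep the candidate_pool locations with the most similar type histograms.
    coarse = sorted(database.items(),
                    key=lambda kv: histogram_similarity(qh, type_histogram(kv[1])),
                    reverse=True)[:candidate_pool]
    # Stage 2: re-rank the surviving candidates with the finer angle-aware score.
    ranked = sorted(coarse, key=lambda kv: angle_score(query, kv[1]), reverse=True)
    return [loc for loc, _ in ranked[:k]]
```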
Title: Cross-domain image localization by adaptive feature fusion
Document type: Article/Communication
Authors: Neelanjan Bhowmik, author; Li Weng, author; Valérie Gouet-Brunet, author; Bahman Soheilian, author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Year of publication: 2017
Projects: POEME / Da Silva, Jean-Claude
Conference: JURSE 2017, Joint Urban Remote Sensing Event, 06/03/2017 - 08/03/2017, Dubai, United Arab Emirates, Proceedings IEEE
Extent: 4 p.
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] image processing
[IGN descriptor terms] image matching
[IGN descriptor terms] development environment
[IGN descriptor terms] pose estimation
[IGN descriptor terms] geopositioning
[IGN descriptor terms] regression model
[IGN descriptor terms] content-based image retrieval
[IGN descriptor terms] geographic information retrieval
[IGN descriptor terms] similarity
Abstract: (author) We address the problem of cross-domain image localization, i.e., the ability to estimate the pose of a landmark from visual content acquired under various conditions, such as old photographs, paintings, photos taken in a particular season, etc. We explore a 2D approach where the pose is estimated from geo-localized reference images that visually match the query image. This work focuses on the retrieval of similar images, which is a challenging task for images across different domains. We propose a Content-Based Image Retrieval (CBIR) framework that adaptively combines multiple image descriptions. A regression model is used to select the best feature combinations according to their spatial complementarity, globally for a whole dataset as well as adaptively for each given image. The framework is evaluated on different datasets and the experiments demonstrate its advantage over classical retrieval approaches.
Record number: C2017-028
Author affiliation: LaSTIG MATIS (2012-2019)
Topic: IMAGERY / COMPUTER SCIENCE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/JURSE.2017.7924572
Online publication date: 11/05/2017
Online: https://doi.org/10.1109/JURSE.2017.7924572
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89292
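The abstract describes a CBIR framework that adaptively weights multiple image descriptions, with a regression model choosing feature combinations by their spatial complementarity. The sketch below shows a generic late-fusion scheme along those lines; the weight-learning step (ordinary least squares on past per-descriptor retrieval quality) and all function names are hypothetical stand-ins, not the paper's actual model.

```python
# Late-fusion sketch for cross-domain image retrieval: similarity scores
# from several descriptors are normalised and combined with weights
# predicted by a simple linear model. This is an illustration of the
# adaptive-fusion idea, not the paper's regression model.
import numpy as np

def fuse_scores(score_matrices, weights):
    """score_matrices: list of (n_queries, n_refs) similarity arrays, one
    per descriptor; weights: (n_descriptors,) array of fusion weights."""
    fused = np.zeros_like(score_matrices[0], dtype=float)
    for w, s in zip(weights, score_matrices):
        # Normalise each descriptor's scores so different ranges are comparable.
        s = (s - s.min()) / (s.max() - s.min() + 1e-9)
        fused += w * s
    return fused

def learn_weight_predictor(query_features, per_descriptor_quality):
    """Fit a linear model mapping per-query features (e.g. descriptor
    statistics) to observed per-descriptor retrieval quality, then use it
    to predict fusion weights for new queries."""
    X = np.asarray(query_features)           # (n_train, d)
    Y = np.asarray(per_descriptor_quality)   # (n_train, n_descriptors)
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def predict(new_query_features):
        w = np.asarray(new_query_features) @ coeffs
        w = np.clip(w, 0, None)              # keep weights non-negative
        return w / (w.sum() + 1e-9)          # normalise to sum to 1
    return predict
```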
Title: A feature fusion framework for hashing
Document type: Article/Communication
Authors: I-Hong Jhuo, author; Li Weng, author; Wen-Huang Cheng, author; D.T. Lee, author
Congress: ICPR 2016, 23rd International Conference on Pattern Recognition (4 - 8 December 2016; Cancun, Mexico), Sponsor
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Year of publication: 2016
Extent: pp 2289 - 2294
General note: bibliography
Languages: English (eng)
Descriptors: [IGN descriptor terms] data fusion
[IGN descriptor terms] graph
[IGN descriptor terms] similarity measure
Abstract: (author) A hash algorithm converts data into compact strings. In the multimedia domain, effective hashing is the key to large-scale similarity search in high-dimensional feature space. A limitation of existing hashing techniques is that they typically use single features. In order to improve search performance, it is necessary to utilize multiple features. Due to the compactness requirement, concatenation of hash values from different features is not an optimal solution; a fusion process is therefore desired. In this paper, we solve the multiple-feature fusion problem with a hash bit selection framework. Given multiple features, we derive an n-bit hash value of improved performance compared with hash values of the same length computed from each individual feature. The framework uses a feature-independent hash algorithm to generate a sufficient number of bits from each feature, and selects n bits from the hash bit pool by leveraging pair-wise label information. The metric bit reliability, estimated by bit-level hypothesis testing, is used for ranking the bits. In addition, we take into account the dependence among bits: a weighted graph is constructed for refined bit selection, where bit reliability is used as vertex weights and the mutual information among hash bits is used as edge weights. We demonstrate our framework with LSH. Extensive experiments confirm that our method is effective and outperforms several state-of-the-art methods.
Record number: C2016-042
Author affiliation: LaSTIG MATIS+Ext (2012-2019)
Topic: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICPR.2016.7899977
Online publication date: 24/04/2017
Online: https://doi.org/10.1109/ICPR.2016.7899977
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91854
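The abstract outlines a pipeline: generate a surplus of bits per feature with a feature-independent hash such as LSH, score bit reliability against pair-wise labels, and select n bits, with a graph over mutual information for refinement. The sketch below covers only the first two steps, using random-projection LSH and a simplified agreement-based reliability score; the hypothesis-testing estimate and the graph-based refinement are omitted, and all names are assumptions.

```python
# Simplified bit-selection sketch: pool LSH bits from several features,
# score each bit against pair-wise labels, keep the n best. The
# reliability score is a crude stand-in for the paper's bit-level
# hypothesis testing; the mutual-information graph step is omitted.
import numpy as np

def lsh_bits(features, n_bits, seed=0):
    """Random-projection LSH: features (n_samples, d) -> bits (n_samples, n_bits)."""
    rng = np.random.default_rng(seed)
    projections = rng.standard_normal((features.shape[1], n_bits))
    return (features @ projections > 0).astype(np.uint8)

def bit_reliability(bits, similar_pairs, dissimilar_pairs):
    """Score each bit by how often it agrees on similar pairs and
    disagrees on dissimilar pairs."""
    def agreement(pairs):
        i, j = np.array(pairs).T
        return (bits[i] == bits[j]).mean(axis=0)  # per-bit agreement rate
    return agreement(similar_pairs) - agreement(dissimilar_pairs)

def select_bits(feature_sets, n, similar_pairs, dissimilar_pairs, bits_per_feature=64):
    """Pool bits from several feature matrices and keep the n most reliable."""
    pool = np.hstack([lsh_bits(f, bits_per_feature, seed=k)
                      for k, f in enumerate(feature_sets)])
    scores = bit_reliability(pool, similar_pairs, dissimilar_pairs)
    keep = np.argsort(scores)[::-1][:n]
    return pool[:, keep], keep
```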