Descriptor
Documents available in this category (9)



Title: Saliency and Burstiness for Feature Selection in CBIR
Document type: Article/Communication
Authors: Kamel Guissous, Author; Valérie Gouet-Brunet, Author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Year of publication: 2020
Projects: 2-No accessible info - article not open access
Conference: EUVIP 2019, 8th European Workshop on Visual Information Processing, 28/10/2019-31/10/2019, Rome, Italy, Proceedings IEEE
Extent: pp 111 - 116
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] object-based image analysis
[IGN terms] visual analysis
[IGN terms] feature extraction
[IGN terms] content-based image retrieval
[IGN terms] saliency
[IGN terms] 3D salient zone
Abstract: (author) The paper addresses the problem of visual feature selection in content-based image retrieval (CBIR). We propose to study two strategies: the first uses visual saliency, which selects the most salient features of the image, and the second exploits burstiness, which detects and processes repeated visual elements in the image. To detect and describe the visual features in images, we rely on a deep local features approach based on a convolutional neural network. The two strategies are evaluated for image retrieval on different datasets, according to two criteria: quality of retrieval and volume of manipulated features.
Record number: C2019-027
Authors' affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/EUVIP47703.2019.8946126
Online publication date: 02/01/2020
Online: https://ieeexplore.ieee.org/document/8946126
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94521
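As a rough illustration of the two selection strategies named in this abstract, the sketch below filters local features by a saliency map and down-weights repeated (bursty) descriptors. It is not the authors' implementation; the thresholds, the 1/sqrt weighting, and all function names are hypothetical stand-ins.

```python
# Hedged sketch: saliency-based feature selection and burstiness down-weighting
# for CBIR. Not the paper's method; all names and thresholds are hypothetical.
import numpy as np

def select_salient_features(keypoints, descriptors, saliency_map, threshold=0.5):
    """Keep only local features whose (x, y) keypoint falls on a salient pixel."""
    rows = keypoints[:, 1].astype(int)
    cols = keypoints[:, 0].astype(int)
    mask = saliency_map[rows, cols] >= threshold
    return keypoints[mask], descriptors[mask]

def burstiness_weights(descriptors, sim_threshold=0.9):
    """Down-weight descriptors repeated within the image (intra-image burstiness).
    Each descriptor gets weight 1/sqrt(m), where m counts its near-duplicates."""
    d = descriptors / (np.linalg.norm(descriptors, axis=1, keepdims=True) + 1e-12)
    sims = d @ d.T                                      # cosine similarity matrix
    burst_count = (sims >= sim_threshold).sum(axis=1)   # includes self, so m >= 1
    return 1.0 / np.sqrt(burst_count)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kps = rng.uniform(0, 64, size=(100, 2))             # toy (x, y) keypoints
    descs = rng.normal(size=(100, 128)).astype(np.float32)
    sal = rng.uniform(size=(64, 64))                    # toy saliency map
    kps_s, descs_s = select_salient_features(kps, descs, sal)
    w = burstiness_weights(descs_s)
    print(len(kps_s), "features kept; mean burstiness weight:", w.mean())
```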
Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors / Shibiao Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors
Document type: Article/Communication
Authors: Shibiao Xu, Author; Xingjia Pan, Author; Er Li, Author; et al., Author
Year of publication: 2018
Article pages: pp 7369 - 7387
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] conditional random field
[IGN terms] building detection
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] image captured by drone
[IGN terms] RGB image
[IGN terms] iteration
[IGN terms] urban scene
[IGN terms] image segmentation
[IGN terms] hierarchical segmentation
[IGN terms] roof
[IGN terms] 3D salient zone
Abstract: (author) Accurate building rooftop extraction from high-resolution aerial images is of crucial importance in a wide range of applications. Owing to the varying appearance and large scale range of scene objects, especially building rooftops of different scales and heights, single-scale or individual prior-based extraction techniques are insufficient for efficient, generic, and accurate extraction results. Integrating multiscale or multiple-cue techniques appears to be the best way forward; such integration is thus the focus of this paper. We first propose a novel salient rooftop detector integrating four correlative RGB-D priors (depth cue, uniqueness prior, shape prior, and transition surface prior) for improved rooftop extraction, to address the complex issues mentioned above. Then, these correlative cues are computed from image layers created by our multilevel segmentation and further fused into the state-of-the-art high-order conditional random field (CRF) framework to locate the rooftop. Finally, an iterative optimization strategy is applied to obtain a high-quality solution, which can robustly handle the varying appearance of building rooftops. Performance evaluations on the SZTAKI-INRIA benchmark data sets show that our method outperforms the traditional color-based algorithm and the original high-order CRF algorithm and its variants. The proposed algorithm is also evaluated and found to produce consistently satisfactory results on various large-scale, real-world data sets.
Record number: A2018-558
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2850972
Online publication date: 26/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2850972
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91664
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 7369 - 7387 [article]
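The fusion of the four RGB-D priors described in this abstract can be pictured as combining per-region cue scores into a single rooftop-saliency score. The sketch below shows only a weighted combination; the paper fuses the cues inside a high-order CRF, which is not reproduced here. Weights, cue values, and names are hypothetical.

```python
# Hedged sketch of prior fusion for rooftop detection: four per-region RGB-D cue
# scores are combined into one saliency score. A stand-in for the paper's
# high-order CRF fusion; weights and cue functions are made up for illustration.
import numpy as np

def fuse_rooftop_priors(depth_cue, uniqueness, shape_prior, transition_prior,
                        weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine four per-region prior scores (each in [0, 1]) into one score."""
    cues = np.stack([depth_cue, uniqueness, shape_prior, transition_prior])
    return np.tensordot(np.asarray(weights), cues, axes=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_regions = 10                                     # regions from segmentation
    scores = fuse_rooftop_priors(*(rng.uniform(size=n_regions) for _ in range(4)))
    rooftop_regions = np.flatnonzero(scores > 0.5)     # hypothetical cutoff
    print("candidate rooftop regions:", rooftop_regions)
```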
Hierarchical cellular automata for visual saliency / Yao Qin in International journal of computer vision, vol 126 n° 7 (July 2018)
[article]
Title: Hierarchical cellular automata for visual saliency
Document type: Article/Communication
Authors: Yao Qin, Author; Mengyang Feng, Author; Huchuan Lu, Author; Garrison W. Cottrell, Author
Year of publication: 2018
Article pages: pp 751 - 770
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] cellular automaton
[IGN terms] Bayesian classification
[IGN terms] artificial neural network
[IGN terms] 3D salient zone
Abstract: (author) Saliency detection, finding the most important parts of an image, has become increasingly popular in computer vision. In this paper, we introduce Hierarchical Cellular Automata (HCA), a temporally evolving model to intelligently detect salient objects. HCA consists of two main components: Single-layer Cellular Automata (SCA) and Cuboid Cellular Automata (CCA). As an unsupervised propagation mechanism, Single-layer Cellular Automata can exploit the intrinsic relevance of similar regions through interactions with neighbors. Low-level image features as well as high-level semantic information extracted from deep neural networks are incorporated into the SCA to measure the correlation between different image patches. With these hierarchical deep features, an impact factor matrix and a coherence matrix are constructed to balance the influences on each cell's next state. The saliency values of all cells are iteratively updated according to a well-defined update rule. Furthermore, we propose CCA to integrate multiple saliency maps generated by SCA at different scales in a Bayesian framework. Therefore, single-layer propagation and multi-scale integration are jointly modeled in our unified HCA. Surprisingly, we find that the SCA can improve all existing methods that we applied it to, resulting in a similar precision level regardless of the original results. The CCA can act as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, resulting in even better results. Extensive experiments on four challenging datasets demonstrate that the proposed algorithm outperforms state-of-the-art conventional methods and is competitive with deep-learning-based approaches.
Record number: A2018-413
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-017-1062-2
Online publication date: 23/02/2018
Online: https://doi.org/10.1007/s11263-017-1062-2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90896
in International journal of computer vision > vol 126 n° 7 (July 2018) . - pp 751 - 770 [article]
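The SCA propagation step described in this abstract can be sketched as a synchronous update in which each cell's saliency is pulled toward its feature-similar neighbors, balanced against its current state by a coherence factor. This is a simplified stand-in for the paper's impact-factor and coherence matrices; the similarity kernel and all parameter values are hypothetical.

```python
# Hedged sketch of a Single-layer Cellular Automaton saliency update:
# s <- c * s + (1 - c) * F @ s, where F is a row-normalized feature-similarity
# (impact factor) matrix. Simplified stand-in, not the paper's exact rule.
import numpy as np

def sca_update(saliency, features, n_iters=10, sigma=10.0, coherence=0.6):
    """Iteratively propagate saliency between feature-similar cells."""
    diffs = features[:, None, :] - features[None, :, :]
    F = np.exp(-np.sum(diffs**2, axis=-1) / sigma)     # pairwise similarity
    np.fill_diagonal(F, 0.0)                           # no self-influence
    F /= F.sum(axis=1, keepdims=True)                  # row-normalize
    s = saliency.copy()
    for _ in range(n_iters):
        s = coherence * s + (1.0 - coherence) * F @ s  # synchronous update
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    feats = rng.normal(size=(50, 8))                   # toy per-cell deep features
    s0 = rng.uniform(size=50)                          # initial saliency prior
    print(sca_update(s0, feats)[:5])
```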
Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s) / Danna Gurari in International journal of computer vision, vol 126 n° 7 (July 2018)
[article]
Title: Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s)
Document type: Article/Communication
Authors: Danna Gurari, Author; Kun He, Author; Bo Xiong, Author; et al., Author
Year of publication: 2018
Article pages: pp 714 - 730
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] object detection
[IGN terms] ground truth
[IGN terms] image segmentation
[IGN terms] 3D salient zone
Abstract: (author) We introduce the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) and minor inter-annotator differences on the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as "ambiguous" or "not ambiguous" to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid "ground truth" foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths.
Record number: A2018-412
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1065-7
Online publication date: 05/02/2018
Online: https://doi.org/10.1007/s11263-018-1065-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90887
in International journal of computer vision > vol 126 n° 7 (July 2018) . - pp 714 - 730 [article]
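The cost-saving logic in this abstract reduces to: predict ambiguity per image, and request extra crowd segmentations only when ambiguity is predicted. The sketch below uses a generic logistic-regression classifier on made-up features as a stand-in for the STATIC-trained predictor; features, labels, and the segmentation budget are all hypothetical.

```python
# Hedged sketch of ambiguity-aware crowdsourcing: a binary classifier predicts
# whether an image is "ambiguous" to segment; extra segmentations are collected
# only for predicted-ambiguous images. Stand-in features and classifier choice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_train = rng.normal(size=(200, 16))        # toy per-image features
y_train = rng.integers(0, 2, size=200)      # 1 = ambiguous, 0 = not ambiguous
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def segmentations_to_collect(x, base=1, extra=4):
    """Collect one segmentation by default; more only if ambiguity is predicted."""
    return base + extra * int(clf.predict(x.reshape(1, -1))[0])

print(segmentations_to_collect(rng.normal(size=16)))
```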
Salient object detection in complex scenes via D-S evidence theory based region classification / Chunlei Yang in The Visual Computer, vol 33 n° 11 (November 2017)
[article]
Title: Salient object detection in complex scenes via D-S evidence theory based region classification
Document type: Article/Communication
Authors: Chunlei Yang, Author; Jiexin Pu, Author; Yongsheng Dong, Author; et al., Author
Year of publication: 2017
Article pages: pp 1415 - 1428
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] data fusion
[IGN terms] complex information
[IGN terms] indoor scene
[IGN terms] image segmentation
[IGN terms] Dempster-Shafer theory
[IGN terms] 3D salient zone
Abstract: (author) In complex scenes, multiple objects are often concealed in cluttered backgrounds. Their saliency is difficult to detect with conventional methods, mainly because color contrast alone cannot serve as the saliency measure; other image features must be involved in saliency detection to obtain more accurate results. Using Dempster-Shafer (D-S) evidence theory based region classification, a novel method is presented in this paper. In the proposed framework, depth feature information extracted from a coarse map is employed to generate initial feature evidences, which indicate the probabilities of regions belonging to the foreground or background. Based on D-S evidence theory, both uncertainty and imprecision are modeled, and conflicts between different feature evidences are properly resolved. Moreover, the method can automatically determine the mass functions of the two-stage evidence fusion for region classification. According to the classification result and region relevance, a more precise saliency map can then be generated by manifold ranking. To further improve the detection results, a guided filter is utilized to optimize the saliency map. Both qualitative and quantitative evaluations on three publicly available challenging benchmark datasets demonstrate that the proposed method outperforms the compared state-of-the-art methods, especially for detection in complex scenes.
Record number: A2017-713
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-016-1288-y
Online: https://doi.org/10.1007/s00371-016-1288-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88094
in The Visual Computer > vol 33 n° 11 (November 2017) . - pp 1415 - 1428 [article]
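Dempster's rule of combination, the core of the D-S fusion described in this abstract, can be written down directly for the two-class case (foreground F, background B, with the frame Theta = {F, B} carrying the uncertainty mass). The mass values below are invented for illustration; the paper derives its mass functions automatically from image feature evidences.

```python
# Hedged sketch of Dempster-Shafer evidence fusion for two-class region labeling.
# Focal elements: 'F' (foreground), 'B' (background), 'Theta' = {F, B}.
def dempster_combine(m1, m2):
    """Dempster's rule: renormalize agreeing mass after discarding conflict."""
    conflict = m1["F"] * m2["B"] + m1["B"] * m2["F"]
    norm = 1.0 - conflict
    if norm <= 0:
        raise ValueError("total conflict: evidences cannot be combined")
    return {
        "F": (m1["F"] * m2["F"] + m1["F"] * m2["Theta"] + m1["Theta"] * m2["F"]) / norm,
        "B": (m1["B"] * m2["B"] + m1["B"] * m2["Theta"] + m1["Theta"] * m2["B"]) / norm,
        "Theta": (m1["Theta"] * m2["Theta"]) / norm,
    }

if __name__ == "__main__":
    depth_evidence = {"F": 0.6, "B": 0.1, "Theta": 0.3}   # hypothetical depth cue
    color_evidence = {"F": 0.5, "B": 0.2, "Theta": 0.3}   # hypothetical color cue
    fused = dempster_combine(depth_evidence, color_evidence)
    print(fused)  # classify the region as foreground if fused["F"] dominates
```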
A novel computer-aided tree species identification method based on burst wind segmentation of 3D bark textures / Alice Ahlem Othmani in Machine Vision and Applications, vol 27 n° 5 (July 2016)
Improved salient feature-based approach for automatically separating photosynthetic and nonphotosynthetic components within terrestrial Lidar point cloud data of forest canopies / Lixia Ma in IEEE Transactions on geoscience and remote sensing, vol 54 n° 2 (February 2016)
Segmentation of terrestrial laser scanning data using geometry and image information / S. Barnea in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013)