Author detail
Author: Chunlei Yang
Documents available written by this author (1)
Salient object detection in complex scenes via D-S evidence theory based region classification / Chunlei Yang in The Visual Computer, vol 33 no. 11 (November 2017)
[article]
Title: Salient object detection in complex scenes via D-S evidence theory based region classification
Document type: Article/Communication
Authors: Chunlei Yang; Jiexin Pu; Yongsheng Dong; et al.
Publication year: 2017
Article pages: pp 1415-1428
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] data fusion
[IGN terms] complex information
[IGN terms] indoor scene
[IGN terms] image segmentation
[IGN terms] Dempster-Shafer theory
[IGN terms] 3D salient zone
Abstract: (Author) In complex scenes, multiple objects are often concealed in cluttered backgrounds. Their saliency is difficult to detect with conventional methods, mainly because color contrast alone cannot serve as the saliency measure; other image features must be involved in saliency detection to obtain more accurate results. This paper presents a novel method that uses Dempster-Shafer (D-S) evidence theory based region classification. In the proposed framework, depth feature information extracted from a coarse map is employed to generate initial feature evidences, which indicate the probabilities of regions belonging to the foreground or the background. Based on D-S evidence theory, both uncertainty and imprecision are modeled, and conflicts between different feature evidences are properly resolved. Moreover, the method automatically determines the mass functions of the two-stage evidence fusion for region classification. According to the classification result and region relevance, a more precise saliency map can then be generated by manifold ranking. To further improve the detection results, a guided filter is used to optimize the saliency map. Both qualitative and quantitative evaluations on three challenging public benchmark datasets demonstrate that the proposed method outperforms the compared state-of-the-art methods, especially for detection in complex scenes.
Record number: A2017-713
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-016-1288-y
Online: https://doi.org/10.1007/s00371-016-1288-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88094
in The Visual Computer > vol 33 no. 11 (November 2017) . - pp 1415-1428 [article]
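
The abstract above classifies image regions as foreground or background by fusing several feature evidences with Dempster-Shafer theory. The following minimal Python sketch only illustrates the generic D-S ingredient, Dempster's rule of combination, applied to two hypothetical mass functions (a color-contrast cue and a depth cue) over the frame {foreground, background}; the cue names, mass values, and the combine helper are assumptions for illustration, not the paper's mass-function construction or its two-stage fusion scheme.

from itertools import product

# Frame of discernment for one region: foreground vs. background.
FG = frozenset({"foreground"})
BG = frozenset({"background"})
THETA = frozenset({"foreground", "background"})  # total ignorance


def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps focal sets (frozensets over the frame) to
    masses that sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidences cannot be combined")
    # Redistribute the conflicting mass by normalizing with 1 - K.
    return {s: m / (1.0 - conflict) for s, m in combined.items()}


if __name__ == "__main__":
    # Hypothetical evidences for one region: the color cue is ambiguous,
    # the depth cue strongly favors foreground (values are made up).
    m_color = {FG: 0.40, BG: 0.35, THETA: 0.25}
    m_depth = {FG: 0.70, BG: 0.10, THETA: 0.20}
    m = combine(m_color, m_depth)
    label = "foreground" if m.get(FG, 0.0) > m.get(BG, 0.0) else "background"
    for focal, mass in sorted(m.items(), key=lambda kv: -kv[1]):
        print(sorted(focal), round(mass, 3))
    print("region classified as:", label)

Running the sketch, the strong depth evidence outweighs the ambiguous color evidence and the region is labeled foreground; in the method described by the abstract, such a classification would then feed the manifold-ranking and guided-filter refinement steps.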