Unsupervised deep representation learning for real-time tracking / Ning Wang in International journal of computer vision, vol. 129 no. 2 (February 2021)
[article]
Title: Unsupervised deep representation learning for real-time tracking
Document type: Article/Communication
Authors: Ning Wang, Author; Wengang Zhou, Author; Yibing Song, Author; et al.
Publication year: 2021
Pages: pp 400-418
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] convolutional neural network classification
[IGN terms] target detection
[IGN terms] filter
[IGN terms] moving object
[IGN terms] eye tracking
[IGN terms] object recognition
[IGN terms] Siamese neural network
[IGN terms] real time
[IGN terms] tracking
[IGN terms] trajectory (non-spatial vehicle)
[IGN terms] computer vision
Abstract: (author) Advances in visual tracking have been driven continuously by deep learning models. Typically, these models are trained with supervised learning on expensive labeled data. To reduce the manual annotation workload and learn to track arbitrary objects, we propose an unsupervised learning method for visual tracking. The motivation for our unsupervised learning is that a robust tracker should be effective in bidirectional tracking: it should localize a target object forward through successive frames and backtrace it to its initial position in the first frame. Based on this motivation, during training we measure the consistency between the forward and backward trajectories to learn a robust tracker from scratch using only unlabeled videos. We build our framework on a Siamese correlation filter network, and propose a multi-frame validation scheme and a cost-sensitive loss to facilitate unsupervised learning. Without bells and whistles, the proposed unsupervised tracker achieves the baseline accuracy of classic fully supervised trackers while running at real-time speed. Furthermore, our unsupervised framework shows potential for leveraging more unlabeled or weakly labeled data to further improve tracking accuracy.
Record number: A2021-353
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Type: Article
DOI: 10.1007/s11263-020-01357-4
Online publication date: 21/09/2020
Online: https://doi.org/10.1007/s11263-020-01357-4
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97604
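The abstract's core idea, forward-backward (cycle) consistency as an unsupervised training signal, can be illustrated with a minimal sketch. The toy `track_step` motion model and the `drift` term below are hypothetical placeholders, not the paper's Siamese correlation filter network; the sketch only shows how tracking a target forward through a clip and then backward yields a self-supervised loss (distance between the backtraced position and the starting position) with no labels required.

```python
# Toy illustration of forward-backward consistency for unsupervised tracking.
# A perfect tracker returns exactly to its starting point after the
# forward + backward passes, giving zero loss; any drift accumulates
# into a positive loss that can drive training.

def track_step(pos, frame_motion, drift=0.0):
    """Hypothetical one-step tracker: follows the frame's motion,
    with an optional systematic drift standing in for tracker error."""
    return (pos[0] + frame_motion[0] + drift,
            pos[1] + frame_motion[1] + drift)

def cycle_consistency_loss(init_pos, motions, drift=0.0):
    """Track forward over the clip, then backward (motions reversed and
    negated), and return the squared L2 distance between the backtraced
    position and the initial position."""
    pos = init_pos
    for m in motions:                      # forward pass: frame 1 .. N
        pos = track_step(pos, m, drift)
    for m in reversed(motions):            # backward pass: frame N .. 1
        pos = track_step(pos, (-m[0], -m[1]), drift)
    dx, dy = pos[0] - init_pos[0], pos[1] - init_pos[1]
    return dx * dx + dy * dy
```

A drift-free tracker closes the cycle exactly (loss 0), while a drifting one does not; in the paper this consistency signal is computed on response maps of a Siamese correlation filter network rather than on raw positions.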