Author details
Author: Jun Zhou
Documents available by this author (4)
Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu, in Sensors, vol. 21, no. 3 (February 2021)
[article]
Title: Semi-supervised joint learning for hand gesture recognition from a single color image
Document type: Article/Communication
Authors: Chi Xu; Yunkai Jiang; Jun Zhou; et al.
Publication year: 2021
Pages: article no. 1007
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] object detection
[IGN terms] pose estimation
[IGN terms] color image
[IGN terms] dataset
[IGN terms] gesture recognition
Abstract: (author) Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning-based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. In the training process, a semi-supervised training scheme is designed to solve the problem of lacking proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
Record number: A2021-160
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.3390/s21031007
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21031007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97076
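The joint-learning scheme described in the abstract above can be sketched in a few lines: a shared intermediate feature feeds two task heads, and the pose loss is applied only when a pose annotation exists, which is the semi-supervised part. All sizes, the 5 gesture classes, and the 21-joint 3D hand model below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared encoder with two task heads (all sizes are illustrative assumptions).
W_shared = 0.1 * rng.normal(size=(16, 8))    # image feature -> shared feature
W_gesture = 0.1 * rng.normal(size=(8, 5))    # shared feature -> 5 gesture classes
W_pose = 0.1 * rng.normal(size=(8, 21 * 3))  # shared feature -> 21 joints in 3D

def forward(x):
    h = np.tanh(x @ W_shared)                # intermediate-level shared feature
    return h @ W_gesture, h @ W_pose         # gesture logits, 3D pose vector

def semi_supervised_loss(x, gesture_label, pose_target=None):
    """Cross-entropy on the gesture head; pose L2 only when annotation exists."""
    logits, pose = forward(x)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[gesture_label])
    if pose_target is not None:              # semi-supervised: pose labels may be missing
        loss += np.mean((pose - pose_target) ** 2)
    return loss

x = rng.normal(size=16)
loss_unlabeled = semi_supervised_loss(x, gesture_label=2)
loss_labeled = semi_supervised_loss(x, gesture_label=2, pose_target=np.zeros(63))
```

Because the pose term is a non-negative mean squared error, adding pose supervision can only add to the loss of a given sample; the benefit claimed in the abstract comes from gradients through the shared feature, which this forward-only sketch does not model.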
in Sensors > vol. 21, no. 3 (February 2021) . - article no. 1007
[article]
Conditional random field and deep feature learning for hyperspectral image classification / Fahim Irfan Alam, in IEEE Transactions on geoscience and remote sensing, vol. 57, no. 3 (March 2019)
[article]
Title: Conditional random field and deep feature learning for hyperspectral image classification
Document type: Article/Communication
Authors: Fahim Irfan Alam; Jun Zhou; Alan Wee-Chung Liew; Xiuping Jia; et al.
Publication year: 2019
Pages: pp. 1612-1628
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] multiband analysis
[IGN terms] conditional random field
[IGN terms] convolutional neural network classification
[IGN terms] deconvolution
[IGN terms] 3D geolocated data
[IGN terms] hyperspectral image
[IGN terms] voxel
Abstract: (author) Image classification is considered one of the critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful classification model by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) contributes further by capturing contextual information and thus improving classification performance. In this paper, we propose a method to classify hyperspectral images by considering both spectral and spatial information via a combined framework consisting of a CNN and a CRF. We use multiple spectral band groups to learn deep features using the CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract the semantic correlations between patches consisting of 3D data cubes. Furthermore, we introduce a deep deconvolution network that improves the final classification performance. We also introduce a new dataset and experiment with our proposed method on it, along with several widely adopted benchmark datasets, to evaluate the effectiveness of our method. By comparing our results with those from several state-of-the-art models, we show the promising potential of our method.
Record number: A2019-131
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2867679
Online publication date: 20/09/2018
Online: https://doi.org/10.1109/TGRS.2018.2867679
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92461
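The CNN-plus-CRF combination in the abstract above can be illustrated with a minimal sketch: per-pixel class scores (standing in for CNN outputs) act as unary potentials, a fixed Potts penalty stands in for the learned pairwise potentials, and iterated conditional modes (ICM) performs a simple energy-minimizing inference. The grid size, class count, and random scores are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 4, 4, 3                      # tiny 4x4 label grid, 3 classes (toy sizes)
unary = rng.normal(size=(H, W, K))     # stand-in for per-class CNN scores
beta = 0.5                             # fixed Potts smoothness weight (assumption)

def energy(lab):
    """CRF energy: negative unary score plus pairwise disagreement penalty."""
    e = -unary[np.arange(H)[:, None], np.arange(W)[None, :], lab].sum()
    e += beta * (lab[:-1, :] != lab[1:, :]).sum()   # vertical 4-neighbours
    e += beta * (lab[:, :-1] != lab[:, 1:]).sum()   # horizontal 4-neighbours
    return e

labels = unary.argmax(axis=2)          # initialise from the unary term alone
e_start = energy(labels)

# Iterated conditional modes: greedily relabel each pixel to lower the energy.
for _ in range(5):
    for i in range(H):
        for j in range(W):
            costs = []
            for k in range(K):
                trial = labels.copy()
                trial[i, j] = k
                costs.append(energy(trial))
            labels[i, j] = int(np.argmin(costs))
e_end = energy(labels)
```

Since each pixel update picks the minimum over all labels including the current one, the total energy never increases; the smoothness term is what pulls noisy per-pixel CNN decisions toward spatially coherent labelings.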
in IEEE Transactions on geoscience and remote sensing > vol. 57, no. 3 (March 2019) . - pp. 1612-1628
[article]
Dictionary learning-based feature-level domain adaptation for cross-scene hyperspectral image classification / Minchao Ye, in IEEE Transactions on geoscience and remote sensing, vol. 55, no. 3 (March 2017)
[article]
Title: Dictionary learning-based feature-level domain adaptation for cross-scene hyperspectral image classification
Document type: Article/Communication
Authors: Minchao Ye; Yuntao Qian; Jun Zhou; Yuan Yan Tang
Publication year: 2017
Pages: pp. 1544-1562
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] supervised learning
[IGN terms] supervised classification
[IGN terms] feature extraction
[IGN terms] hyperspectral image
[IGN terms] land cover
[IGN terms] logistic regression
Abstract: (author) A big challenge in hyperspectral image (HSI) classification is the small number of labeled pixels available for training a classifier. In real remote sensing applications, we often face the situation where an HSI scene is not labeled at all, or has only a very limited number of labeled pixels, while we have sufficient labeled pixels in another HSI scene with similar land cover classes. In this paper, we try to classify an HSI scene containing no labeled samples, or only a few, with the help of a similar HSI scene having a relatively large number of labeled samples. The former scene is defined as the target scene, while the latter is the source scene. We name this classification problem cross-scene classification. The main challenge of cross-scene classification is spectral shift: even for the same class in different scenes, the spectral distributions may deviate significantly. As all or most training samples are drawn from the source scene, while prediction is performed in the target scene, the difference in spectral distribution would greatly deteriorate classification performance. To solve this problem, we propose a dictionary learning-based feature-level domain adaptation technique, which aligns the spectral distributions between source and target scenes by projecting their spectral features into a shared low-dimensional embedding space via multitask dictionary learning. The basis atoms in the learned dictionary represent the common spectral components, which span a cross-scene feature space that minimizes the effect of spectral shift. After the HSIs of the two scenes are transformed into the shared space, any traditional HSI classification approach can be used. In this paper, sparse logistic regression (SRL) is selected as the classifier. In particular, if there are a few labeled pixels in the target domain, multitask SRL is used to further improve classification performance. The experimental results on synthetic and real HSIs show the advantages of the proposed method for cross-scene classification.
Record number: A2017-157
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2016.2627042
Online: http://dx.doi.org/10.1109/TGRS.2016.2627042
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=84694
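The core idea in the abstract above, coding both scenes against a shared dictionary so that spectral shift largely disappears in the code space, can be sketched as follows. Ridge-regularized coding stands in here for the paper's sparse multitask dictionary learning, and the dictionary is fixed rather than learned; band counts, atom counts, and the simulated spectral shift are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_atoms, n_pix = 20, 6, 50    # toy sizes, not the paper's

# A shared dictionary whose atoms play the role of common spectral components.
D = rng.normal(size=(n_bands, n_atoms))

# Source-scene spectra, and target-scene spectra with a simulated spectral shift.
codes_true = rng.random(size=(n_atoms, n_pix))
source = D @ codes_true
target = D @ codes_true + 0.05 * rng.normal(size=(n_bands, n_pix))

def encode(X, D, lam=0.1):
    """Ridge-regularised coding: a simplified stand-in for sparse coding."""
    A = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ X)   # codes in the shared embedding space

c_src = encode(source, D)
c_tgt = encode(target, D)

# In the shared low-dimensional space the two scenes are closely aligned,
# so a classifier trained on c_src can be applied to c_tgt.
gap = np.abs(c_src - c_tgt).mean()
```

Projecting onto the shared atoms suppresses the per-band perturbation, which is the feature-level alignment the method relies on before any classifier (sparse logistic regression in the paper) is trained.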
in IEEE Transactions on geoscience and remote sensing > vol. 55, no. 3 (March 2017) . - pp. 1544-1562
[article]
A manifold alignment approach for hyperspectral image visualization with natural color / Danping Liao, in IEEE Transactions on geoscience and remote sensing, vol. 54, no. 6 (June 2016)
[article]
Title: A manifold alignment approach for hyperspectral image visualization with natural color
Document type: Article/Communication
Authors: Danping Liao; Yuntao Qian; Jun Zhou; Yuan Yan Tang
Publication year: 2016
Pages: pp. 3151-3162
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] semi-supervised alignment
[IGN terms] point matching
[IGN terms] color (spectral variable)
[IGN terms] high-resolution image
[IGN terms] color image
[IGN terms] hyperspectral image
Abstract: (author) The trichromatic visualization of hundreds of bands in a hyperspectral image (HSI) has been an active research topic. The visualized image should convey as much information as possible from the original data and facilitate easy image interpretation. However, most existing methods display HSIs in false color, which contradicts user experience and expectation. In this paper, we propose a new framework for visualizing an HSI with natural color through the fusion of an HSI and a high-resolution color image via manifold alignment. Manifold alignment projects several data sets into a shared embedding space where the matching points between them are pairwise aligned. The embedding space bridges the gap between the high-dimensional spectral space of the HSI and the RGB space of the color image, making it possible to transfer natural color and spatial information from the color image to the HSI. In this way, a visualized image with a natural color distribution and fine spatial details can be generated. Another advantage of the proposed method is its flexible data setting for various scenarios. As our approach only needs to find a limited number of matching pixel pairs that depict the same object, the HSI and the color image can be captured from the same site or from semantically similar sites. Moreover, the learned projection function from the hyperspectral data space to the RGB space can be directly applied to other HSIs acquired by the same sensor to obtain a quick overview. Our method is also able to visualize user-specified bands as natural color images, which is very helpful for users scanning bands of interest.
Record number: A2016-849
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2015.2512659
Online: http://dx.doi.org/10.1109/TGRS.2015.2512659
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=82930
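The key transfer step in the abstract above, learning a projection from spectral space to RGB from a limited number of matching pixel pairs and then applying it to a whole scene, can be sketched with a plain least-squares fit. This linear map is a deliberate simplification of the manifold-alignment embedding, and all band counts, pair counts, and the hidden spectral-to-RGB mapping below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_pairs = 30, 40              # toy sizes, not the paper's

# Matching pixel pairs: hyperspectral signatures and their natural RGB colours.
hsi_pairs = rng.random(size=(n_pairs, n_bands))
P_true = rng.random(size=(n_bands, 3)) / n_bands   # hidden spectral -> RGB map
rgb_pairs = hsi_pairs @ P_true

# Least-squares projection learned from the matched pairs: a linear stand-in
# for the manifold-alignment projection function in the abstract.
P, *_ = np.linalg.lstsq(hsi_pairs, rgb_pairs, rcond=None)

# The learned projection can then colour a whole HSI from the same sensor.
full_scene = rng.random(size=(100, n_bands))
rgb_scene = full_scene @ P
```

With more pairs than bands and an exactly linear relation, the fit recovers the hidden map; real manifold alignment earns its keep precisely when the spectral-to-RGB relation is nonlinear and the pairs come from merely similar sites.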
in IEEE Transactions on geoscience and remote sensing > vol. 54, no. 6 (June 2016) . - pp. 3151-3162
[article]