Author details
Author: Yiting Tao
Documents available by this author (2)



Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification / Ming Cong in Geocarto international, vol 36 n° 18 ([01/10/2021])
[article]
Title: Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification
Document type: Article/Communication
Authors: Ming Cong, Author; Zhiye Wang, Author; Yiting Tao, Author; et al., Author
Publication year: 2021
Pagination: pp 2065 - 2084
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] cluster analysis
[IGN terms] chromatopsia
[IGN terms] unsupervised classification
[IGN terms] classification by convolutional neural network
[IGN terms] image understanding
[IGN terms] image sampling
[IGN terms] digital image filtering
[IGN terms] UAV-captured image
[IGN terms] vision
[IGN terms] computer vision
Abstract: (author) Unmanned aerial vehicle remote sensing images need to be classified precisely and efficiently. However, the complex ground scenes produced by ultra-high ground resolution, the data uniqueness caused by multi-perspective observations, and the need for manual labelling make it difficult for current popular deep learning networks to obtain reliable references from heterogeneous samples. To address these problems, this paper proposes an optic nerve microsaccade (ONMS) classification network built on multiple dilated convolutions. ONMS first applies a Laplacian of Gaussian filter to find typical features of ground objects and establishes class labels using adaptive clustering. Then, using an image pyramid, multi-scale image data are mapped to the class labels adaptively to generate homologous, reliable samples. Finally, an end-to-end multi-scale neural network is applied for classification. Experimental results show that ONMS significantly reduces sample-labelling costs while retaining high cognitive performance, classification accuracy, and noise resistance, indicating that it has significant application advantages.
Record number: A2021-707
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/10106049.2019.1687593
Online publication date: 07/11/2019
Online: https://doi.org/10.1080/10106049.2019.1687593
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98602
in Geocarto international > vol 36 n° 18 [01/10/2021] . - pp 2065 - 2084 [article]
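The abstract describes a three-stage pipeline: Laplacian of Gaussian filtering to pick out typical ground-object responses, adaptive clustering to derive class labels without manual annotation, and a multi-scale dilated-convolution network trained on the resulting samples. The Python sketch below is only a minimal illustration of that kind of workflow, not the authors' ONMS implementation: plain k-means stands in for the paper's adaptive clustering, the image-pyramid step is omitted, and the names `pseudo_labels` and `MultiDilationNet` are invented for the example.

```python
# Hypothetical sketch of an ONMS-like pipeline, not the authors' implementation:
# LoG filtering, cluster-derived pseudo-labels, and a small multi-dilation CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import gaussian_laplace
from sklearn.cluster import KMeans


def pseudo_labels(image: np.ndarray, n_classes: int = 5) -> np.ndarray:
    """Derive per-pixel pseudo-labels from Laplacian-of-Gaussian responses.

    `image` is H x W x C (e.g. an RGB UAV tile). Plain k-means replaces the
    paper's adaptive clustering for the purposes of this illustration.
    """
    log_resp = np.stack(
        [gaussian_laplace(image[..., c].astype(np.float32), sigma=2.0)
         for c in range(image.shape[-1])], axis=-1)
    feats = np.concatenate([image, log_resp], axis=-1).reshape(-1, 2 * image.shape[-1])
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
    return labels.reshape(image.shape[:2])


class MultiDilationNet(nn.Module):
    """Tiny stand-in for the multi-scale, dilated-convolution classifier."""

    def __init__(self, in_ch: int = 3, n_classes: int = 5):
        super().__init__()
        # Parallel branches with different dilation rates gather multi-scale context.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)])
        self.head = nn.Conv2d(16 * 3, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.head(feats)  # per-pixel class scores


if __name__ == "__main__":
    tile = np.random.rand(64, 64, 3).astype(np.float32)   # stand-in UAV tile
    labels = pseudo_labels(tile)                           # unsupervised labels
    net = MultiDilationNet()
    scores = net(torch.from_numpy(tile).permute(2, 0, 1).unsqueeze(0))
    loss = nn.CrossEntropyLoss()(scores, torch.from_numpy(labels).long().unsqueeze(0))
    loss.backward()
    print(scores.shape, float(loss))
```

The parallel branches with dilation rates 1, 2 and 4 are one common way to approximate multi-scale context without an explicit pyramid; the paper's own multi-scale design may differ.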
Unsupervised-restricted deconvolutional neural network for very high resolution remote-sensing image classification / Yiting Tao in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
[article]
Title: Unsupervised-restricted deconvolutional neural network for very high resolution remote-sensing image classification
Document type: Article/Communication
Authors: Yiting Tao, Author; Miaozhong Xu, Author; Fan Zhang, Author; Bo Du, Author; Liangpei Zhang, Author
Publication year: 2017
Pagination: pp 6805 - 6823
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] unsupervised learning
[IGN terms] per-pixel classification
[IGN terms] deconvolution
[IGN terms] Geoeye image
[IGN terms] Quickbird image
[IGN terms] kernel-based method
[IGN terms] convolutional neural network
Abstract: (author) As the acquisition of very high resolution (VHR) satellite images becomes easier owing to technological advances, ever more stringent requirements are being imposed on automatic image interpretation, and per-pixel classification has become a focus of research interest. However, the efficient and effective processing and interpretation of VHR satellite images remain a critical task. Convolutional neural networks (CNNs) have recently been applied to VHR satellite images with considerable success. However, prevalent CNN models accept input data of fixed sizes and train the classifier on features extracted directly from the convolutional stages or the fully connected layers, which cannot yield pixel-to-pixel classifications. Moreover, training a CNN model requires large amounts of labeled reference data, which are challenging to obtain because per-pixel labeled VHR satellite images are not open access. In this paper, we propose a framework called the unsupervised-restricted deconvolutional neural network (URDNN). It solves these problems by learning an end-to-end, pixel-to-pixel classification and handling VHR classification using a fully convolutional network and a small number of labeled pixels. In URDNN, supervised learning is always under the restriction of unsupervised learning, which constrains and aids supervised training in learning more generalized and abstract features. To some degree, this reduces the overfitting and undertraining that arise from the scarcity of labeled training data, yields better classification results from fewer training samples, and improves the generality of the classification model. We tested the proposed URDNN on images from the Geoeye and Quickbird sensors and obtained satisfactory results, with the highest overall accuracy (OA) reaching 0.977 and 0.989, respectively. Experiments showed that the combined effect of additional kernels and stages may produce better results, and that the two-stage URDNN consistently produced more stable results. We compared URDNN with four other methods and found that, with a small ratio of selected labeled data, it yielded the highest and most stable results, whereas the accuracy of the other methods decreased quickly. For some categories with fewer training pixels, the accuracy of the other methods was considerably worse than that of URDNN, with the largest difference reaching almost 10%. Hence, the proposed URDNN can successfully handle VHR image classification using a small number of labeled pixels and is more effective than state-of-the-art methods.
Record number: A2017-766
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2734697
Online: https://doi.org/10.1109/TGRS.2017.2734697
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88803
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 12 (December 2017) . - pp 6805 - 6823 [article]
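The URDNN abstract describes a fully convolutional encoder with a deconvolutional decoder in which an unsupervised branch restricts a supervised per-pixel classifier trained from only a few labeled pixels. The sketch below is a minimal, hypothetical rendering of that idea in Python (PyTorch), not the authors' network: the reconstruction head standing in for the unsupervised restriction, the joint loss, and the use of ignore_index for unlabeled pixels are illustrative choices, and names such as `URDNNSketch` and `joint_loss` are invented.

```python
# Hypothetical sketch of a URDNN-like model, not the authors' code: a fully
# convolutional encoder with a deconvolutional (transposed-convolution) decoder
# whose reconstruction loss restricts the supervised per-pixel classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class URDNNSketch(nn.Module):
    def __init__(self, in_ch: int = 3, n_classes: int = 6):
        super().__init__()
        # Convolutional encoder (downsamples by 2).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Deconvolutional decoder shared by both tasks (upsamples back to input size).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.reconstruct = nn.Conv2d(32, in_ch, 3, padding=1)   # unsupervised head
        self.classify = nn.Conv2d(32, n_classes, 1)             # supervised head

    def forward(self, x: torch.Tensor):
        shared = self.decoder(self.encoder(x))
        return self.classify(shared), self.reconstruct(shared)


def joint_loss(scores, recon, image, labels, weight: float = 1.0):
    """Cross-entropy on the few labeled pixels (unlabeled pixels carry -1) plus a
    reconstruction term over the whole image as the unsupervised restriction."""
    ce = F.cross_entropy(scores, labels, ignore_index=-1)
    mse = F.mse_loss(recon, image)
    return ce + weight * mse


if __name__ == "__main__":
    net = URDNNSketch()
    image = torch.rand(1, 3, 64, 64)                         # stand-in VHR patch
    labels = torch.full((1, 64, 64), -1, dtype=torch.long)   # mostly unlabeled
    labels[:, ::8, ::8] = torch.randint(0, 6, (1, 8, 8))     # a few labeled pixels
    scores, recon = net(image)
    loss = joint_loss(scores, recon, image, labels)
    loss.backward()
    print(scores.shape, recon.shape, float(loss))
```

Sharing the decoder between the reconstruction and classification heads is how this sketch expresses the abstract's "supervised learning under the restriction of unsupervised learning"; the actual URDNN architecture and loss weighting are described in the paper itself.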