Author details
Author: Qianqing Qin
Available documents written by this author (1)
Scene classification based on multiscale convolutional neural network / Yanfei Liu in IEEE Transactions on Geoscience and Remote Sensing, vol 56 n° 12 (December 2018)
[article]
Title: Scene classification based on multiscale convolutional neural network
Document type: Article/Communication
Authors: Yanfei Liu, Author; Yanfei Zhong, Author; Qianqing Qin, Author
Publication year: 2018
Article pages: pp 7109 - 7121
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] multidimensional image
[IGN terms] satellite image
[IGN terms] similarity measure
[IGN terms] object-oriented model
Abstract: (author) With the large amount of high-spatial-resolution images now available, scene classification aimed at obtaining high-level semantic concepts has drawn great attention. Convolutional neural networks (CNNs), which are typical deep learning methods, have been widely studied to automatically learn image features for scene classification. However, scene classification based on CNNs remains difficult due to the scale variation of the objects in remote sensing imagery. In this paper, a multiscale CNN (MCNN) framework is proposed to solve this problem. In MCNN, a network structure containing dual branches, a fixed-scale net (F-net) and a varied-scale net (V-net), is constructed, and the parameters are shared by the F-net and V-net. The images and their rescaled versions are fed into the F-net and V-net, respectively, allowing us to simultaneously train the shared network weights on multiscale images. Furthermore, to ensure that the features extracted by MCNN are scale invariant, a similarity measure layer is added to MCNN, which forces the two feature vectors extracted from an image and its corresponding rescaled image to be as close as possible during training. To demonstrate the effectiveness of the proposed method, we compared the results obtained on three widely used remote sensing data sets: the UC Merced data set, the aerial image data set, and the Google data set of SIRI-WHU. The results confirm that the proposed method performs significantly better than the other state-of-the-art scene classification methods.
Record number: A2018-556
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2848473
Online publication date: 26/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2848473
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91660
in IEEE Transactions on Geoscience and Remote Sensing > vol 56 n° 12 (December 2018) . - pp 7109 - 7121 [article]
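The abstract above describes a dual-branch network in which a fixed-scale branch (F-net) and a varied-scale branch (V-net) share the same weights, and a similarity measure layer pulls the two feature vectors together. The following is a minimal sketch of that idea, not the authors' implementation, assuming PyTorch: the names SharedBackbone, mcnn_loss, lam, and scale are hypothetical, the small backbone is a placeholder, and the mean-squared-error term stands in for the paper's similarity measure layer, whose exact form is not given in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedBackbone(nn.Module):
    # Small placeholder CNN used for both branches; the same weights process both inputs.
    def __init__(self, num_classes=21, feat_dim=128):  # e.g. 21 classes as in UC Merced
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, feat_dim)        # feature vector compared across scales
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.embed(self.features(x).flatten(1))
        return f, self.classifier(f)


def mcnn_loss(model, images, labels, scale=0.5, lam=0.1):
    # Classification loss on the original-scale branch plus a similarity term
    # that pulls the original-scale and rescaled-image features together.
    rescaled = F.interpolate(images, scale_factor=scale, mode="bilinear",
                             align_corners=False)
    f_fixed, logits = model(images)    # "F-net" branch: original-scale input
    f_varied, _ = model(rescaled)      # "V-net" branch: rescaled input, same weights
    cls_loss = F.cross_entropy(logits, labels)
    sim_loss = F.mse_loss(f_fixed, f_varied)        # stand-in for the similarity layer
    return cls_loss + lam * sim_loss


# Example use (hypothetical batch): minimize this loss during training; at test
# time only the original-scale forward pass and its classifier output are needed.
model = SharedBackbone()
loss = mcnn_loss(model, torch.randn(8, 3, 224, 224), torch.randint(0, 21, (8,)))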