Descriptor
Documents available in this category (1703)
Scene classification based on multiscale convolutional neural network / Yanfei Liu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Scene classification based on multiscale convolutional neural network
Document type: Article/Communication
Authors: Yanfei Liu, Author; Yanfei Zhong, Author; Qianqing Qin, Author
Publication year: 2018
Article pages: pp 7109 - 7121
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] multidimensional image
[IGN terms] satellite image
[IGN terms] similarity measure
[IGN terms] object-oriented model
Abstract: (author) With the large number of high-spatial-resolution images now available, scene classification aimed at obtaining high-level semantic concepts has drawn great attention. Convolutional neural networks (CNNs), which are typical deep learning methods, have been widely studied to automatically learn image features for scene classification. However, scene classification based on CNNs is still difficult due to the scale variation of the objects in remote sensing imagery. In this paper, a multiscale CNN (MCNN) framework is proposed to solve this problem. In MCNN, a network structure containing dual branches of a fixed-scale net (F-net) and a varied-scale net (V-net) is constructed, and the parameters are shared by the F-net and V-net. The images and their rescaled versions are fed into the F-net and V-net, respectively, allowing the shared network weights to be trained simultaneously on multiscale images. Furthermore, to ensure that the features extracted by MCNN are scale invariant, a similarity measure layer is added to MCNN, which forces the two feature vectors extracted from an image and its rescaled counterpart to be as close as possible during training. To demonstrate the effectiveness of the proposed method, we compared the results obtained on three widely used remote sensing data sets: the UC Merced data set, the aerial image data set, and the Google data set of SIRI-WHU. The results confirm that the proposed method performs significantly better than the other state-of-the-art scene classification methods.
Record number: A2018-556
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2848473
Online publication date: 26/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2848473
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91660
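The dual-branch, weight-sharing idea in the abstract can be illustrated outside any deep learning framework. The sketch below is a toy NumPy stand-in, not the authors' implementation: a shared "trunk" maps images of any size to a fixed-length feature vector via global pooling, and the similarity-measure term penalizes the distance between the features of an image and its rescaled copy.

```python
import numpy as np

def shared_features(image, w):
    """Toy shared trunk: a pointwise nonlinearity with shared weights w,
    followed by global average pooling over the spatial dimensions, so
    images of different sizes yield same-length feature vectors."""
    h = np.tanh(image * w)       # w is broadcast over H x W x C
    return h.mean(axis=(0, 1))   # global pooling -> length-C vector

def similarity_loss(image, rescaled, w):
    """Distance between F-net features (original image) and V-net features
    (rescaled image); both branches use the same weights w."""
    f_fixed = shared_features(image, w)
    f_varied = shared_features(rescaled, w)
    return float(np.sum((f_fixed - f_varied) ** 2))

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
w = rng.random(3)
# Nearest-neighbour 2x upsampling preserves per-channel statistics exactly,
# so the similarity term vanishes for this toy trunk.
up = img.repeat(2, axis=0).repeat(2, axis=1)
loss_same = similarity_loss(img, up, w)
loss_other = similarity_loss(img, rng.random((64, 64, 3)), w)
```

In a real MCNN the loss would be added to the classification loss and minimized by backpropagation; here the point is only that weight sharing plus the similarity term pulls multiscale features together.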
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 7109 - 7121 [article]

Multi-scale object detection in remote sensing imagery with convolutional neural networks / Zhipeng Deng in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Multi-scale object detection in remote sensing imagery with convolutional neural networks
Document type: Article/Communication
Authors: Zhipeng Deng, Author; Hao Sun, Author; Shilin Zhou, Author; et al.
Publication year: 2018
Article pages: pp 3 - 22
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] airport
[IGN terms] object detection
[IGN terms] aerial image
[IGN terms] optical image
[IGN terms] Sentinel SAR image
[IGN terms] convolutional neural network
[IGN terms] city
Abstract: (author) Automatic detection of multi-class objects in remote sensing images is a fundamental but challenging problem in remote sensing image analysis. Traditional methods are based on hand-crafted or shallow-learning-based features with limited representation power. Recently, deep learning algorithms, especially Faster region-based convolutional neural networks (FRCN), have shown much stronger detection power in the computer vision field. However, several challenges limit the application of FRCN to multi-class object detection in remote sensing images: (1) objects often appear at very different scales in remote sensing images, and FRCN with a fixed receptive field cannot match the scale variability of different objects; (2) objects in large-scale remote sensing images are relatively small and densely packed, and FRCN has poor localization performance on small objects; (3) manual annotation is generally expensive, and the available manual annotations of objects for training FRCN are not sufficient in number. To address these problems, this paper proposes a unified and effective method for simultaneously detecting multi-class objects in remote sensing images with large scale variability. Firstly, we redesign the feature extractor by adopting Concatenated ReLU and Inception modules, which increases the variety of receptive field sizes. The detection is then performed by two sub-networks: a multi-scale object proposal network (MS-OPN) for object-like region generation from several intermediate layers, whose receptive fields match different object scales, and an accurate object detection network (AODN) for object detection based on fused feature maps, which combines several feature maps so that small and densely packed objects produce a stronger response. For large-scale remote sensing images with limited manual annotations, we use cropped image blocks for training and augment them with rescalings and rotations.
The quantitative comparison results on the challenging NWPU VHR-10 data set, aircraft data set, Aerial-Vehicle data set, and SAR-Ship data set show that our method is more accurate than existing algorithms and is effective for multi-modal remote sensing images.
Record number: A2018-488
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.003
Online publication date: 02/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91224
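The training-data preparation the abstract describes (cropped image blocks augmented with rescalings and rotations) can be sketched as follows. This is an illustrative NumPy version with nearest-neighbour rescaling; the block size, scale factors, and rotation set are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def rescale_nn(block, s):
    """Nearest-neighbour rescaling by factor s (illustrative only)."""
    h, w = block.shape[:2]
    ys = np.minimum((np.arange(int(h * s)) / s).astype(int), h - 1)
    xs = np.minimum((np.arange(int(w * s)) / s).astype(int), w - 1)
    return block[ys][:, xs]

def augmented_blocks(image, size=64, scales=(0.5, 1.0), rots=(0, 1, 2, 3)):
    """Crop non-overlapping size x size blocks from a large image, then
    augment each crop with rescalings and 90-degree rotations."""
    out = []
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            crop = image[y:y + size, x:x + size]
            for s in scales:
                for k in rots:
                    out.append(np.rot90(rescale_nn(crop, s), k))
    return out

img = np.zeros((128, 192, 3))
blocks = augmented_blocks(img)   # 2 x 3 crops, 2 scales, 4 rotations each
```

Each source crop yields eight training samples here; a production pipeline would typically use proper interpolation and random rather than exhaustive augmentation.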
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) . - pp 3 - 22 [article]
Reservation
Reserve this document
Copies (3)
Barcode       Call no.   Medium    Location               Section           Availability
081-2018111   RAB        Journal   Documentation centre   Reserve L003      Available
081-2018113   DEP-EXM    Journal   LASTIG                 Unit deposit      Not for loan
081-2018112   DEP-EAF    Journal   Nancy                  Unit deposit      Not for loan

Semantic labeling in very high resolution images via a self-cascaded convolutional neural network / Yoncheng Liu in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Semantic labeling in very high resolution images via a self-cascaded convolutional neural network
Document type: Article/Communication
Authors: Yoncheng Liu, Author; Bin Fan, Author; Lingfeng Wang, Author; Jun Bai, Author; Shiming Xiang, Author; Chunhong Pan, Author
Publication year: 2018
Article pages: pp 78 - 95
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] very high resolution image
[IGN terms] convolutional neural network
[IGN terms] urban area
Abstract: (author) Semantic labeling for very high resolution (VHR) images in urban areas is of significant importance in a wide range of remote sensing applications. However, many confusing manmade objects and intricate fine-structured objects make it very difficult to obtain both coherent and accurate labeling results. For this challenging task, we propose a novel deep model with convolutional neural networks (CNNs): an end-to-end self-cascaded network (ScasNet). Specifically, for confusing manmade objects, ScasNet improves labeling coherence with sequential global-to-local context aggregation. Technically, multi-scale contexts are captured on the output of a CNN encoder and then successively aggregated in a self-cascaded manner. Meanwhile, for fine-structured objects, ScasNet boosts labeling accuracy with a coarse-to-fine refinement strategy: it progressively refines the target objects using the low-level features learned by the CNN's shallow layers. In addition, to correct the latent fitting residual caused by multi-feature fusion inside ScasNet, a dedicated residual correction scheme is proposed, which greatly improves the effectiveness of ScasNet. Extensive experimental results on three public datasets, including two challenging benchmarks, show that ScasNet achieves state-of-the-art performance.
Record number: A2018-490
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.12.007
Online publication date: 21/12/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.12.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91226
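The global-to-local aggregation order described for ScasNet can be caricatured with plain array operations. The sketch below is a drastic simplification: box-filter means stand in for the multi-scale contexts, and an elementwise-mean fusion is an assumption of this sketch (the paper's fusion is part of the network architecture).

```python
import numpy as np

def box_mean(feat, k):
    """Mean over k x k windows with edge clamping: one 'context' scale.
    Larger k = coarser, more global context (illustrative stand-in)."""
    h, w = feat.shape
    out = np.empty_like(feat, dtype=float)
    for i in range(h):
        for j in range(w):
            y0, x0 = max(0, i - k // 2), max(0, j - k // 2)
            out[i, j] = feat[y0:y0 + k, x0:x0 + k].mean()
    return out

def self_cascade(feat, ks=(7, 5, 3, 1)):
    """Aggregate contexts successively, from the most global window to the
    most local one, using elementwise-mean fusion for this sketch."""
    agg = box_mean(feat, ks[0])
    for k in ks[1:]:
        agg = 0.5 * (agg + box_mean(feat, k))
    return agg

feat = np.full((8, 8), 2.0)
smoothed = self_cascade(feat)   # constant input stays constant
```

The point is only the ordering: coarse context is injected first, and each finer context refines the running aggregate, mirroring the sequential global-to-local scheme.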
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) . - pp 78 - 95 [article]
Reservation
Reserve this document
Copies (3)
Barcode       Call no.   Medium    Location               Section           Availability
081-2018111   RAB        Journal   Documentation centre   Reserve L003      Available
081-2018113   DEP-EXM    Journal   LASTIG                 Unit deposit      Not for loan
081-2018112   DEP-EAF    Journal   Nancy                  Unit deposit      Not for loan

A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification / Wei Han in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification
Document type: Article/Communication
Authors: Wei Han, Author; Ruyi Feng, Author; Lizhe Wang, Author; Yafan Cheng, Author
Publication year: 2018
Article pages: pp 23 - 43
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] sensitivity analysis
[IGN terms] deep learning
[IGN terms] semi-supervised classification
[IGN terms] convolutional neural network
[IGN terms] scene
Abstract: (author) High-resolution remote sensing (HRRS) image scene classification plays a crucial role in a wide range of applications and has been receiving significant attention. Recently, remarkable efforts have been made to develop a variety of approaches for HRRS scene classification, wherein deep-learning-based methods have achieved considerable performance in comparison with state-of-the-art methods. However, deep-learning-based methods face a severe limitation: a great number of manually annotated HRRS samples are needed to obtain a reliable model, and there are still not sufficient annotated datasets in the field of remote sensing. In addition, it is a challenge to build a large-scale HRRS image dataset due to the abundant diversity and variation in HRRS images. To address this problem, we propose a semi-supervised generative framework (SSGF), which combines deep learning features, a self-label technique, and a discriminative evaluation method to complete the tasks of scene classification and dataset annotation. On this basis, we further develop an extended algorithm (SSGA-E) and evaluate it in dedicated experiments. The experimental results show that SSGA-E outperforms most fully-supervised and semi-supervised methods: it achieved the third-best accuracy on the UCM dataset and the second-best accuracy on the WHU-RS, NWPU-RESISC45, and AID datasets. These impressive results demonstrate that the proposed SSGF and its extension are effective at addressing the lack of annotated HRRS datasets: they can learn valuable information from unlabeled samples to improve classification ability and obtain a reliable annotated dataset for supervised learning.
Record number: A2018-489
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.11.004
Online publication date: 14/11/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.11.004
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91225
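The self-label idea in the abstract — pseudo-label the unlabeled samples a classifier is confident about, then retrain on the enlarged set — can be sketched with a nearest-centroid classifier standing in for the deep features and discriminative evaluation. The margin threshold and round count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def self_label(X_lab, y_lab, X_unlab, margin=0.5, rounds=3):
    """Each round: fit class centroids on labelled plus confidently
    pseudo-labelled points, then re-assign pseudo-labels to unlabelled
    points whose nearest centroid is clearly closer than the second
    (-1 marks points that remain unlabelled)."""
    classes = np.unique(y_lab)
    pseudo = np.full(len(X_unlab), -1)
    for _ in range(rounds):
        keep = pseudo >= 0
        X = np.vstack([X_lab, X_unlab[keep]])
        y = np.concatenate([y_lab, pseudo[keep]])
        cent = np.stack([X[y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X_unlab[:, None, :] - cent[None, :, :], axis=2)
        near = np.sort(d, axis=1)
        confident = near[:, 0] < margin * near[:, 1]
        pseudo = np.where(confident, classes[np.argmin(d, axis=1)], -1)
    return pseudo

rng = np.random.default_rng(1)
a = rng.normal((0.0, 0.0), 0.5, size=(50, 2))   # class 0 samples
b = rng.normal((10.0, 10.0), 0.5, size=(50, 2)) # class 1 samples
X_unlab = np.vstack([a, b])
X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = np.array([0, 1])
pseudo = self_label(X_lab, y_lab, X_unlab)
```

With two well-separated clusters and one labeled seed per class, every unlabeled point is confidently and correctly pseudo-labeled, which is the best case the framework exploits.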
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) . - pp 23 - 43 [article]
Reservation
Reserve this document
Copies (3)
Barcode       Call no.   Medium    Location               Section           Availability
081-2018111   RAB        Journal   Documentation centre   Reserve L003      Available
081-2018113   DEP-EXM    Journal   LASTIG                 Unit deposit      Not for loan
081-2018112   DEP-EAF    Journal   Nancy                  Unit deposit      Not for loan

Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images / Michele Volpi in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
[article]
Title: Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images
Document type: Article/Communication
Authors: Michele Volpi, Author; Devis Tuia, Author
Publication year: 2018
Article pages: pp 48 - 60
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] conditional random field
[IGN terms] aerial image
[IGN terms] orthoimage
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
Abstract: (author) When approaching the semantic segmentation of overhead imagery in the decimeter spatial resolution range, successful strategies usually combine powerful methods for learning the visual appearance of the semantic classes (e.g. convolutional neural networks) with strategies for spatial regularization (e.g. graphical models such as conditional random fields). In this paper, we propose a method to learn evidence in the form of semantic class likelihoods, semantic boundaries across classes, and shallow-to-deep visual features, each modeled by a multi-task convolutional neural network architecture. We combine this bottom-up information with top-down spatial regularization encoded by a conditional random field model that optimizes the label space across a hierarchy of segments, with constraints related to structural, spatial, and data-dependent pairwise relationships between regions. Our results show that such a strategy provides better regularization than a series of strong baselines reflecting state-of-the-art technologies. The proposed strategy offers a flexible and principled framework for including several sources of visual and structural information, while allowing for different degrees of spatial regularization that account for priors about the expected output structures.
Record number: A2018-392
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.06.007
Online publication date: 05/07/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.06.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90826
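The combination of bottom-up class scores with top-down spatial regularization can be caricatured by mean-field-style smoothing on a 4-connected pixel grid. This is a drastic simplification of the hierarchical CRF in the abstract; the mixing weight `beta` and iteration count are assumptions of the sketch.

```python
import numpy as np

def regularize(unary, beta=0.5, iters=10):
    """Iteratively mix each pixel's per-class scores (unary, H x W x K)
    with the average of its 4-neighbours' current scores: a toy stand-in
    for pairwise spatial regularization. Returns the per-pixel argmax."""
    q = unary.astype(float).copy()
    for _ in range(iters):
        nb = np.zeros_like(q)
        nb[1:] += q[:-1]          # neighbour above
        nb[:-1] += q[1:]          # neighbour below
        nb[:, 1:] += q[:, :-1]    # neighbour left
        nb[:, :-1] += q[:, 1:]    # neighbour right
        q = (1 - beta) * unary + beta * nb / 4.0
    return q.argmax(axis=2)

# A weakly supported, isolated 'class 1' pixel is smoothed away.
unary = np.zeros((5, 5, 2))
unary[..., 0] = 1.0
unary[2, 2] = (0.0, 0.6)
labels = regularize(unary)
```

The isolated pixel keeps its unary evidence but is outvoted by its neighbours, which is exactly the labeling-coherence effect spatial regularization is meant to provide.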
in ISPRS Journal of photogrammetry and remote sensing > vol 144 (October 2018) . - pp 48 - 60 [article]
Reservation
Reserve this document
Copies (3)
Barcode       Call no.   Medium    Location               Section           Availability
081-2018101   RAB        Journal   Documentation centre   Reserve L003      Available
081-2018103   DEP-EXM    Journal   LASTIG                 Unit deposit      Not for loan
081-2018102   DEP-EAF    Journal   Nancy                  Unit deposit      Not for loan

Further records in this category (permalinks):
- Towards a polyalgorithm for land use change detection / Rishu Saxena in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
- Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
- Image-based synthesis for deep 3D human pose estimation / Grégory Rogez in International journal of computer vision, vol 126 n° 9 (September 2018)
- Integration of ZY3-02 satellite laser altimetry data and stereo images for high-accuracy mapping / Guoyuan Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 9 (September 2018)
- Research on the estimation model of vegetation water content in halophyte leaves based on the newly developed vegetation indices / Zhe Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 9 (September 2018)
- Adaptive correlation filters with long-term and short-term memory for object tracking / Chao Ma in International journal of computer vision, vol 126 n° 8 (August 2018)
- ICARE-VEG: A 3D physics-based atmospheric correction method for tree shadows in urban areas / Karine R.M. Adeline in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
- Robust detection and affine rectification of planar homogeneous texture for scene understanding / Shahzor Ahmad in International journal of computer vision, vol 126 n° 8 (August 2018)
- Three-point-based solution for automated motion parameter estimation of a multi-camera indoor mapping system with planar motion constraint / Fangning He in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
- Hierarchical cellular automata for visual saliency / Yao Qin in International journal of computer vision, vol 126 n° 7 (July 2018)
- Label propagation with ensemble of pairwise geometric relations : towards robust large-scale retrieval of object instances / Xiaomeng Wu in International journal of computer vision, vol 126 n° 7 (July 2018)
- Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s) / Danna Gurari in International journal of computer vision, vol 126 n° 7 (July 2018)
- Real-time relative mobile target positioning using GPS-assisted stereo videogrammetry / Bahadir Ergun in Survey review, vol 50 n° 361 (July 2018)
- A review of accuracy assessment for object-based image analysis: from per pixel to per-polygon approaches [review article] / Su Ye in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
- Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
- Foreword to the theme issue on geospatial computer vision / Jan Dirk Wegner in ISPRS Journal of photogrammetry and remote sensing, vol 140 (June 2018)
- No-reference image quality assessment for image auto-denoising / Xiangfei Kong in International journal of computer vision, vol 126 n° 5 (May 2018)
- Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras / David Joseph Tan in International journal of computer vision, vol 126 n° 2-4 (April 2018)
- Space-time tree ensemble for action recognition and localization / Shugao Ma in International journal of computer vision, vol 126 n° 2-4 (April 2018)
- Traitement d'image en Python avec RSGISLib / Anonymous in Géomatique expert, n° 121 (March - April 2018)