ISPRS Journal of Photogrammetry and Remote Sensing / International Society for Photogrammetry and Remote Sensing (1980-). Vol 145, part A. Issue date: November 2018. Published: 01/11/2018
[Issue]
This is an issue of ISPRS Journal of Photogrammetry and Remote Sensing / International Society for Photogrammetry and Remote Sensing (1980-)
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability |
---|---|---|---|---|---|
081-2018111 | RAB | Journal | Centre de documentation | Reserve L003 | Available |
081-2018113 | DEP-EXM | Journal | LASTIG | Unit deposit | Not for loan |
081-2018112 | DEP-EAF | Journal | Nancy | Unit deposit | Not for loan |
Indexed articles
Multi-scale object detection in remote sensing imagery with convolutional neural networks / Zhipeng Deng in ISPRS Journal of Photogrammetry and Remote Sensing, vol 145, part A (November 2018)
[article]
Title: Multi-scale object detection in remote sensing imagery with convolutional neural networks. Document type: Article/Communication. Authors: Zhipeng Deng; Hao Sun; Shilin Zhou; et al. Year of publication: 2018. Pages: pp 3-22. General note: Bibliography. Language: English (eng). Descriptors: [IGN subject headings] Image processing
[IGN terms] airport
[IGN terms] object detection
[IGN terms] aerial image
[IGN terms] optical image
[IGN terms] Sentinel SAR image
[IGN terms] convolutional neural network
[IGN terms] city
Abstract: (Author) Automatic detection of multi-class objects in remote sensing images is a fundamental but challenging problem in remote sensing image analysis. Traditional methods are based on hand-crafted or shallow-learning-based features with limited representation power. Recently, deep learning algorithms, especially the Faster Region-based Convolutional Neural Network (FRCN), have shown much stronger detection power in the computer vision field. However, several challenges limit the application of FRCN to multi-class object detection in remote sensing images: (1) objects often appear at very different scales in remote sensing images, and FRCN with a fixed receptive field cannot match the scale variability of different objects; (2) objects in large-scale remote sensing images are relatively small and densely packed, and FRCN localizes small objects poorly; (3) manual annotation is generally expensive, and the available manual annotations of objects for training FRCN are not sufficient in number. To address these problems, this paper proposes a unified and effective method for simultaneously detecting multi-class objects with large scale variability in remote sensing images. Firstly, we redesign the feature extractor by adopting Concatenated ReLU and Inception modules, which increases the variety of receptive field sizes. The detection is then performed by two sub-networks: a multi-scale object proposal network (MS-OPN) that generates object-like regions from several intermediate layers whose receptive fields match different object scales, and an accurate object detection network (AODN) that detects objects from fused feature maps, combining several feature maps so that small and densely packed objects produce stronger responses. For large-scale remote sensing images with limited manual annotations, we train on cropped image blocks and augment them with re-scalings and rotations.
The quantitative comparison results on the challenging NWPU VHR-10, aircraft, Aerial-Vehicle, and SAR-Ship data sets show that our method is more accurate than existing algorithms and is effective for multi-modal remote sensing images.
Record number: A2018-488. Author affiliation: non IGN. Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1016/j.isprsjprs.2018.04.003. Online publication date: 02/05/2018. Online: https://doi.org/10.1016/j.isprsjprs.2018.04.003. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91224
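The multi-branch design described in this abstract hinges on parallel convolution paths with different receptive fields. As a minimal illustration of that idea (not the authors' implementation; the layer stacks below are hypothetical), the receptive field of a stack of convolutions can be computed as:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    Each layer is (kernel_size, stride); the formula accumulates
    (k - 1) * jump, where jump is the product of earlier strides.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Hypothetical Inception-style branches sharing one input: each branch
# sees the input at a different scale, which is why concatenating their
# outputs lets one layer respond to both small and large objects.
branches = {
    "1x1": [(1, 1)],
    "3x3": [(3, 1)],
    "5x5": [(5, 1)],
}
scales = {name: receptive_field(stack) for name, stack in branches.items()}
print(scales)  # {'1x1': 1, '3x3': 3, '5x5': 5}
```

Stacking layers grows the field multiplicatively with stride, e.g. a 7-stride-2 conv followed by a 3-stride-1 conv already covers 11 input pixels.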
in ISPRS Journal of Photogrammetry and Remote Sensing > vol 145, part A (November 2018), pp 3-22 [article]
A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification / Wei Han in ISPRS Journal of Photogrammetry and Remote Sensing, vol 145, part A (November 2018)
[article]
Title: A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification. Document type: Article/Communication. Authors: Wei Han; Ruyi Feng; Lizhe Wang; Yafan Cheng. Year of publication: 2018. Pages: pp 23-43. General note: Bibliography. Language: English (eng). Descriptors: [IGN subject headings] Image processing
[IGN terms] sensitivity analysis
[IGN terms] deep learning
[IGN terms] semi-supervised classification
[IGN terms] convolutional neural network
[IGN terms] scene
Abstract: (Author) High-resolution remote sensing (HRRS) image scene classification plays a crucial role in a wide range of applications and has been receiving significant attention. Recently, remarkable efforts have been made to develop a variety of approaches for HRRS scene classification, wherein deep-learning-based methods have achieved considerable performance in comparison with state-of-the-art methods. However, deep-learning-based methods face a severe limitation: a great number of manually-annotated HRRS samples are needed to obtain a reliable model, and there are still not sufficient annotated datasets in the field of remote sensing. In addition, it is a challenge to build a large-scale HRRS image dataset because of the abundant diversity and variation in HRRS images. To address this problem, we propose a semi-supervised generative framework (SSGF), which combines deep learning features, a self-labeling technique, and a discriminative evaluation method to complete the tasks of scene classification and dataset annotation. On this basis, we further develop an extended algorithm (SSGA-E) and evaluate it in dedicated experiments. The experimental results show that SSGA-E outperforms most fully-supervised and semi-supervised methods: it achieved the third-best accuracy on the UCM dataset and the second-best accuracy on the WHU-RS, NWPU-RESISC45, and AID datasets. These impressive results demonstrate that the proposed SSGF and its extension effectively address the lack of annotated HRRS datasets: they learn valuable information from unlabeled samples to improve classification ability and produce a reliable annotated dataset for supervised learning.
Record number: A2018-489. Author affiliation: non IGN. Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1016/j.isprsjprs.2017.11.004. Online publication date: 14/11/2017. Online: https://doi.org/10.1016/j.isprsjprs.2017.11.004. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91225
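The self-labeling step described in this abstract can be sketched with a toy nearest-centroid classifier: each round, unlabeled points whose margin between the nearest and second-nearest class centroid is large enough are moved into the labeled pool. This is a minimal sketch under made-up data, not the SSGF itself (the paper uses deep features and a discriminative evaluation method):

```python
def centroid(points):
    """Mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def self_label(labeled, unlabeled, margin, rounds=3):
    """Grow the labeled pool by pseudo-labeling confident points.

    labeled:   dict mapping class -> list of points
    unlabeled: list of points
    margin:    required gap between 2nd-nearest and nearest centroid
    """
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        cents = {c: centroid(pts) for c, pts in labeled.items()}
        still = []
        for p in unlabeled:
            ranked = sorted((dist2(p, m), c) for c, m in cents.items())
            if len(ranked) > 1 and ranked[1][0] - ranked[0][0] >= margin:
                labeled[ranked[0][1]].append(p)  # confident: pseudo-label
            else:
                still.append(p)  # ambiguous: keep unlabeled
        unlabeled = still
    return labeled, unlabeled
```

Points equidistant from two classes never cross the margin and stay unlabeled, which is the discriminative-evaluation idea in miniature.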
in ISPRS Journal of Photogrammetry and Remote Sensing > vol 145, part A (November 2018), pp 23-43 [article]
Semantic labeling in very high resolution images via a self-cascaded convolutional neural network / Yongcheng Liu in ISPRS Journal of Photogrammetry and Remote Sensing, vol 145, part A (November 2018)
[article]
Title: Semantic labeling in very high resolution images via a self-cascaded convolutional neural network. Document type: Article/Communication. Authors: Yongcheng Liu; Bin Fan; Lingfeng Wang; Jun Bai; Shiming Xiang; Chunhong Pan. Year of publication: 2018. Pages: pp 78-95. General note: Bibliography. Language: English (eng). Descriptors: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] very high resolution image
[IGN terms] convolutional neural network
[IGN terms] urban area
Abstract: (Author) Semantic labeling of very high resolution (VHR) images of urban areas is of significant importance in a wide range of remote sensing applications. However, many confusing manmade objects and intricate fine-structured objects make it very difficult to obtain labeling results that are both coherent and accurate. For this challenging task, we propose a novel deep model based on convolutional neural networks (CNNs): an end-to-end self-cascaded network (ScasNet). Specifically, for confusing manmade objects, ScasNet improves labeling coherence with sequential global-to-local context aggregation. Technically, multi-scale contexts are captured at the output of a CNN encoder and then successively aggregated in a self-cascaded manner. Meanwhile, for fine-structured objects, ScasNet boosts labeling accuracy with a coarse-to-fine refinement strategy that progressively refines the target objects using the low-level features learned by the CNN's shallow layers. In addition, a dedicated residual correction scheme is proposed to correct the latent fitting residual caused by multi-feature fusion inside ScasNet, which greatly improves its effectiveness. Extensive experimental results on three public datasets, including two challenging benchmarks, show that ScasNet achieves state-of-the-art performance.
Record number: A2018-490. Author affiliation: non IGN. Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1016/j.isprsjprs.2017.12.007. Online publication date: 21/12/2017. Online: https://doi.org/10.1016/j.isprsjprs.2017.12.007. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91226
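The self-cascaded aggregation and residual correction described in this abstract can be caricatured with one-dimensional "feature maps": coarse contexts are folded into a running fusion one at a time, then a residual correction nudges the fused result toward the fine low-level features. A toy sketch with made-up numbers and elementwise averaging, not the actual ScasNet operators:

```python
def self_cascade(contexts, fine, alpha=0.5):
    """Sequentially fuse multi-scale contexts (coarsest first), then
    apply a toy residual correction toward the fine features.

    contexts: list of equal-length feature vectors, coarsest first
    fine:     low-level feature vector used for refinement
    alpha:    strength of the residual correction
    """
    fused = list(contexts[0])
    for ctx in contexts[1:]:
        # self-cascaded aggregation: fold the next context into the
        # running fusion instead of fusing everything at once
        fused = [(f + c) / 2 for f, c in zip(fused, ctx)]
    # residual correction: add back part of what the fusion lost
    return [f + alpha * (x - f) for f, x in zip(fused, fine)]

out = self_cascade([[4.0, 4.0], [2.0, 2.0], [0.0, 0.0]], fine=[2.0, 2.0])
print(out)  # [1.75, 1.75]
```

The sequential fold is the point: each scale is reconciled with the accumulated context before the next one is added, rather than concatenating all scales in one shot.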
in ISPRS Journal of Photogrammetry and Remote Sensing > vol 145, part A (November 2018), pp 78-95 [article]
Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models / Diego Marcos in ISPRS Journal of Photogrammetry and Remote Sensing, vol 145, part A (November 2018)
[article]
Title: Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models. Document type: Article/Communication. Authors: Diego Marcos; Michele Volpi; Benjamin Kellenberger; Devis Tuia. Year of publication: 2018. General note: Bibliography. Language: English (eng). Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] Baden-Württemberg (Germany)
[IGN terms] land cover map
[IGN terms] semantic enrichment
[IGN terms] digital image filtering
[IGN terms] ultra-high-resolution image
[IGN terms] digital surface model
[IGN terms] orthoimage
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
Abstract: (Author) In remote sensing images, the absolute orientation of objects is arbitrary. Depending on an object's orientation and on a sensor's flight path, objects of the same semantic class can be observed in different orientations in the same image. Equivariance to rotation, understood here as responding with a rotated semantic label map when the input image is rotated, is therefore a very desirable property, in particular for high-capacity models such as convolutional neural networks (CNNs). If rotation equivariance is encoded in the network, the model faces a simpler task and does not need to learn specific (and redundant) weights to address rotated versions of the same object class. In this work we propose a CNN architecture called Rotation Equivariant Vector Field Network (RotEqNet) that encodes rotation equivariance in the network itself. By using rotating convolutions as building blocks and passing only the values corresponding to the maximally activating orientation through the network, in the form of orientation-encoding vector fields, RotEqNet treats rotated versions of the same object with the same filter bank and therefore achieves state-of-the-art performance even with very small architectures trained from scratch. We test RotEqNet on two challenging sub-decimeter resolution semantic labeling problems and show that it outperforms a standard CNN while requiring an order of magnitude fewer parameters.
Record number: A2018-491. Author affiliation: non IGN. Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1016/j.isprsjprs.2018.01.021. Online publication date: 19/02/2018. Online: https://doi.org/10.1016/j.isprsjprs.2018.01.021. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91227
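A rotating convolution as described in this abstract applies one filter at several orientations and keeps only the strongest response together with the orientation that produced it. A minimal single-patch sketch, restricted to 90-degree rotations and made-up filter values (the real network rotates filters at finer angles and does this densely over the image):

```python
def rot90(k):
    """Rotate a square kernel 90 degrees clockwise."""
    return [list(row) for row in zip(*k[::-1])]

def rotating_conv(patch, kernel, n_orient=4):
    """Correlate one patch with rotated copies of one kernel and keep
    the max response plus the winning orientation in degrees.
    Returning (magnitude, angle) is the 'vector field' idea in miniature.
    """
    best_val, best_angle = float("-inf"), 0
    k = kernel
    for i in range(n_orient):
        resp = sum(k[r][c] * patch[r][c]
                   for r in range(len(k)) for c in range(len(k)))
        if resp > best_val:
            best_val, best_angle = resp, i * 90
        k = rot90(k)
    return best_val, best_angle

# A vertical-edge filter still fires on a horizontal edge: its 90-degree
# copy wins, so one filter bank covers all orientations of the object.
edge_filter = [[-1, 0, 1]] * 3
horizontal_edge = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]
print(rotating_conv(horizontal_edge, edge_filter))  # (6, 90)
```

Passing only (magnitude, angle) forward instead of all orientation responses is what keeps the parameter count small.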
in ISPRS Journal of Photogrammetry and Remote Sensing > vol 145, part A (November 2018) [article]
A new deep convolutional neural network for fast hyperspectral image classification / Mercedes Eugenia Paoletti in ISPRS Journal of Photogrammetry and Remote Sensing, vol 145, part A (November 2018)
[article]
Title: A new deep convolutional neural network for fast hyperspectral image classification. Document type: Article/Communication. Authors: Mercedes Eugenia Paoletti; Juan Mario Haut; Javier Plaza; Antonio J. Plaza. Year of publication: 2018. Pages: pp 120-147. General note: Bibliography. Language: English (eng). Descriptors: [IGN subject headings] Optical image processing
[IGN terms] neural network classification
[IGN terms] 3D localized data
[IGN terms] hyperspectral image
[IGN terms] convolutional neural network
Abstract: (Author) Artificial neural networks (ANNs) have been widely used for the analysis of remotely sensed imagery. In particular, convolutional neural networks (CNNs) are gaining more and more attention in this field. CNNs have proved very effective in areas such as image recognition and classification, especially for the classification of large sets composed of two-dimensional images. However, their application to multispectral and hyperspectral images faces challenges, especially related to processing the high-dimensional information contained in multidimensional data cubes, which results in a significant increase in computation time. In this paper, we present a new CNN architecture for the classification of hyperspectral images. The proposed CNN is a 3-D network that uses both spectral and spatial information. It also implements a border mirroring strategy to effectively process border areas in the image, and has been efficiently implemented using graphics processing units (GPUs). Our experimental results indicate that the proposed network performs accurately and efficiently, reducing computation time and increasing accuracy in the classification of hyperspectral images when compared to other traditional ANN techniques.
Record number: A2018-492. Author affiliation: non IGN. Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1016/j.isprsjprs.2017.11.021. Online publication date: 06/12/2017. Online: https://doi.org/10.1016/j.isprsjprs.2017.11.021. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91235
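The border-mirroring strategy mentioned in this abstract reflects pixel values around the image edge so that a convolution window centred on a border pixel still sees valid data. A one-dimensional sketch of the reflection (the paper applies this to image borders; the sequence and pad width here are made up):

```python
def mirror_pad(seq, pad):
    """Reflect-pad a sequence without repeating the edge element,
    so a window of radius `pad` is defined at every original index."""
    assert 0 < pad < len(seq)
    return seq[pad:0:-1] + list(seq) + seq[-2:-2 - pad:-1]

row = [1, 2, 3, 4]
print(mirror_pad(row, 2))  # [3, 2, 1, 2, 3, 4, 3, 2]
```

Compared with zero-padding, mirroring keeps border statistics close to the interior, which matters when border pixels carry real classes rather than background.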
in ISPRS Journal of Photogrammetry and Remote Sensing > vol 145, part A (November 2018), pp 120-147 [article]
Pan-sharpening via deep metric learning / Yinghui Xing in ISPRS Journal of Photogrammetry and Remote Sensing, vol 145, part A (November 2018)
[article]
Title: Pan-sharpening via deep metric learning. Document type: Article/Communication. Authors: Yinghui Xing; Min Wang; Shuyuan Yang; Licheng Jiao. Year of publication: 2018. Pages: pp 165-183. General note: Bibliography. Language: English (eng). Descriptors: [IGN subject headings] Optical image processing
[IGN terms] neural network classification
[IGN terms] multispectral image
[IGN terms] panchromatic image
[IGN terms] Quickbird image
[IGN terms] WorldView image
[IGN terms] pan-sharpening (image fusion)
[IGN terms] convolutional neural network
Abstract: (Author) Neighbor-embedding-based pan-sharpening methods have received increasing interest in recent years. However, image patches do not strictly follow similar structures in the shallow multispectral (MS) and panchromatic (PAN) image spaces, which biases the pan-sharpening. In this paper, a new deep metric learning method is proposed to learn a refined geometric multi-manifold neighbor embedding by exploring the hierarchical features of patches via multiple nonlinear deep neural networks. First, down-sampled PAN images from different satellites are divided into a large number of training image patches, which are grouped coarsely according to their shallow geometric structures. Afterwards, several Stacked Sparse AutoEncoders (SSAE) with similar structures are separately constructed and trained on these grouped patches. During fusion, image patches of the source PAN image pass through the networks to extract features that formulate a deep distance metric and thus derive their geometric labels. Patches with the same geometric label are then grouped to form geometric manifolds. Finally, the assumption that MS patches and PAN patches form the same geometric manifolds in two distinct spaces is cast on the geometric groups to formulate a geometric multi-manifold embedding for estimating high-resolution MS image patches. Experiments are conducted on datasets acquired by different satellites. The results demonstrate that the proposed method obtains better fusion results than its counterparts in terms of both visual quality and quantitative evaluation.
Record number: A2018-493. Author affiliation: non IGN. Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1016/j.isprsjprs.2018.01.016. Online publication date: 17/02/2018. Online: https://doi.org/10.1016/j.isprsjprs.2018.01.016. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91236
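The final estimation step in this abstract rests on the classic neighbor-embedding assumption: weights that reconstruct a PAN patch from its nearest PAN dictionary patches are reused to combine the co-indexed MS patches. A toy sketch with normalized inverse-distance weights standing in for the paper's learned deep metric; all patch values are made up:

```python
def neighbor_embed(pan_patch, pan_dict, ms_dict, k=2, eps=1e-8):
    """Estimate an MS patch by transferring reconstruction weights
    computed in PAN space (here: normalized inverse squared distances)
    onto the co-indexed MS dictionary patches."""
    # rank dictionary patches by squared distance in PAN space
    scored = sorted(
        (sum((a - b) ** 2 for a, b in zip(pan_patch, p)), i)
        for i, p in enumerate(pan_dict)
    )[:k]
    w = [1.0 / (d + eps) for d, _ in scored]
    total = sum(w)
    w = [x / total for x in w]          # weights sum to 1
    idx = [i for _, i in scored]
    dim = len(ms_dict[0])
    # apply the PAN-space weights to the paired MS patches
    return [sum(w[j] * ms_dict[idx[j]][t] for j in range(k))
            for t in range(dim)]

pan_dict = [[0.0, 0.0], [1.0, 1.0]]
ms_dict = [[10.0, 10.0], [20.0, 20.0]]
# A PAN patch equal to dictionary entry 0 maps almost exactly to ms_dict[0].
est = neighbor_embed([0.0, 0.0], pan_dict, ms_dict)  # est ≈ [10.0, 10.0]
```

The paper's contribution is precisely the distance used for ranking: features from the SSAEs replace the raw-pixel distance sketched here, so that the manifolds found in PAN space transfer more faithfully to MS space.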
in ISPRS Journal of Photogrammetry and Remote Sensing > vol 145, part A (November 2018), pp 165-183 [article]