Descriptor
Documents available in this category (356)
Land cover mapping at very high resolution with rotation equivariant CNNs : Towards small yet accurate models / Diego Marcos in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models
Document type: Article/Communication
Authors: Diego Marcos, author; Michele Volpi, author; Benjamin Kellenberger, author; Devis Tuia, author
Publication year: 2018
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] Baden-Württemberg (Germany)
[IGN terms] land cover map
[IGN terms] semantic enrichment
[IGN terms] digital image filtering
[IGN terms] ultra-high-resolution image
[IGN terms] digital surface model
[IGN terms] orthoimage
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
Abstract: (Author) In remote sensing images, the absolute orientation of objects is arbitrary. Depending on an object's orientation and on a sensor's flight path, objects of the same semantic class can be observed in different orientations in the same image. Equivariance to rotation, in this context understood as responding with a rotated semantic label map when subject to a rotation of the input image, is therefore a very desirable feature, in particular for high capacity models, such as Convolutional Neural Networks (CNNs). If rotation equivariance is encoded in the network, the model is confronted with a simpler task and does not need to learn specific (and redundant) weights to address rotated versions of the same object class. In this work we propose a CNN architecture called Rotation Equivariant Vector Field Network (RotEqNet) to encode rotation equivariance in the network itself. By using rotating convolutions as building blocks and passing only the values corresponding to the maximally activating orientation throughout the network in the form of orientation encoding vector fields, RotEqNet treats rotated versions of the same object with the same filter bank and therefore achieves state-of-the-art performances even when using very small architectures trained from scratch. We test RotEqNet in two challenging sub-decimeter resolution semantic labeling problems, and show that we can perform better than a standard CNN while requiring one order of magnitude fewer parameters.
Record number: A2018-491
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.01.021
Online publication date: 19/02/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.01.021
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91227
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) [article]
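The core mechanism the abstract describes — convolving with rotated copies of one filter and keeping only the maximally activating orientation — can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: `cross_correlate` and `rot_equiv_response` are our names, only 90° orientation steps are sampled (the paper uses finer sampling and also propagates the winning orientation as a vector field), but the max-over-orientations response is exactly equivariant to 90° rotations of the input.

```python
import numpy as np

def cross_correlate(img, ker):
    """Plain 2-D valid cross-correlation (no padding, stride 1)."""
    kh, kw = ker.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def rot_equiv_response(img, ker, n_orient=4):
    """Correlate with all rotated copies of one filter, then keep the
    per-pixel maximum over orientations (RotEqNet-style rotating conv)."""
    stack = np.stack([cross_correlate(img, np.rot90(ker, k))
                      for k in range(n_orient)])
    return stack.max(axis=0)
```

Because the set of rotated filters is closed under 90° rotation, rotating the input image simply permutes the orientation responses, so `rot_equiv_response(np.rot90(img), ker)` equals `np.rot90(rot_equiv_response(img, ker))` for a square image.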
Copies (3)
Barcode     Call no.  Medium   Location              Section        Availability
081-2018111 RAB       Journal  Documentation centre  Reserve L003   Available
081-2018113 DEP-EXM   Journal  LASTIG                Unit deposit   Not for loan
081-2018112 DEP-EAF   Journal  Nancy                 Unit deposit   Not for loan

Multi-scale object detection in remote sensing imagery with convolutional neural networks / Zhipeng Deng in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Multi-scale object detection in remote sensing imagery with convolutional neural networks
Document type: Article/Communication
Authors: Zhipeng Deng, author; Hao Sun, author; Shilin Zhou, author; et al.
Publication year: 2018
Pages: pp. 3 - 22
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] airport
[IGN terms] object detection
[IGN terms] aerial image
[IGN terms] optical image
[IGN terms] Sentinel-SAR image
[IGN terms] convolutional neural network
[IGN terms] city
Abstract: (Author) Automatic detection of multi-class objects in remote sensing images is a fundamental but challenging problem for remote sensing image analysis. Traditional methods are based on hand-crafted or shallow-learning-based features with limited representation power. Recently, deep learning algorithms, especially Faster region-based convolutional neural networks (FRCN), have shown much stronger detection power in the computer vision field. However, several challenges limit the application of FRCN to multi-class object detection in remote sensing images: (1) objects often appear at very different scales in remote sensing images, and FRCN with a fixed receptive field cannot match the scale variability of different objects; (2) objects in large-scale remote sensing images are relatively small in size and densely packed, and FRCN has poor localization performance for small objects; (3) manual annotation is generally expensive and the available manual annotations of objects for training FRCN are insufficient in number. To address these problems, this paper proposes a unified and effective method for simultaneously detecting multi-class objects in remote sensing images with large scale variability. First, we redesign the feature extractor by adopting Concatenated ReLU and Inception modules, which increase the variety of receptive field sizes. Detection is then performed by two sub-networks: a multi-scale object proposal network (MS-OPN) that generates object-like regions from several intermediate layers whose receptive fields match different object scales, and an accurate object detection network (AODN) that detects objects from fused feature maps, combining several feature maps so that small and densely packed objects produce a stronger response. For large-scale remote sensing images with limited manual annotations, we use cropped image blocks for training and augment them with re-scalings and rotations. The quantitative comparison results on the challenging NWPU VHR-10 data set, aircraft data set, Aerial-Vehicle data set and SAR-Ship data set show that our method is more accurate than existing algorithms and is effective for multi-modal remote sensing images.
Record number: A2018-488
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.003
Online publication date: 02/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91224
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018). - pp. 3 - 22 [article]
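The Concatenated ReLU (CReLU) the abstract mentions in the redesigned feature extractor is simple to illustrate: it keeps both the positive and the negative phase of each activation, doubling the number of channels. A minimal NumPy sketch (the function name and axis convention are ours, not from the paper):

```python
import numpy as np

def crelu(x, axis=0):
    """Concatenated ReLU: concatenate relu(x) and relu(-x) along `axis`.

    Both the positive and the negative phase of each feature channel are
    preserved, so the channel count doubles and no sign information is lost.
    """
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=axis)
```

In the paper's setting this feeds Inception-style branches with different kernel sizes, which is what varies the receptive field; CReLU itself only enriches the channel dimension.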
A new deep convolutional neural network for fast hyperspectral image classification / Mercedes Eugenia Paoletti in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: A new deep convolutional neural network for fast hyperspectral image classification
Document type: Article/Communication
Authors: Mercedes Eugenia Paoletti, author; Juan Mario Haut, author; Javier Plaza, author; Antonio J. Plaza, author
Publication year: 2018
Pages: pp. 120 - 147
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] neural network classification
[IGN terms] 3D localized data
[IGN terms] hyperspectral image
[IGN terms] convolutional neural network
Abstract: (Author) Artificial neural networks (ANNs) have been widely used for the analysis of remotely sensed imagery. In particular, convolutional neural networks (CNNs) are gaining more and more attention in this field. CNNs have proved to be very effective in areas such as image recognition and classification, especially for the classification of large sets composed of two-dimensional images. However, their application to multispectral and hyperspectral images faces some challenges, especially related to the processing of the high-dimensional information contained in multidimensional data cubes. This results in a significant increase in computation time. In this paper, we present a new CNN architecture for the classification of hyperspectral images. The proposed CNN is a 3-D network that uses both spectral and spatial information. It also implements a border mirroring strategy to effectively process border areas in the image, and has been efficiently implemented using graphics processing units (GPUs). Our experimental results indicate that the proposed network performs accurately and efficiently, reducing computation time and increasing accuracy in the classification of hyperspectral images compared to other traditional ANN techniques.
Record number: A2018-492
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.11.021
Online publication date: 06/12/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.11.021
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91235
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018). - pp. 120 - 147 [article]
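The border mirroring strategy the abstract mentions addresses a practical issue with 3-D spatial-spectral CNNs: a patch centred on a border pixel would fall partly outside the cube. A minimal sketch of one plausible reading, using reflect padding so every pixel gets a full spatial window (`extract_cube` and its parameters are our illustration, not the authors' code):

```python
import numpy as np

def extract_cube(cube, row, col, window=5):
    """Extract a (window, window, bands) spatial-spectral patch centred on
    (row, col), mirroring image borders so border pixels get full patches.

    cube: (H, W, bands) hyperspectral data cube.
    """
    pad = window // 2
    # Reflect-pad only the two spatial axes; the spectral axis is untouched.
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    # After padding, original pixel (row, col) sits at (row + pad, col + pad),
    # so the valid window starts at (row, col) in padded coordinates.
    return padded[row:row + window, col:col + window, :]
```

Each such patch is the 3-D input sample for the network, so every labelled pixel (including corners) contributes a training example of identical shape.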
Pan-sharpening via deep metric learning / Yinghui Xing in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Pan-sharpening via deep metric learning
Document type: Article/Communication
Authors: Yinghui Xing, author; Min Wang, author; Shuyuan Yang, author; Licheng Jiao, author
Publication year: 2018
Pages: pp. 165 - 183
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] neural network classification
[IGN terms] multiband image
[IGN terms] panchromatic image
[IGN terms] Quickbird image
[IGN terms] Worldview image
[IGN terms] pan-sharpening (image fusion)
[IGN terms] convolutional neural network
Abstract: (Author) Neighbor-embedding-based pan-sharpening methods have received increasing interest in recent years. However, image patches do not strictly follow similar structures in the shallow MultiSpectral (MS) and PANchromatic (PAN) image spaces, which biases the pan-sharpening. In this paper, a new deep metric learning method is proposed to learn a refined geometric multi-manifold neighbor embedding, by exploring the hierarchical features of patches via multiple nonlinear deep neural networks. First, down-sampled PAN images from different satellites are divided into a large number of training image patches, which are grouped coarsely according to their shallow geometric structures. Afterwards, several Stacked Sparse AutoEncoders (SSAE) with similar structures are separately constructed and trained on these grouped patches. During fusion, image patches of the source PAN image pass through the networks to extract features, formulating a deep distance metric and thus deriving their geometric labels. Patches with the same geometric labels are then grouped to form geometric manifolds. Finally, the assumption that MS patches and PAN patches form the same geometric manifolds in two distinct spaces is cast on geometric groups to formulate a geometric multi-manifold embedding for estimating high-resolution MS image patches. Experiments are conducted on datasets acquired by different satellites. The experimental results demonstrate that the proposed method obtains better fusion results than its counterparts in terms of visual quality and quantitative evaluations.
Record number: A2018-493
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.01.016
Online publication date: 17/02/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.01.016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91236
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018). - pp. 165 - 183 [article]
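The geometric-labeling step the abstract describes — assigning each PAN patch to a manifold group under the learned deep metric — reduces to nearest-centroid assignment in feature space. A heavily simplified sketch: here the SSAE features are stand-ins (precomputed vectors), the metric is plain Euclidean distance rather than the learned one, and all names are ours:

```python
import numpy as np

def assign_geometric_labels(patch_features, group_centroids):
    """Assign each patch to the geometric manifold whose centroid is
    nearest in feature space.

    patch_features: (n_patches, d) feature vectors (in the paper, SSAE
                    features; here, any precomputed vectors).
    group_centroids: (n_groups, d) one representative per geometric group.
    Returns an (n_patches,) array of group indices.
    """
    # Pairwise distances, shape (n_patches, n_groups), via broadcasting.
    dists = np.linalg.norm(
        patch_features[:, None, :] - group_centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

Patches sharing a label then form one geometric manifold, on which the MS/PAN shared-manifold assumption is applied to estimate the high-resolution MS patches.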
Semantic labeling in very high resolution images via a self-cascaded convolutional neural network / Yoncheng Liu in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Semantic labeling in very high resolution images via a self-cascaded convolutional neural network
Document type: Article/Communication
Authors: Yoncheng Liu, author; Bin Fan, author; Lingfeng Wang, author; Jun Bai, author; Shiming Xiang, author; Chunhong Pan, author
Publication year: 2018
Pages: pp. 78 - 95
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] very high resolution image
[IGN terms] convolutional neural network
[IGN terms] urban area
Abstract: (Author) Semantic labeling of very high resolution (VHR) images in urban areas is of significant importance in a wide range of remote sensing applications. However, many confusing manmade objects and intricate fine-structured objects make it very difficult to obtain both coherent and accurate labeling results. For this challenging task, we propose a novel deep model with convolutional neural networks (CNNs), i.e., an end-to-end self-cascaded network (ScasNet). Specifically, for confusing manmade objects, ScasNet improves labeling coherence with sequential global-to-local context aggregation. Technically, multi-scale contexts are captured on the output of a CNN encoder, and then successively aggregated in a self-cascaded manner. Meanwhile, for fine-structured objects, ScasNet boosts labeling accuracy with a coarse-to-fine refinement strategy, progressively refining the target objects using the low-level features learned by the CNN's shallow layers. In addition, to correct the latent fitting residual caused by multi-feature fusion inside ScasNet, a dedicated residual correction scheme is proposed, which greatly improves the effectiveness of ScasNet. Extensive experimental results on three public datasets, including two challenging benchmarks, show that ScasNet achieves state-of-the-art performance.
Record number: A2018-490
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.12.007
Online publication date: 21/12/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.12.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91226
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018). - pp. 78 - 95 [article]
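The sequential global-to-local context aggregation can be sketched numerically: capture contexts at several scales (coarse average pooling of the encoder output), then fuse them into a running aggregate from coarsest to finest. This NumPy sketch is illustrative only; the additive fusion stands in for ScasNet's learned self-cascaded fusion and residual correction, and all names are ours:

```python
import numpy as np

def avg_pool(x, k):
    """k x k average pooling (assumes dimensions divisible by k)."""
    H, W = x.shape
    return x.reshape(H // k, k, W // k, k).mean(axis=(1, 3))

def upsample(x, k):
    """Nearest-neighbour upsampling by factor k."""
    return np.kron(x, np.ones((k, k)))

def self_cascaded_aggregation(feat, scales=(4, 2, 1)):
    """Aggregate multi-scale contexts sequentially, global to local.

    Coarser scales summarize wider context; each context map is fused
    into the running aggregate before the next, finer one is added.
    """
    agg = np.zeros_like(feat, dtype=float)
    for s in scales:  # coarse -> fine, as in the global-to-local cascade
        ctx = upsample(avg_pool(feat, s), s) if s > 1 else feat
        agg = agg + ctx  # additive fusion stands in for learned fusion
    return agg
```

The ordering matters: fusing global context first lets coarse semantic evidence constrain the finer scales, which is the coherence argument made in the abstract.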
Further records in this category:
A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification / Wei Han in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
A 3D convolutional neural network method for land cover classification using LiDAR and multi-temporal Landsat imagery / Zewei Xu in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images / Michele Volpi in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
Estimation of winter wheat crop growth parameters using time series Sentinel-1A SAR data / P. Kumar in Geocarto international, vol 33 n° 9 (September 2018)
Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning / Rui Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 143 (September 2018)
Image-based synthesis for deep 3D human pose estimation / Grégory Rogez in International journal of computer vision, vol 126 n° 9 (September 2018)
A deep learning approach to DTM extraction from imagery using rule-based training labels / Caroline M. Gevaert in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
A deep neural network with spatial pooling (DNNSP) for 3-D point cloud classification / Zhen Wang in IEEE Transactions on geoscience and remote sensing, vol 56 n° 8 (August 2018)
Exploring geo-tagged photos for land cover validation with deep learning / Hanfa Xing in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
Hierarchical cellular automata for visual saliency / Yao Qin in International journal of computer vision, vol 126 n° 7 (July 2018)
A light and faster regional convolutional neural network for object detection in optical remote sensing images / Peng Ding in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
Classification à très large échelle d'images satellites à très haute résolution spatiale par réseaux de neurones convolutifs / Tristan Postadjian in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (June - September 2018)
Fusion tardive d'images SPOT 6/7 et de données multitemporelles Sentinel-2 pour la détection de la tache urbaine / Cyril Wendl in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (June - September 2018)
Classifying airborne LiDAR point clouds via deep features learned by a multi-scale convolutional neural network / Ruibin Zhao in International journal of geographical information science IJGIS, vol 32 n° 5-6 (May - June 2018)
Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification / Tao Liu in ISPRS Journal of photogrammetry and remote sensing, vol 139 (May 2018)
Do semantic parts emerge in convolutional neural networks? / Abel Gonzalez-Garcia in International journal of computer vision, vol 126 n° 5 (May 2018)
Large-scale supervised learning for 3D Point cloud labeling : Semantic3d.Net / Timo Hackel in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 5 (May 2018)
Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification / Rama Rao Nidamanuri in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
Crowdsourcing the character of a place : Character‐level convolutional networks for multilingual geographic text classification / Benjamin Adams in Transactions in GIS, vol 22 n° 2 (April 2018)