Descriptor
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal > classification par réseau neuronal convolutif
Documents available in this category (336)
A novel intelligent classification method for urban green space based on high-resolution remote sensing images / Zhiyu Xu in Remote sensing, vol 12 n° 22 (December-1 2020)
[article]
Title: A novel intelligent classification method for urban green space based on high-resolution remote sensing images
Document type: Article/Communication
Authors: Zhiyu Xu; Yi Zhou; Shixin Wang; et al.
Year of publication: 2020
Pagination: n° 3845
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] analyse comparative
[Termes IGN] apprentissage profond
[Termes IGN] arbre urbain
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] espace vert
[Termes IGN] image à haute résolution
[Termes IGN] image Gaofen
[Termes IGN] milieu urbain
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] Pékin (Chine)
[Termes IGN] phénologie
[Termes IGN] précision de la classification
[Termes IGN] urbanisme
Abstract: (author) Real-time, accurate, and refined monitoring of urban green space is of great significance for the construction of the urban ecological environment and the improvement of urban ecological benefits. High-resolution imagery provides abundant information on ground objects, which makes the mapping of urban green surfaces more complicated. Existing classification methods struggle to meet the accuracy and automation requirements of high-resolution images. This paper proposes a deep learning classification method for urban green space based on phenological feature constraints, in order to make full use of the spectral and spatial information provided by high-resolution remote sensing images (GaoFen-2) acquired in different periods. Vegetation phenological features were added as auxiliary bands to the deep learning network for training and classification. We used HRNet (High-Resolution Network) as our model and introduced the Focal Tversky loss function to address the sample-imbalance problem. The experimental results show that introducing phenological features into HRNet training effectively improves urban green space classification accuracy by resolving the misclassification of evergreen and deciduous trees. The F1-score improvements for deciduous trees, evergreen trees, and grassland were 0.48%, 4.77%, and 3.93%, respectively, which shows that combining vegetation phenology with high-resolution remote sensing imagery can improve deep learning urban green space classification.
Record number: A2020-792
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article (nature-HAL: ArtAvecCL-RevueIntern)
DOI: 10.3390/rs12223845
Online publication date: 23/11/2020
Online: https://doi.org/10.3390/rs12223845
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96565
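The abstract above leans on the Focal Tversky loss to counter sample imbalance between green-space classes. As a point of reference, here is a minimal plain-Python sketch of that loss for a binary per-pixel map; the parameter names `alpha`, `beta`, and `gamma` are the conventional ones from the Focal Tversky literature, not taken from the paper's code.

```python
def focal_tversky_loss(probs, targets, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary segmentation map.

    probs   -- predicted foreground probabilities in [0, 1]
    targets -- ground-truth labels (0 or 1), same length as probs
    alpha   -- weight on false negatives (raise to penalise missed pixels)
    beta    -- weight on false positives
    gamma   -- focal exponent (< 1 focuses training on hard examples)
    """
    tp = sum(p * t for p, t in zip(probs, targets))          # soft true positives
    fn = sum((1 - p) * t for p, t in zip(probs, targets))    # soft false negatives
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))    # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma
```

With `alpha > beta`, missed foreground pixels (e.g. the under-represented evergreen class) cost more than false alarms, which is the lever such a loss offers against imbalance.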
in Remote sensing > vol 12 n° 22 (December-1 2020) . - n° 3845 [article]
Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss / Xianwei Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
[article]
Title: Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss
Document type: Article/Communication
Authors: Xianwei Zheng; Linxi Huan; Gui-Song Xia; Jianya Gong
Year of publication: 2020
Pagination: pp 15-28
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification basée sur les régions
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] contour
[Termes IGN] image à très haute résolution
[Termes IGN] méthode fondée sur le noyau
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
Abstract: (author) Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task in urban scene understanding. However, due to the huge quantity of detail contained in an image and the large variations of objects in scale and appearance, existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects, and thus fail to depict these objects consistently. To address these issues uniformly, we propose a standalone end-to-end edge-aware neural network (EaNet) for urban scene semantic segmentation. To preserve semantic consistency inside objects, the EaNet model incorporates a large kernel pyramid pooling (LKPP) module to capture rich multi-scale context with strong continuous feature relations. To effectively separate confusing objects with sharp contours, a Dice-based edge-aware loss function (EA loss) is devised to guide the EaNet to refine both pixel- and image-level edge information directly from the semantic segmentation prediction. In the proposed EaNet model, the LKPP module and the EA loss couple to enable comprehensive feature learning across an entire semantic object. Extensive experiments on three challenging datasets demonstrate that our method generalizes readily to multi-scale ground/aerial urban scene images, achieving 81.7% mIoU on the Cityscapes test set and a 90.8% mean F1-score on the ISPRS Vaihingen 2D test set. Code is available at: https://github.com/geovsion/EaNet.
Record number: A2020-703
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article (nature-HAL: ArtAvecCL-RevueIntern)
DOI: 10.1016/j.isprsjprs.2020.09.019
Online publication date: 14/10/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.019
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96228
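The EA loss described above computes a Dice term on edge information derived from the segmentation itself. A toy illustration of the idea, not the paper's implementation: extract boundaries from label grids with a crude neighbour comparison, then score them with a Dice loss. Function names here are hypothetical.

```python
def edge_map(mask):
    """Mark 1 where a pixel differs from its right or lower neighbour
    (a crude boundary detector on an integer label grid)."""
    h, w = len(mask), len(mask[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if j + 1 < w and mask[i][j] != mask[i][j + 1]:
                edges[i][j] = 1
            if i + 1 < h and mask[i][j] != mask[i + 1][j]:
                edges[i][j] = 1
    return edges

def dice_edge_loss(pred_mask, true_mask, eps=1e-7):
    """Dice loss computed on boundary maps instead of full masks."""
    pe = [v for row in edge_map(pred_mask) for v in row]
    te = [v for row in edge_map(true_mask) for v in row]
    inter = sum(a * b for a, b in zip(pe, te))
    total = sum(pe) + sum(te)
    return 1 - (2 * inter + eps) / (total + eps)
```

Because only boundary pixels enter the score, sloppy contours are penalised even when region-level overlap is already high, which is the motivation the abstract gives for the edge-aware term.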
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 15-28 [article]
Understanding the synergies of deep learning and data fusion of multispectral and panchromatic high resolution commercial satellite imagery for automated ice-wedge polygon detection / Chandi Witharana in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
[article]
Title: Understanding the synergies of deep learning and data fusion of multispectral and panchromatic high resolution commercial satellite imagery for automated ice-wedge polygon detection
Document type: Article/Communication
Authors: Chandi Witharana; Md Abul Ehsan Bhuiyan; Anna K. Liljedahl; et al.
Year of publication: 2020
Pagination: pp 174-191
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme de fusion
[Termes IGN] apprentissage profond
[Termes IGN] Arctique
[Termes IGN] artefact
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection automatique
[Termes IGN] fusion d'images
[Termes IGN] glace
[Termes IGN] image à haute résolution
[Termes IGN] pergélisol
[Termes IGN] texture d'image
Abstract: (author) The use of sheer volumes of very high spatial resolution (VHSR) commercial imagery for mapping the Arctic region is new and actively evolving. Commercial satellite sensors typically record image data in a low-resolution multispectral (MS) and a high-resolution panchromatic (PAN) mode. Spatial resolution is needed to accurately describe feature shapes and textural patterns, such as ice-wedge polygons (IWPs), surface features that are rapidly transforming due to degrading permafrost, while spectral resolution allows capturing of land-use and land-cover types. Data fusion, the process of combining PAN and MS images with complementary characteristics, often serves as an integral component of remote sensing mapping workflows. The fusion process generates spectral and spatial artifacts that may affect the classification accuracies of subsequent automated image analysis algorithms, such as deep learning (DL) convolutional neural networks (CNNs). We employed a detailed multidimensional assessment to understand the performance of an array of eight application-oriented data fusion algorithms when applied to VHSR image scenes for DLCNN-based mapping of ice-wedge polygons. Our findings revealed the scene dependency of data fusion algorithms and emphasized the need for careful selection of the proper algorithm. Results suggested that fusion algorithms that preserve the spatial character of the original PAN imagery favor DLCNN model performance. The choice of fusion approach should be considered of equal importance to the required training dataset for successful DLCNN applications on VHSR imagery, in order to enable accurate mapping of permafrost thaw across the Arctic region.
Record number: A2020-705
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article (nature-HAL: ArtAvecCL-RevueIntern)
DOI: 10.1016/j.isprsjprs.2020.10.010
Online publication date: 01/11/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.10.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96232
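For context on PAN/MS fusion of the kind this study compares, the Brovey transform is one classical pan-sharpening algorithm (the paper evaluates eight application-oriented algorithms; this sketch is illustrative and not necessarily among them). It rescales each multispectral band so that the band sum matches the co-registered panchromatic intensity, trading some spectral fidelity for PAN-level spatial detail.

```python
def brovey_fuse(ms_pixel, pan, eps=1e-7):
    """Brovey transform for one pixel.

    ms_pixel -- list of MS band values, already resampled to the PAN grid
    pan      -- panchromatic value at the same location
    Returns the fused band values; their sum equals `pan`, while the
    ratios between bands (the spectral shape) are preserved.
    """
    total = sum(ms_pixel) + eps  # eps guards against all-zero pixels
    return [band * pan / total for band in ms_pixel]
```

```python
# Spatial detail comes from PAN, band ratios from MS:
fused = brovey_fuse([10.0, 20.0, 30.0], 90.0)  # -> [15.0, 30.0, 45.0]
```

Ratio-based schemes like this inject PAN texture into every band, which is exactly the property the study finds favourable for DLCNN detection of fine ice-wedge polygon boundaries, at the cost of possible spectral artifacts.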
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 174-191 [article]
Unsupervised deep joint segmentation of multitemporal high-resolution images / Sudipan Saha in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
[article]
Title: Unsupervised deep joint segmentation of multitemporal high-resolution images
Document type: Article/Communication
Authors: Sudipan Saha; Lichao Mou; Chunping Qiu; et al.
Year of publication: 2020
Pagination: pp 8780-8792
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage profond
[Termes IGN] classification non dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de données
[Termes IGN] image à haute résolution
[Termes IGN] image à très haute résolution
[Termes IGN] image multitemporelle
[Termes IGN] itération
[Termes IGN] segmentation sémantique
Abstract: (author) High/very-high-resolution (HR/VHR) multitemporal images are important in remote sensing for monitoring the dynamics of the Earth's surface. Unsupervised object-based image analysis provides an effective solution for analyzing such images. Image semantic segmentation assigns pixel labels from meaningful object groups and has been extensively studied in the context of single-image analysis, but has not been explored for the multitemporal case. In this article, we propose to extend supervised semantic segmentation to the unsupervised joint semantic segmentation of multitemporal images. We propose a novel method that processes multitemporal images by feeding them separately to a deep network comprising trainable convolutional layers. The training process does not involve any external labels; segmentation labels are obtained from the argmax classification of the final layer. A novel loss function is used to detect object segments in individual images as well as to establish a correspondence between distinct multitemporal segments. Multitemporal semantic labels and the weights of the trainable layers are jointly optimized over iterations. We tested the method on three different HR/VHR data sets, from Munich, Paris, and Trento, and found it effective. We further extended the proposed joint segmentation method to change detection (CD) and tested it on a VHR multisensor data set from Trento.
Record number: A2020-744
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article (nature-HAL: ArtAvecCL-RevueIntern)
DOI: 10.1109/TGRS.2020.2990640
Online publication date: 11/05/2020
Online: https://doi.org/10.1109/TGRS.2020.2990640
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96375
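The label rule described in the abstract, an argmax over the final layer's channels, is what turns raw network activations into segmentation labels without any external supervision. A few-line sketch on a toy score map (a stand-in for the network output, not the authors' code):

```python
def argmax_labels(score_map):
    """Assign each pixel the channel index with the highest activation,
    i.e. the label rule used when no ground truth is available.

    score_map -- H x W x C nested lists of per-class scores
    Returns an H x W grid of integer labels.
    """
    return [[max(range(len(px)), key=px.__getitem__) for px in row]
            for row in score_map]
```

In the iterative scheme the abstract outlines, these self-generated labels then serve as targets for the next weight update, so labels and weights co-evolve until the segments stabilise.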
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8780-8792 [article]
Bayesian-deep-learning estimation of earthquake location from single-station observations / S. Mostafa Mousavi in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
[article]
Title: Bayesian-deep-learning estimation of earthquake location from single-station observations
Document type: Article/Communication
Authors: S. Mostafa Mousavi; Gregory C. Beroza
Year of publication: 2020
Pagination: pp 8211-8224
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement du signal
[Termes IGN] apprentissage profond
[Termes IGN] classification bayesienne
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection du signal
[Termes IGN] épicentre
[Termes IGN] estimation bayesienne
[Termes IGN] onde sismique
[Termes IGN] régression
[Termes IGN] séisme
[Termes IGN] station d'observation
[Termes IGN] surveillance géologique
[Termes IGN] temps de propagation
Abstract: (author) We present a deep-learning method for single-station earthquake location, which we approach as a regression problem using two separate Bayesian neural networks. We use a multitask temporal convolutional neural network to learn epicentral distance and P travel time from 1-min seismograms. The network estimates epicentral distance and P travel time with mean errors of 0.23 km and 0.03 s and standard deviations of 5.42 km and 0.66 s, respectively, along with their epistemic and aleatory uncertainties. We design a separate multi-input network using standard convolutional layers to estimate the back-azimuth angle and its epistemic uncertainty. This network estimates the direction from which seismic waves arrive at the station with a mean error of 1°. Using this information, we estimate the epicenter, origin time, and depth along with their confidence intervals. We use a global data set of earthquake signals recorded within 1° (~112 km) of the event to build the model and demonstrate its performance. Our model can predict epicenter, origin time, and depth with mean errors of 7.3 km, 0.4 s, and 6.7 km, respectively, at different locations around the world. Our approach can be used for fast earthquake source characterization with a limited number of observations, and also for estimating the location of earthquakes that are sparsely recorded, either because they are small or because stations are widely separated.
Record number: A2020-684
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article (nature-HAL: ArtAvecCL-RevueIntern)
DOI: 10.1109/TGRS.2020.2988770
Online publication date: 06/05/2020
Online: https://doi.org/10.1109/TGRS.2020.2988770
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96209
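Once a single station's epicentral distance and back-azimuth are predicted, the epicenter follows from a forward geodesic projection from the station. A spherical-Earth sketch of that last step (the function name and the simple sphere model are assumptions for illustration, not the paper's method):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, spherical approximation

def project_epicenter(sta_lat, sta_lon, distance_km, back_azimuth_deg):
    """Move `distance_km` from the station along the back-azimuth
    (direction from station toward the source, clockwise from north)
    on a spherical Earth. Returns (lat, lon) in degrees.
    """
    lat1 = math.radians(sta_lat)
    lon1 = math.radians(sta_lon)
    brg = math.radians(back_azimuth_deg)
    ang = distance_km / EARTH_RADIUS_KM  # angular distance in radians

    lat2 = math.asin(math.sin(lat1) * math.cos(ang)
                     + math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

For example, 111.19 km due north of (0°, 0°) lands near (1°, 0°), consistent with the abstract's note that 1° of arc is about 112 km. Uncertainty in the two network outputs would propagate into a confidence region around this point.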
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 11 (November 2020) . - pp 8211-8224 [article]
High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network / Fengpeng Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
River ice segmentation with deep learning / Abhineet Singh in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
3D hand mesh reconstruction from a monocular RGB image / Hao Peng in The Visual Computer, vol 36 n° 10 - 12 (October 2020)
Application of convolutional and recurrent neural networks for buried threat detection using ground penetrating radar data / Mahdi Moalla in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation / Huan Ning in Annals of GIS, vol 26 n° 4 (October 2020)
Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution / Vitor Martins in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
A graph convolutional network model for evaluating potential congestion spots based on local urban built environments / Kun Qin in Transactions in GIS, Vol 24 n° 5 (October 2020)
Crater detection and registration of planetary images through marked point processes, multiscale decomposition, and region-based analysis / David Solarna in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Multiscale supervised kernel dictionary learning for SAR target recognition / Lei Tao in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
A novel deep learning instance segmentation model for automated marine oil spill detection / Shamsudeen Temitope Yekeen in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
Recognition of building group patterns using graph convolutional network / Rong Zhao in Cartography and Geographic Information Science, Vol 47 n° 5 (September 2020)
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data / Danfeng Hong in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
CNN semantic segmentation to retrieve past land cover out of historical orthoimages and DSM: first experiments / Arnaud Le Bris in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
Landuse and land cover identification and disaggregating socio-economic data with convolutional neural network / Jingtao Yao in Geocarto international, vol 35 n° 10 ([01/08/2020])
Classification of hyperspectral and LiDAR data using coupled CNNs / Renlong Hang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
Classification of sea ice types in Sentinel-1 SAR data using convolutional neural networks / Hugo Boulze in Remote sensing, vol 12 n° 13 (July-1 2020)
Evaluating techniques for mapping island vegetation from unmanned aerial vehicle (UAV) images: Pixel classification, visual interpretation and machine learning approaches / S.M. Hamylton in International journal of applied Earth observation and geoinformation, vol 89 (July 2020)
Simulating urban land use change by integrating a convolutional neural network with vector-based cellular automata / Yaqian Zhai in International journal of geographical information science IJGIS, vol 34 n° 7 (July 2020)
Counting of grapevine berries in images via semantic segmentation using convolutional neural networks / Laura Zabawa in ISPRS Journal of photogrammetry and remote sensing, vol 164 (June 2020)
Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data / Shivangi Srivastava in International journal of geographical information science IJGIS, vol 34 n° 6 (June 2020)
GeoNat v1.0: A dataset for natural feature mapping with artificial intelligence and supervised learning / Samantha T. Arundel in Transactions in GIS, Vol 24 n° 3 (June 2020)
A hybrid deep learning–based model for automatic car extraction from high-resolution airborne imagery / Mehdi Khoshboresh Masouleh in Applied geomatics, vol 12 n° 2 (June 2020)
Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol 34 n° 5 (May 2020)
A convolutional neural network with mapping layers for hyperspectral image classification / Rui Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)
Deep learning for enrichment of vector spatial databases: Application to highway interchange / Guillaume Touya in ACM Transactions on spatial algorithms and systems, TOSAS, vol 6 n° 3 (May 2020)
Exploring the potential of deep learning segmentation for mountain roads generalisation / Azelle Courtial in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020)
Region level SAR image classification using deep features and spatial constraints / Anjun Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 163 (May 2020)
A review of techniques for 3D reconstruction of indoor environments / Zhizhong Kang in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020)
Saliency-guided single shot multibox detector for target detection in SAR images / Lan Du in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)
Automated terrain feature identification from remote sensing imagery: a deep learning approach / Wenwen Li in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification / Congcong Wen in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
Geocoding of trees from street addresses and street-level images / Daniel Laumer in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
Multichannel Pulse-Coupled Neural Network-Based Hyperspectral Image Visualization / Puhong Duan in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
A Single Model CNN for Hyperspectral Image Denoising / Alessandro Maffei in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
Street-Frontage-Net: urban image classification using deep convolutional neural networks / Stephen Law in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds / Zhou Guo in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
What, where, and how to transfer in SAR target recognition based on deep CNNs / Zhongling Huang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
Classification and segmentation of mining area objects in large-scale spares Lidar point cloud using a novel rotated density network / Yueguan Yan in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020)
Deep learning for geometric and semantic tasks in photogrammetry and remote sensing / Christian Helpke in Geo-spatial Information Science, vol 23 n° 1 (March 2020)
Deep SAR-Net: learning objects from signals / Zhongling Huang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
Edge-reinforced convolutional neural network for road detection in very-high-resolution remote sensing imagery / Xiaoyan Lu in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020)
Poststack seismic data denoising based on 3-D convolutional neural network / Dawei Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 3 (March 2020)