Descriptor
IGN terms > mathematics > mathematical statistics > data analysis > classification > neural network classification
Documents available in this category (563)
Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery / Ruchan Dong in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
[article]
Title: Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery
Document type: Article/Paper
Authors: Ruchan Dong; Dazhuan Xu; Jin Zhao; Licheng Jiao; et al.
Publication year: 2019
Pages: pp 8534 - 8545
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] neural network classification
[IGN terms] object detection
[IGN terms] target detection
[IGN terms] very-high-resolution image
[IGN terms] regression
[IGN terms] region of interest
Abstract: (author) Small target detection is a challenging task in very-high-resolution (VHR) optical remote sensing imagery, because small targets occupy a minuscule number of pixels and are easily disturbed by backgrounds or occluded by other objects. Although current convolutional neural network (CNN)-based approaches perform well when detecting normal-sized objects, they are barely suitable for detecting small ones. Two practical problems stand in their way. First, current CNN-based approaches are not specifically designed for the minuscule size of small targets (~15 or ~10 pixels in extent). Second, no well-established data sets include labeled small targets, and establishing one from scratch is labor-intensive and time-consuming. To address these two issues, we propose an approach that combines a Sig-NMS-based Faster R-CNN with transfer learning. Sig-NMS replaces traditional non-maximum suppression (NMS) in the region proposal network stage and decreases the chance of missing small targets. Transfer learning can effectively label remote sensing images by automatically annotating both object classes and object locations. We conduct experiments on three data sets of VHR optical remote sensing images (RSOD, LEVIR, and NWPU VHR-10) to validate our approach. The results demonstrate that the proposed approach can effectively detect small targets of about 10 × 10 pixels in VHR optical remote sensing images and automatically label them as well. In addition, our method achieves better mean average precision than other state-of-the-art methods: 1.5% higher on the RSOD data set, 17.8% higher on the LEVIR data set, and 3.8% higher on NWPU VHR-10.
Record number: A2019-595
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2921396
Online publication date: 15/07/2019
Online: https://doi.org/10.1109/TGRS.2019.2921396
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94587
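The abstract above describes Sig-NMS only at a high level: it replaces hard non-maximum suppression in the region proposal stage so that heavily overlapped small-target proposals are down-weighted rather than discarded outright. As a rough illustration, not the authors' exact formulation, a sigmoid-of-IoU score decay can be sketched in NumPy; the decay constants `k` and `t` are hypothetical:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def sig_nms(boxes, scores, score_thresh=0.05, k=10.0, t=0.5):
    """Soft suppression: decay overlapping scores with a sigmoid of the IoU
    instead of discarding boxes, so crowded small targets can survive."""
    scores = scores.astype(float).copy()
    keep = []
    idx = np.arange(len(scores))
    while idx.size > 0:
        i = int(idx[np.argmax(scores[idx])])  # highest-scoring remaining box
        keep.append(i)
        idx = idx[idx != i]
        if idx.size == 0:
            break
        overlap = iou(boxes[i], boxes[idx])
        # Sigmoid decay: near-total suppression above IoU t, mild below it.
        scores[idx] *= 1.0 - 1.0 / (1.0 + np.exp(-k * (overlap - t)))
        idx = idx[scores[idx] > score_thresh]  # drop collapsed scores
    return keep, scores
```

Under hard NMS an overlapping proposal is dropped outright; here its score merely decays, so a small target sitting close to a stronger detection can still survive if its decayed score stays above the threshold.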
Accurate detection of built-up areas from high-resolution remote sensing imagery using a fully convolutional network / Yihua Tan in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 10 (October 2019)
[article]
Title: Accurate detection of built-up areas from high-resolution remote sensing imagery using a fully convolutional network
Document type: Article/Paper
Authors: Yihua Tan; Shengzhou Xiong; Zhi Li; et al.
Publication year: 2019
Pages: pp 737 - 752
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] built-up area detection
[IGN terms] automatic extraction
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] WorldView image
[IGN terms] semantic segmentation
Abstract: (author) The analysis of built-up areas has always been a popular research topic for remote sensing applications. However, automatic extraction of built-up areas from a wide range of regions remains challenging. In this article, a fully convolutional network (FCN)-based strategy is proposed to address built-up area extraction. The proposed algorithm can be divided into two main steps. First, divide the remote sensing image into blocks and extract their deep features with a lightweight multi-branch convolutional neural network (LMB-CNN). Second, rearrange the deep features into feature maps that are fed into a well-designed FCN for image segmentation. Our FCN is integrated with multi-branch blocks and outputs multi-channel segmentation masks that are used to balance false alarms and missed alarms. Experiments demonstrate that the overall classification accuracy of the proposed algorithm reaches 98.75% on the test data set and that it processes faster than existing state-of-the-art algorithms.
Record number: A2019-522
Author affiliation: non-IGN
Theme: IMAGERY/URBAN PLANNING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.10.737
Online publication date: 01/10/2019
Online: https://doi.org/10.14358/PERS.85.10.737
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93992
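The two-step structure described in the abstract (block-wise deep features, then rearrangement into a feature map for the FCN) can be illustrated with a toy stand-in. Here the per-block "deep feature" is simply the block's mean intensity rather than an LMB-CNN output; `block_features` is a hypothetical helper, not code from the paper:

```python
import numpy as np

def block_features(image, block=32):
    # Step 1 of the pipeline: divide the image into blocks and compute one
    # feature per block (here the block mean, a placeholder for the
    # LMB-CNN's feature vector).
    h, w = image.shape
    gh, gw = h // block, w // block
    blocks = image[:gh * block, :gw * block].reshape(gh, block, gw, block)
    # Step 2: the (gh, gw) map of features is what gets rearranged and
    # fed to the FCN for segmentation in the real method.
    return blocks.mean(axis=(1, 3))
```

The design point is that segmentation operates on a coarse grid of learned block descriptors rather than on raw pixels, which keeps the FCN input small for wide-area imagery.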
Copies (1)
Barcode: 105-2019101 | Shelfmark: SL | Type: journal | Location: documentation centre | Section: journals reading room | Availability: available

A CNN-based subpixel level DSM generation approach via single image super-resolution / Yongjun Zhang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 10 (October 2019)
[article]
Title: A CNN-based subpixel level DSM generation approach via single image super-resolution
Document type: Article/Paper
Authors: Yongjun Zhang; Zhi Zheng; Yimin Luo; et al.
Publication year: 2019
Pages: pp 765 - 775
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] data analysis
[IGN terms] image matching
[IGN terms] convolutional neural network classification
[IGN terms] multisource data fusion
[IGN terms] radiometric resolution limit
[IGN terms] digital surface model
[IGN terms] subpixel accuracy
[IGN terms] image reconstruction
Abstract: (author) Previous work on subpixel-level Digital Surface Model (DSM) generation has mainly focused on data fusion techniques, which are severely limited by the difficulty of acquiring multisource data. Although several DSM super-resolution (SR) methods have been developed to ease the problem, they raise a new issue: plenty of DSM samples are needed to train the model. Therefore, considering that the original images strongly influence the accuracy of the resulting DSM, we address the problem by directly improving image resolution. Several SR models are refined and brought into the traditional DSM generation process as an image-quality-improvement stage, building a simple but effective workflow for subpixel-level DSM generation. Experiments verified the validity and significance of bringing SR technology into this kind of application. Statistical analysis also confirmed that a subpixel-level DSM with higher fidelity can be obtained more easily than by direct DSM interpolation.
Record number: A2019-524
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.10.765
Online publication date: 01/10/2019
Online: https://doi.org/10.14358/PERS.85.10.765
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93997
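The workflow in the abstract inserts a super-resolution stage before conventional DSM generation, so that dense matching runs on a finer grid. A minimal sketch, with nearest-neighbour upsampling standing in for the paper's refined CNN SR models (an assumption purely for illustration):

```python
import numpy as np

def super_resolve(img, scale=2):
    # Placeholder for a CNN super-resolution model: nearest-neighbour
    # upsampling via a Kronecker product, for illustration only.
    return np.kron(img, np.ones((scale, scale)))

# The upscaled stereo pair would then enter the usual dense-matching and
# triangulation steps, yielding a DSM on a grid `scale` times finer.
left_sr = super_resolve(np.array([[1.0, 2.0], [3.0, 4.0]]))
```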
Copies (1)
Barcode: 105-2019101 | Shelfmark: SL | Type: journal | Location: documentation centre | Section: journals reading room | Availability: available

Mapping dead forest cover using a deep convolutional neural network and digital aerial photography / Jean-Daniel Sylvain in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)
[article]
Title: Mapping dead forest cover using a deep convolutional neural network and digital aerial photography
Document type: Article/Paper
Authors: Jean-Daniel Sylvain; Guillaume Drolet; Nicolas Brown
Publication year: 2019
Pages: pp 14 - 26
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] dead tree
[IGN terms] forest database
[IGN terms] convolutional neural network classification
[IGN terms] forest cover
[IGN terms] broadleaf
[IGN terms] boreal forest
[IGN terms] aerial image
[IGN terms] orthoimage
[IGN terms] mixed stand
[IGN terms] Pinophyta
[IGN terms] Quebec (Canada)
[IGN terms] forest health
Abstract: (author) Tree mortality is an important forest ecosystem variable with uses in many applications, such as forest health assessment, modelling stand dynamics and productivity, or planning wood harvesting operations. Because tree mortality is a spatially and temporally erratic process, rates and spatial patterns of tree mortality are difficult to estimate with traditional inventory methods. Remote sensing imagery has the potential to detect tree mortality at the spatial scales required to accurately characterize this process (e.g., landscape, region). Many efforts have been made in this direction, mostly using pixel- or object-based methods. In this study, we explored the potential of deep Convolutional Neural Networks (CNNs) to detect and map tree health status and functional type over entire regions. To do this, we built a database of around 290,000 photo-interpreted trees that served to extract and label image windows from 20 cm-resolution digital aerial images, for use in CNN training and evaluation. In this process, we also evaluated the effect of window size and spectral channel selection on classification accuracy, and we assessed whether multiple realizations of a CNN, generated using different weight initializations, can be aggregated to provide more robust predictions. Finally, we extended our model with 5 additional classes to account for the diversity of land covers found in our study area. When predicting tree health status only (live or dead), we obtained test accuracies of up to 94%, and up to 86% when predicting functional type only (broadleaf or needleleaf). Channel selection had a limited impact on overall classification accuracy, while larger window sizes increased the ability of the CNNs to predict plant functional type. Aggregating multiple realizations of a CNN allowed us to avoid selecting suboptimal models and helped remove much of the speckle effect when predicting on new aerial images.
Test accuracies for plant functional type and health status were not affected in the extended model and were all above 95% for the 5 extra classes. Our results demonstrate the robustness of the CNN to between-scene variations in aerial photography and also suggest that this approach can be applied at an operational level to map tree mortality across extensive territories.
Record number: A2019-316
Author affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.07.010
Online publication date: 02/08/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.07.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93353
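The aggregation of multiple CNN realizations mentioned in the abstract amounts to ensembling models trained from different weight initializations. A minimal sketch, assuming each realization outputs a per-class probability map, averages the maps before taking the argmax (the function name is hypothetical):

```python
import numpy as np

def aggregate_realizations(prob_maps):
    # prob_maps: list of (H, W, C) per-class probability maps, one per CNN
    # realization (same architecture, different weight initialization).
    # Averaging before the argmax smooths out per-model speckle and avoids
    # committing to a single, possibly suboptimal, realization.
    mean_prob = np.mean(np.stack(prob_maps), axis=0)
    return mean_prob.argmax(axis=-1)  # (H, W) map of class indices
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident model outvote several uncertain ones, which is one plausible reading of why the ensemble suppresses speckle in the predicted maps.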
Copies (3)
Barcode: 081-2019101 | Shelfmark: RAB | Type: journal | Location: documentation centre | Section: reserve L003 | Availability: available
Barcode: 081-2019103 | Shelfmark: DEP-RECP | Type: journal | Location: LASTIG | Section: unit deposit | Availability: not for loan
Barcode: 081-2019102 | Shelfmark: DEP-RECF | Type: journal | Location: Nancy | Section: unit deposit | Availability: not for loan

Multi-sensor prediction of Eucalyptus stand volume: A support vector approach / Guilherme Silverio Aquino de Souza in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)
[article]
Title: Multi-sensor prediction of Eucalyptus stand volume: A support vector approach
Document type: Article/Paper
Authors: Guilherme Silverio Aquino de Souza; Vicente Paulo Soares; Helio Garcia Leite; et al.
Publication year: 2019
Pages: pp 135 - 146
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] comparative analysis
[IGN terms] machine learning
[IGN terms] L band
[IGN terms] Brazil
[IGN terms] random forest classification
[IGN terms] neural network classification
[IGN terms] Eucalyptus (genus)
[IGN terms] ALOS AVNIR-2 image
[IGN terms] ALOS PALSAR image
[IGN terms] moiré radar image
[IGN terms] forest inventory (techniques and methods)
[IGN terms] multiple regression
[IGN terms] sampling rate
[IGN terms] timber volume
Abstract: (author) Stem volume is a key attribute of Eucalyptus forest plantations upon which decision-making is based at diverse levels of planning. Quantifying volume through remote sensing can support proper forest management. Because of the limitations of spaceborne optical and synthetic aperture radar sensors, this study integrated both types of datasets using support vector regression (SVR) to retrieve the stand volume of Eucalyptus plantations. We assessed different combinations of sensors and the minimum number of plots needed to develop an SVR model. Finally, the best SVR performance was compared with other analytical methods already tested in the literature: multilinear regression, artificial neural networks (ANN), and random forest (RF). Here, we introduce a test for comparative analysis of the performance of different methods. We found that SVR accurately predicted the stem volume of Brazilian fast-growing Eucalyptus forest plantations. A Gaussian radial basis function was the most suitable kernel. Integrating the optical and L-band backscatter data increased predictive accuracy compared to a single-sensor model. Combining NIR-band data from ALOS AVNIR-2 with L-band horizontal-transmit, vertical-receive (HV) backscatter from ALOS PALSAR produced the most accurate SVR model (with an R² of 0.926 and a root mean square error of 11.007 m³/ha). The number of field plots sufficient for model development with non-redundant explanatory variables was 77. Under this condition, SVR performed similarly to ANN and outperformed the multiple linear regression and random forest methods.
Record number: A2019-319
Author affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2019.08.002
Online publication date: 20/08/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.08.002
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93357
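The study's best model is a support vector regression with a Gaussian RBF kernel fed by fused AVNIR-2 NIR and PALSAR HV features from 77 plots. A sketch with entirely synthetic data (the feature ranges and the linear target relationship are invented for illustration), using scikit-learn's `SVR`:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 77  # the plot count the study found sufficient

# Synthetic stand-ins for the two predictors of the best model.
nir = rng.uniform(0.1, 0.5, n)    # AVNIR-2 NIR reflectance (invented range)
hv = rng.uniform(-20.0, -5.0, n)  # PALSAR L-band HV backscatter, dB (invented)
volume = 400 * nir - 5 * hv + rng.normal(0, 5, n)  # synthetic stand volume, m3/ha

# Standardize features so the RBF kernel treats both on a common scale.
X = np.column_stack([nir, hv])
X = (X - X.mean(axis=0)) / X.std(axis=0)

model = SVR(kernel="rbf", C=100.0, gamma="scale")  # Gaussian radial basis kernel
model.fit(X, volume)
pred = model.predict(X)
```

Standardizing before fitting matters here because reflectance and backscatter live on very different numeric scales, and an RBF kernel on raw features would be dominated by the wider-ranged one.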
Copies (3)
Barcode: 081-2019101 | Shelfmark: RAB | Type: journal | Location: documentation centre | Section: reserve L003 | Availability: available
Barcode: 081-2019103 | Shelfmark: DEP-RECP | Type: journal | Location: LASTIG | Section: unit deposit | Availability: not for loan
Barcode: 081-2019102 | Shelfmark: DEP-RECF | Type: journal | Location: Nancy | Section: unit deposit | Availability: not for loan

Other documents in this category:
- Saliency-guided deep neural networks for SAR image change detection / Jie Geng in IEEE Transactions on geoscience and remote sensing, vol 57 n° 10 (October 2019)
- Simulation of urban expansion via integrating artificial neural network with Markov chain – cellular automata / Tingting Xu in International journal of geographical information science IJGIS, vol 33 n° 10 (October 2019)
- Learning and adapting robust features for satellite image segmentation on heterogeneous data sets / Sina Ghassemi in IEEE Transactions on geoscience and remote sensing, vol 57 n° 9 (September 2019)
- PPD: Pyramid Patch Descriptor via convolutional neural network / Jie Wan in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 9 (September 2019)
- Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours / David Griffiths in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
- Automatisation du traitement de données "mobile mapping" : extraction d'éléments linéaires et ponctuels [Automating the processing of mobile-mapping data: extraction of linear and point features] / Loïc Elsholz in XYZ, n° 159 (June 2019)
- Exploitation of deep learning in the automatic detection of cracks on paved roads / Won Mo Jung in Geomatica, vol 73 n° 2 (June 2019)
- Semantic façade segmentation from airborne oblique images / Yaping Lin in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
- Conditional random field and deep feature learning for hyperspectral image classification / Fahim Irfan Alam in IEEE Transactions on geoscience and remote sensing, vol 57 n° 3 (March 2019)
- Hyperspectral image classification with squeeze multibias network / Leyuan Fang in IEEE Transactions on geoscience and remote sensing, vol 57 n° 3 (March 2019)
- Complete 3D scene parsing from an RGBD image / Chuhang Zou in International journal of computer vision, vol 127 n° 2 (February 2019)
- Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery / Lichao Mou in IEEE Transactions on geoscience and remote sensing, vol 57 n° 2 (February 2019)
- Advanced Remote Sensing Technology for Synthetic Aperture Radar Applications, Tsunami Disasters, and Infrastructure / Maged Marghany (2019)
- Analyse d'images par méthode de Deep Learning appliquée au contexte routier en conditions météorologiques dégradées [Image analysis by deep learning methods applied to the road context in degraded weather conditions] / Khouloud Dahmane (2019)
- Classification du type et de la concentration de la banquise, à partir d'images Sentinel-1 SAR, grâce à des réseaux de neurones convolutifs [Classification of sea-ice type and concentration from Sentinel-1 SAR images using convolutional neural networks] / Hugo Boulze (2019)
- Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
- Détection et localisation d'objets 3D par apprentissage profond en topologie capteur [Detection and localization of 3D objects by deep learning in sensor topology] / Pierre Biasutti (2019)
- Enhancing the predictability of least-squares collocation through the integration with least-squares-support vector machine / Hossam Talaat Elshambaky in Journal of applied geodesy, vol 13 n° 1 (January 2019)
- Estimation de profondeur à partir d'images monoculaires par apprentissage profond [Depth estimation from monocular images by deep learning] / Michel Moukari (2019)