Descriptor
Termes IGN > mathematics > mathematical statistics > data analysis > classification > support vector machine classification
Synonym(s): SVM classification
Documents available in this category (143)
Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification / Tao Liu in ISPRS Journal of photogrammetry and remote sensing, vol 139 (May 2018)
[article]
Title: Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification
Document type: Article/Communication
Authors: Tao Liu, Author; Amr Abd-Elrahman, Author
Publication year: 2018
Pages: pp 154 - 170
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[Termes IGN] object-based image analysis
[Termes IGN] random forest classification
[Termes IGN] support vector machine classification
[Termes IGN] drone
[Termes IGN] orthoimage
[Termes IGN] convolutional neural network
[Termes IGN] wetland
Abstract: (Author) A deep convolutional neural network (DCNN) requires massive training datasets to trigger its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for the classification of Unmanned Aerial systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross-validation results show the mean overall classification accuracy increasing substantially from 65.32% when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of support vector machine (SVM) and random forest (RF) classifiers with DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the accuracy advantage of DCNN over traditional classifiers is more obvious under the proposed multi-view OBIA framework than under the traditional OBIA framework.
Record number: A2018-114
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.03.006
Online: https://doi.org/10.1016/j.isprsjprs.2018.03.006
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89550
in ISPRS Journal of photogrammetry and remote sensing > vol 139 (May 2018) . - pp 154 - 170 [article]
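The voting step at the heart of MODe, in which the per-view class labels assigned to the same segment are fused by majority vote, can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' code; the function name `multi_view_vote` is hypothetical:

```python
import numpy as np

def multi_view_vote(view_predictions):
    """Fuse per-view class predictions for each segment by majority vote.

    view_predictions: 2-D array of shape (n_views, n_segments), where
    entry [v, s] is the class label assigned to segment s in view v.
    Returns one fused label per segment.
    """
    views = np.asarray(view_predictions)
    n_segments = views.shape[1]
    fused = np.empty(n_segments, dtype=views.dtype)
    for s in range(n_segments):
        labels, counts = np.unique(views[:, s], return_counts=True)
        fused[s] = labels[np.argmax(counts)]  # most frequent label wins
    return fused

# Three views disagree on the middle segment; the majority (class 2) wins.
fused = multi_view_vote([[1, 2, 0],
                         [1, 2, 0],
                         [1, 0, 0]])
print(fused)  # -> [1 2 0]
```

The paper's accuracy gain comes from the enriched multi-view training set as much as from the vote itself; this sketch only covers the fusion stage.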
Image classification-based ground filtering of point clouds extracted from UAV-based aerial photos / Volkan Yilmaz in Geocarto international, vol 33 n° 3 (March 2018)
[article]
Title: Image classification-based ground filtering of point clouds extracted from UAV-based aerial photos
Document type: Article/Communication
Authors: Volkan Yilmaz, Author; Berkant Konakoglu, Author; Cigdem Serifoglu, Author; et al.
Publication year: 2018
Pages: pp 310 - 320
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[Termes IGN] filtering algorithm
[Termes IGN] supervised classification
[Termes IGN] support vector machine classification
[Termes IGN] 3D localized data
[Termes IGN] drone
[Termes IGN] aerial image
[Termes IGN] digital surface model
[Termes IGN] point cloud
[Termes IGN] Turkey
Abstract: (Author) With the advent of unmanned aerial vehicles (UAVs) for mapping applications, it is possible to generate dense 3D point clouds from stereo images. This technology, however, has some disadvantages compared to the Light Detection and Ranging (LiDAR) system. Unlike LiDAR, digital cameras mounted on UAVs are incapable of viewing beneath the canopy, which leads to sparse points on the bare earth surface. In such cases, it is more challenging to remove points belonging to above-ground objects using ground filtering algorithms designed especially for LiDAR data. To tackle this problem, a methodology employing supervised image classification for filtering 3D point clouds is proposed in this study. A classified image is overlaid with the point cloud to determine the ground points to be used for digital elevation model (DEM) generation. Quantitative evaluation results showed that filtering the point cloud with this methodology has good potential for high-resolution DEM generation.
Record number: A2018-035
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2016.1250825
Online: https://doi.org/10.1080/10106049.2016.1250825
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89213
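The overlay step the abstract describes, keeping only the points that fall on pixels classified as ground, reduces to mapping each point's coordinates to a raster cell and checking its class code. A simplified sketch, assuming a north-up class raster; the `GROUND` code and the function name are illustrative, not from the paper:

```python
import numpy as np

GROUND = 0  # hypothetical class code for bare ground in the classified image

def filter_ground_points(points, class_raster, origin, cell_size):
    """Keep only points that fall on 'ground' pixels of a classified image.

    points: sequence of (x, y, z) coordinates.
    class_raster: 2-D array of class codes (row 0 = northern edge).
    origin: (x_min, y_max) of the raster's upper-left corner.
    cell_size: pixel size in the same units as the coordinates.
    """
    pts = np.asarray(points, dtype=float)
    cols = ((pts[:, 0] - origin[0]) // cell_size).astype(int)
    rows = ((origin[1] - pts[:, 1]) // cell_size).astype(int)
    inside = (rows >= 0) & (rows < class_raster.shape[0]) & \
             (cols >= 0) & (cols < class_raster.shape[1])
    keep = inside.copy()
    keep[inside] = class_raster[rows[inside], cols[inside]] == GROUND
    return pts[keep]

# 2x2 classified image: 0 = ground, 1 = above-ground object.
raster = np.array([[0, 1],
                   [1, 0]])
pts = [(0.5, 1.5, 10.0), (1.5, 1.5, 12.0), (1.5, 0.5, 9.0)]
ground = filter_ground_points(pts, raster, origin=(0.0, 2.0), cell_size=1.0)
print(ground)  # the point over the '1' pixel is dropped
```

The retained points would then feed DEM interpolation, which is outside this sketch.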
in Geocarto international > vol 33 n° 3 (March 2018) . - pp 310 - 320 [article]
Mapping tree cover with Sentinel-2 data using the Support Vector Machine (SVM) / Anna Mirończuk in Geoinformation issues, Vol 9 n° 1 (2017)
[article]
Title: Mapping tree cover with Sentinel-2 data using the Support Vector Machine (SVM)
Document type: Article/Communication
Authors: Anna Mirończuk, Author; Agata Hościło, Author
Publication year: 2018
Pages: pp 27 - 38
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[Termes IGN] vegetation map
[Termes IGN] support vector machine classification
[Termes IGN] forest cover
[Termes IGN] multitemporal image
[Termes IGN] Sentinel-MSI image
[Termes IGN] nature park
[Termes IGN] Poland
Abstract: (author) The knowledge of forest resources is important for sustainable forest management at local and national level. The aim of this paper is to examine the efficacy of the Support Vector Machine (SVM) approach for tree cover mapping based on Sentinel-2 images and to explore the potential of Sentinel-2 data for the assessment of tree cover. Sentinel-2 is a constellation of two European satellites providing innovative wide-swath (up to 290 km), high-resolution and multispectral data (13 spectral bands at 10, 20 and 60 m spatial resolution). The study area is located in the Forest Promotion Complex, which is a part of the Knyszyn Forest Landscape Park in Poland. The SVM classification was performed on single images (spring and summer season) and on multi-date Sentinel-2 images (images from two dates classified simultaneously). In addition, the use of high-resolution bands and a combination of the 10 m and 20 m spatial resolution data was examined. The overall accuracy for all performed classifications was very high, reaching 96.7%–99.6%, which confirms that SVM classification can be successfully applied for tree cover mapping. The analysis showed that Sentinel-2 images acquired in the middle of the vegetation season, when the leaves are fully developed, are more suitable for tree cover mapping than images acquired in spring.
Record number: A2018-629
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: none
Online publication date: 01/03/2018
Online: http://www.igik.edu.pl/upload/File/wydawnictwa/GI9MiroczukA.pdf
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92885
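As an illustration of the general approach (not the authors' software or data), a two-class SVM separating tree from non-tree pixels on per-band reflectance values can be set up with scikit-learn. The synthetic reflectance values below are invented stand-ins for the four 10 m Sentinel-2 bands, chosen so that tree pixels show higher near-infrared response:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for per-pixel band values (4 bands at 10 m resolution):
# tree pixels get higher near-infrared (last band) than non-tree pixels.
tree = rng.normal([0.05, 0.08, 0.04, 0.45], 0.02, size=(200, 4))
non_tree = rng.normal([0.10, 0.12, 0.12, 0.20], 0.02, size=(200, 4))
X = np.vstack([tree, non_tree])
y = np.array([1] * 200 + [0] * 200)  # 1 = tree cover, 0 = other

# RBF-kernel SVM, a common default for spectral classification.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
accuracy = clf.score(X, y)
print(f"training accuracy: {accuracy:.2f}")
```

On real Sentinel-2 data the pixels would come from atmospherically corrected scenes with labeled training polygons, and accuracy would be assessed on held-out samples rather than the training set.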
in Geoinformation issues > Vol 9 n° 1 (2017) . - pp 27 - 38 [article]
Multisource remote sensing data classification based on convolutional neural network / Xiaodong Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
[article]
Title: Multisource remote sensing data classification based on convolutional neural network
Document type: Article/Communication
Authors: Xiaodong Xu, Author; Wei Li, Author; Qiong Ran, Author; Qian Du, Author; Lianru Gao, Author; Bing Zhang, Author
Publication year: 2018
Pages: pp 937 - 949
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetric applications
[Termes IGN] neural network classification
[Termes IGN] support vector machine classification
[Termes IGN] lidar data
[Termes IGN] 3D localized data
[Termes IGN] automatic extraction
[Termes IGN] feature extraction
[Termes IGN] hyperspectral image
[Termes IGN] convolutional neural network
Abstract: (Author) As a growing list of remotely sensed data sources becomes available, how to efficiently exploit useful information from multisource data for better Earth observation becomes an interesting but challenging problem. In this paper, the classification fusion of hyperspectral imagery (HSI) and data from other sensors, such as light detection and ranging (LiDAR) data, is investigated with a state-of-the-art deep learning model, the two-branch convolutional neural network (CNN). More specifically, a two-tunnel CNN framework is first developed to extract spectral-spatial features from HSI; in addition, a CNN with cascade blocks is designed for feature extraction from LiDAR or high-resolution visual images. In the feature fusion stage, the spatial and spectral features of HSI are first integrated in a dual-tunnel branch, and then combined with the features extracted from the cascade network. Experimental results on several multisource datasets demonstrate that the proposed two-branch CNN can achieve better classification performance than some existing methods.
Record number: A2018-191
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2756851
Online publication date: 16/10/2017
Online: https://doi.org/10.1109/TGRS.2017.2756851
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89856
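The two-stage fusion the abstract describes, merging the spectral and spatial HSI features first and then appending the cascade-branch features from the other sensor, amounts to staged feature concatenation before the final classifier. A minimal sketch of that stage only (the feature dimensions are illustrative, not taken from the paper, and the CNN branches that produce these features are omitted):

```python
import numpy as np

def fuse_features(spectral_feat, spatial_feat, other_sensor_feat):
    """Two-stage fusion: integrate the HSI dual-tunnel features first,
    then concatenate the features from the cascade branch (LiDAR or a
    high-resolution image). Each input is (n_samples, n_features)."""
    hsi = np.concatenate([spectral_feat, spatial_feat], axis=1)   # dual-tunnel merge
    return np.concatenate([hsi, other_sensor_feat], axis=1)       # cross-sensor merge

# Illustrative shapes: 5 samples, 64-D spectral, 64-D spatial, 32-D LiDAR features.
fused = fuse_features(np.zeros((5, 64)), np.zeros((5, 64)), np.zeros((5, 32)))
print(fused.shape)  # (5, 160)
```

In the paper the fused vector feeds fully connected layers trained end to end; concatenation is shown here only to make the data flow concrete.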
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 2 (February 2018) . - pp 937 - 949 [article]
Active learning-based optimized training library generation for object-oriented image classification / Rajeswari Balasubramaniam in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)
[article]
Title: Active learning-based optimized training library generation for object-oriented image classification
Document type: Article/Communication
Authors: Rajeswari Balasubramaniam, Author; Srivalsan Namboodiri, Author; Rama Rao Nidamanuri, Author; Rama Krishna Sai Subrahmanyam Gorthi, Author
Publication year: 2018
Pages: pp 575 - 585
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[Termes IGN] object-based image analysis
[Termes IGN] supervised learning
[Termes IGN] support vector machine classification
[Termes IGN] aerial image
[Termes IGN] multiband image
Abstract: (Author) In this paper, we introduce an active learning (AL)-based object training library generation method for a multiclassifier object-oriented image analysis (OOIA) system. While several AL approaches exist for pixel-based training library generation and for hyperspectral image classification, there is no standard training library generation strategy for OOIA of very high spatial resolution images. Given a sufficient number of training samples, supervised classification is the method of choice for image classification. However, this strategy becomes computationally expensive as the number of classes or the number of images to be classified increases. The proposed method addresses this issue by generating an optimized training library of objects (superpixels) based on a batch-mode AL approach. A softmax classifier is used as a detector, which helps determine the right samples to choose when updating the library. On this basis, we construct a multiclassifier system with max-voting decision to classify an image at pixel level. The algorithm was applied to three very high-resolution airborne datasets, each with varying complexity in terms of geographical context, sensors, illumination, and view angles. Our method empirically outperformed traditional OOIA by producing equivalent accuracy with a training library that is orders of magnitude smaller. In addition, the most distinctive ability of the algorithm appeared on the most heterogeneous dataset, where its accuracy was around twice that of the traditional method in the same situation. The generality of this classification strategy is demonstrated through its performance on multispectral images and in cross-domain application.
Finally, the robustness of the method is assessed by comparing its performance with an alternative AL approach, self-learning-based semisupervised SVM. The capability of the proposed method to handle highly heterogeneous data is identified as the primary reason for its robustness.
Record number: A2018-188
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2751568
Online publication date: 29/09/2017
Online: https://doi.org/10.1109/TGRS.2017.2751568
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89847
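The batch-mode selection step, picking the superpixels the softmax detector is least sure about so they can be labeled and added to the training library, can be sketched with the standard least-confidence AL criterion. The paper does not spell out its exact criterion in this abstract, so this is an illustrative sketch, not the authors' algorithm:

```python
import numpy as np

def select_batch(probabilities, batch_size):
    """Pick the `batch_size` least confident samples (lowest top-class
    softmax probability) to be labeled and added to the training library.

    probabilities: (n_samples, n_classes) softmax outputs of the detector.
    Returns the indices of the selected samples, least confident first.
    """
    confidence = np.max(probabilities, axis=1)  # detector's top-class score
    return np.argsort(confidence)[:batch_size]

# Four candidate superpixels, two classes; samples 3 and 1 are most uncertain.
probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.70, 0.30],
                  [0.51, 0.49]])
print(select_batch(probs, 2))  # -> [3 1]
```

Iterating this selection, labeling, and retraining loop is what keeps the final library orders of magnitude smaller than exhaustive sampling.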
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 1 (January 2018) . - pp 575 - 585 [article]
Adapting an existing semi-automatized image processing chain to enable Sentinel-2 data classification / Hiyam Elbadri (2018)
Decision fusion of SPOT6 and multitemporal Sentinel2 images for urban area detection / Cyril Wendl (2018)
Exploring image fusion of ALOS/PALSAR data and LANDSAT data to differentiate forest area / Saygin Abdikan in Geocarto international, vol 33 n° 1 (January 2018)
Exploring the impact of seasonality on urban land-cover mapping using multi-season sentinel-1A and GF-1 WFV images in a subtropical monsoon-climate region / Tao Zhou in ISPRS International journal of geo-information, vol 7 n° 1 (January 2018)
Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne / Jean Ogier du Terrail (2018)
Spatio-temporal grid mining applied to image classification and cellular automata analysis / Romain Deville (2018)
Learning aggregated features and optimizing model for semantic labeling / Jianhua Wang in The Visual Computer, vol 33 n° 12 (December 2017)
Multimorphological superpixel model for hyperspectral image classification / Tianzhu Liu in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
Remote sensing scene classification by unsupervised representation learning / Xiaoqiang Lu in IEEE Transactions on geoscience and remote sensing, vol 55 n° 9 (September 2017)