Descriptor
IGN terms > natural sciences > physics > image processing > image sampling
image sampling
Documents available in this category (50)
Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification / Ming Cong in Geocarto international, vol 36 n° 18 ([01/10/2021])
[article]
Title: Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification
Document type: Article/Communication
Authors: Ming Cong, Author; Zhiye Wang, Author; Yiting Tao, Author; et al., Author
Publication year: 2021
Article pages: pp 2065 - 2084
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] cluster analysis
[IGN terms] color vision
[IGN terms] unsupervised classification
[IGN terms] convolutional neural network classification
[IGN terms] image understanding
[IGN terms] image sampling
[IGN terms] digital image filtering
[IGN terms] UAV-acquired imagery
[IGN terms] vision
[IGN terms] computer vision
Abstract: (author) Unmanned aerial vehicle remote sensing images need to be precisely and efficiently classified. However, complex ground scenes produced by ultra-high ground resolution, data uniqueness caused by multi-perspective observations, and need for manual labelling make it difficult for current popular deep learning networks to obtain reliable references from heterogeneous samples. To address these problems, this paper proposes an optic nerve microsaccade (ONMS) classification network, developed based on multiple dilated convolution. ONMS first applies a Laplacian of Gaussian filter to find typical features of ground objects and establishes class labels using adaptive clustering. Then, using an image pyramid, multi-scale image data are mapped to the class labels adaptively to generate homologous reliable samples. Finally, an end-to-end multi-scale neural network is applied for classification. Experimental results show that ONMS significantly reduces sample labelling costs while retaining high cognitive performance, classification accuracy, and noise resistance—indicating that it has significant application advantages.
Record number: A2021-707
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/10106049.2019.1687593
Online publication date: 07/11/2019
Online: https://doi.org/10.1080/10106049.2019.1687593
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98602
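The abstract describes a pipeline of Laplacian-of-Gaussian filtering, adaptive clustering into class labels, and multi-scale classification. The following minimal Python sketch illustrates only the first two steps (pseudo-label generation from a single-band image) under stated assumptions: the function name and parameters are illustrative, and KMeans with a fixed cluster count stands in for the paper's adaptive clustering; this is not the authors' published implementation.

```python
# Minimal sketch, assuming a single-band numpy image:
# Laplacian-of-Gaussian responses are clustered into per-pixel pseudo-labels.
import numpy as np
from scipy.ndimage import gaussian_laplace
from sklearn.cluster import KMeans

def pseudo_labels(image: np.ndarray, sigma: float = 2.0, n_classes: int = 5) -> np.ndarray:
    """Return a per-pixel pseudo-label map for a single-band image."""
    log_response = gaussian_laplace(image.astype(np.float64), sigma=sigma)
    # Each pixel is described by its intensity and its LoG response.
    features = np.stack([image.ravel(), log_response.ravel()], axis=1)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(image.shape)
```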
in Geocarto international > vol 36 n° 18 [01/10/2021] . - pp 2065 - 2084 [article]
Unsupervised multi-level feature extraction for improvement of hyperspectral classification / Qiaoqiao Sun in Remote sensing, vol 13 n° 8 (April-2 2021)
[article]
Title: Unsupervised multi-level feature extraction for improvement of hyperspectral classification
Document type: Article/Communication
Authors: Qiaoqiao Sun, Author; Xuefeng Liu, Author; Salah Bourennane, Author
Publication year: 2021
Article pages: n° 1602
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] coding
[IGN terms] convolution (signal)
[IGN terms] deconvolution
[IGN terms] image sampling
[IGN terms] feature extraction
[IGN terms] hyperspectral imagery
[IGN terms] multi-level observation
Abstract: (author) Deep learning models have strong abilities in learning features and they have been successfully applied in hyperspectral images (HSIs). However, the training of most deep learning models requires labeled samples and the collection of labeled samples are labor-consuming in HSI. In addition, single-level features from a single layer are usually considered, which may result in the loss of some important information. Using multiple networks to obtain multi-level features is a solution, but at the cost of longer training time and computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework that is based on a three dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked by fully 3D convolutional layers and 3D deconvolutional layers, which allows for the spectral-spatial information of targets to be mined simultaneously. Besides, the 3D-CAE can be trained in an unsupervised way without involving labeled samples. Moreover, the multi-level features are directly obtained from the encoded layers with different scales and resolutions, which is more efficient than using multiple networks to get them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method has great promise in unsupervised feature learning and can help us to further improve the hyperspectral classification when compared with single-level features.
Record number: A2021-380
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/rs13081602
Online publication date: 20/04/2021
Online: https://doi.org/10.3390/rs13081602
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97628
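As a concrete picture of the kind of model the abstract describes, here is a minimal PyTorch sketch of a 3D convolutional autoencoder built from stacked 3D convolution and 3D deconvolution layers, whose intermediate encoder outputs can be reused as multi-level features. The class name, layer counts, and channel sizes are illustrative assumptions, not the architecture published in the paper.

```python
# Minimal sketch, assuming inputs of shape (batch, 1, bands, height, width)
# with even spatial/spectral dimensions.
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: stacked 3D convolutions, each halving every dimension.
        self.enc1 = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: 3D deconvolutions restoring the original size.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU())
        self.dec2 = nn.ConvTranspose3d(8, 1, 3, stride=2, padding=1, output_padding=1)

    def forward(self, x):
        f1 = self.enc1(x)              # first-level features
        f2 = self.enc2(f1)             # second-level features
        recon = self.dec2(self.dec1(f2))
        return recon, (f1, f2)         # reconstruction for training, features for classification

# Unsupervised training minimises reconstruction error, e.g. nn.MSELoss()(recon, x),
# so no labeled samples are needed at this stage.
```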
in Remote sensing > vol 13 n° 8 (April-2 2021) . - n° 1602 [article]
Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021)
[article]
Title: Deep traffic light detection by overlaying synthetic context on arbitrary natural images
Document type: Article/Communication
Authors: Jean Pablo Vieira de Mello, Author; Lucas Tabelini, Author; Rodrigo F. Berriel, Author
Publication year: 2021
Article pages: pp 76 - 86
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] image sampling
[IGN terms] traffic light
[IGN terms] high-resolution imagery
[IGN terms] autonomous navigation
[IGN terms] road signage
[IGN terms] road traffic
Abstract: (author) Deep neural networks come as an effective solution to many problems associated with autonomous driving. By providing real image samples with traffic context to the network, the model learns to detect and classify elements of interest, such as pedestrians, traffic signs, and traffic lights. However, acquiring and annotating real data can be extremely costly in terms of time and effort. In this context, we propose a method to generate artificial traffic-related training data for deep traffic light detectors. This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds that are not related to the traffic domain. Thus, a large amount of training data can be generated without annotation efforts. Furthermore, it also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state. Experiments show that it is possible to achieve results comparable to those obtained with real training data from the problem domain, yielding an average mAP and an average F1-score which are each nearly 4 p.p. higher than the respective metrics obtained with a real-world reference model.
Record number: A2021-151
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.cag.2020.09.012
Online publication date: 09/10/2020
Online: https://doi.org/10.1016/j.cag.2020.09.012
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97027
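The abstract's core idea is generating annotated training data by blending synthetic traffic elements onto arbitrary backgrounds, so the annotation comes for free. The sketch below illustrates that general idea only, with simple alpha compositing in Pillow; the file paths, function name, and paste-based blending are illustrative assumptions rather than the authors' rendering pipeline.

```python
# Minimal sketch, assuming the synthetic traffic-light crop is an RGBA image
# smaller than the chosen background.
import random
from PIL import Image

def compose_sample(background_path: str, light_path: str):
    background = Image.open(background_path).convert("RGB")
    light = Image.open(light_path).convert("RGBA")
    # Place the synthetic object at a random position.
    x = random.randint(0, background.width - light.width)
    y = random.randint(0, background.height - light.height)
    background.paste(light, (x, y), light)              # alpha-blend the synthetic object
    bbox = (x, y, x + light.width, y + light.height)    # bounding-box annotation for free
    return background, bbox
```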
in Computers and graphics > vol 94 n° 1 (February 2021) . - pp 76 - 86 [article]
Heuristic sample learning for complex urban scenes: Application to urban functional-zone mapping with VHR images and POI data / Xiuyuan Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
[article]
Title: Heuristic sample learning for complex urban scenes: Application to urban functional-zone mapping with VHR images and POI data
Document type: Article/Communication
Authors: Xiuyuan Zhang, Author; Shihong Du, Author; Zhijia Zheng, Author
Publication year: 2020
Article pages: pp 1 - 12
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] object-based image analysis
[IGN terms] supervised learning
[IGN terms] semi-supervised learning
[IGN terms] urban mapping
[IGN terms] China
[IGN terms] image sampling
[IGN terms] very high resolution imagery
[IGN terms] heuristic method
[IGN terms] point of interest
[IGN terms] urban scene
Abstract: (author) Urban functional zones are basic units of urban planning and resource allocation, and contribute to a wide range of urban studies and investigations. Existing studies on functional-zone mapping with very-high-resolution (VHR) satellite images focused much on feature representations and classification techniques, but ignored zone sampling which however was fundamental to automatic zone classifications. Functional-zone sampling is much complicated and can hardly be resolved by classical sampling methods, as functional zones are complex urban scenes which consist of heterogeneous land covers and have highly abstract categories. To resolve the issue, this study presents a novel sampling paradigm, i.e., heuristic sample learning (HSL). It first proposes a sparse topic model to select representative functional zones, then uses deep forest to select confusing zones, and finally embraces Chinese restaurant process to label these selected zones. The presented method collects both representative and confusing zone samples and identifies their categories accurately, which makes the functional-zone classification process robust and the classification results accurate. Experiments conducted in Beijing indicate that HSL is effective and efficient for functional-zone sampling and classifications. Compared to traditional manual sampling, HSL reduces the time cost by 55% and improves the classification accuracy by 11.3% on average; furthermore, HSL can reduce the variation in sampling and classification results caused by different proficiency of operators. Accordingly, HSL significantly contributes to functional-zone mapping and plays an important role in urban studies.
Record number: A2020-061
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.01.005
Online publication date: 13/01/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.01.005
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94577
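One ingredient of the sampling strategy described above is selecting "confusing" zones worth labelling. The sketch below illustrates that idea only, flagging the unlabeled zones a classifier is least confident about; a scikit-learn random forest is an assumed stand-in for the paper's deep forest, and the sparse topic model and Chinese-restaurant-process labelling steps are not reproduced. All names and parameters are illustrative.

```python
# Minimal sketch, assuming zones are already described by fixed-length feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def confusing_indices(X_labeled, y_labeled, X_unlabeled, k=20):
    """Return indices of the k unlabeled zones the classifier is least confident about."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_labeled, y_labeled)
    confidence = clf.predict_proba(X_unlabeled).max(axis=1)  # top class probability per zone
    return np.argsort(confidence)[:k]                        # lowest confidence = most confusing
```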
in ISPRS Journal of photogrammetry and remote sensing > vol 161 (March 2020) . - pp 1 - 12 [article]
Copies (3)
Barcode        Call number   Medium    Location                   Section            Availability
081-2020031    RAB           Journal   Centre de documentation    In reserve L003    Available
081-2020033    DEP-RECP      Journal   LASTIG                     Unit deposit       Not for loan
081-2020032    DEP-RECF      Journal   Nancy                      Unit deposit       Not for loan
Saliency-guided deep neural networks for SAR image change detection / Jie Geng in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
[article]
Title: Saliency-guided deep neural networks for SAR image change detection
Document type: Article/Communication
Authors: Jie Geng, Author; Xiaorui Ma, Author; Xiaojun Zhou, Author; et al., Author
Publication year: 2019
Article pages: pp 7365 - 7377
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] object-based image analysis
[IGN terms] fuzzy classification
[IGN terms] unsupervised classification
[IGN terms] neural network classification
[IGN terms] change detection
[IGN terms] image sampling
[IGN terms] feature extraction
[IGN terms] speckle filter
[IGN terms] speckled radar image
[IGN terms] fuzzy logic
[IGN terms] land cover
[IGN terms] saliency
[IGN terms] microwave remote sensing
Free keywords: hierarchical fuzzy C-means clustering (HFCM)
Abstract: (author) Change detection is an important task to identify land-cover changes between the acquisitions at different times. For synthetic aperture radar (SAR) images, inherent speckle noise of the images can lead to false changed points, which affects the change detection performance. Besides, the supervised classifier in change detection framework requires numerous training samples, which are generally obtained by manual labeling. In this paper, a novel unsupervised method named saliency-guided deep neural networks (SGDNNs) is proposed for SAR image change detection. In the proposed method, to weaken the influence of speckle noise, a salient region that probably belongs to the changed object is extracted from the difference image. To obtain pseudotraining samples automatically, hierarchical fuzzy C-means (HFCM) clustering is developed to select samples with higher probabilities to be changed and unchanged. Moreover, to enhance the discrimination of sample features, DNNs based on the nonnegative- and Fisher-constrained autoencoder are applied for final detection. Experimental results on five real SAR data sets demonstrate the effectiveness of the proposed approach.
Record number: A2019-536
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2913095
Online publication date: 19/05/2019
Online: http://doi.org/10.1109/TGRS.2019.2913095
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94154
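To make the pseudo-sample idea in the abstract concrete, the sketch below builds a log-ratio difference image from two co-registered SAR acquisitions and keeps only the pixels closest to their cluster centre as high-confidence changed/unchanged pseudo-samples. KMeans with a distance margin is an assumed stand-in for the paper's hierarchical fuzzy C-means, and the saliency-guided region extraction is omitted; names and parameters are illustrative.

```python
# Minimal sketch, assuming two co-registered, non-negative SAR intensity images.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_samples(img_t1: np.ndarray, img_t2: np.ndarray, keep_ratio: float = 0.2):
    log_ratio = np.abs(np.log((img_t2 + 1.0) / (img_t1 + 1.0)))   # difference image
    values = log_ratio.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(values)
    dist = np.abs(values - km.cluster_centers_[km.labels_])       # distance to own cluster centre
    threshold = np.quantile(dist, keep_ratio)
    confident = (dist <= threshold).ravel()                       # keep only high-confidence pixels
    return confident.reshape(log_ratio.shape), km.labels_.reshape(log_ratio.shape)
```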
in IEEE Transactions on geoscience and remote sensing > Vol 57 n° 10 (October 2019) . - pp 7365 - 7377 [article]
PPD: Pyramid Patch Descriptor via convolutional neural network / Jie Wan in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 9 (September 2019)
Comprehensive evaluation of soil moisture retrieval models under different crop cover types using C-band synthetic aperture radar data / P. Kumar in Geocarto international, vol 34 n° 9 (15/06/2019)
A deep learning approach to DTM extraction from imagery using rule-based training labels / Caroline M. Gevaert in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
Digital aerial photogrammetry for assessing cumulative spruce budworm defoliation and enhancing forest inventories at a landscape-level / Tristan R.H. Goodbody in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
Combined calibration method based on rational function model for the Chinese GF-1 wide-field-of-view imagery / Taoyang Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 82 n° 4 (April 2016)
Sampling piecewise convex unmixing and endmember extraction / Alina Zare in IEEE Transactions on geoscience and remote sensing, vol 51 n° 3 Tome 2 (March 2013)
A framework for supervised image classification with incomplete training samples / Q. Guo in Photogrammetric Engineering & Remote Sensing, PERS, vol 78 n° 6 (June 2012)
Persistent scatterer interferometry: potential, limits and initial C- and X-band comparison / M. Crosetto in Photogrammetric Engineering & Remote Sensing, PERS, vol 76 n° 9 (September 2010)
Quantifying the building stock optical high-resolution satellite imagery for assessing disaster risk / D. Ehrlich in Geocarto international, vol 25 n° 4 (July 2010)
Sampling approaches for one-pass land-use/land-cover change mapping / Zhi Huang in International Journal of Remote Sensing IJRS, vol 31 n° 6 (March 2010)