Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > analyse texturale > texture d'image
texture d'image
Documents available in this category (212)
Estimation of forest above-ground biomass by geographically weighted regression and machine learning with Sentinel imagery / Lin Chen in Forests, vol 9 n° 10 (October 2018)
[article]
Title: Estimation of forest above-ground biomass by geographically weighted regression and machine learning with Sentinel imagery
Document type: Article/Communication
Authors: Lin Chen; Chunying Ren; Bai Zhang; Zongming Wang; Yanbiao Xi
Publication year: 2018
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] arbre caducifolié
[Termes IGN] biomasse aérienne
[Termes IGN] Chine
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par réseau neuronal
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] image multibande
[Termes IGN] image Sentinel-MSI
[Termes IGN] image Sentinel-SAR
[Termes IGN] modèle de simulation
[Termes IGN] montagne
[Termes IGN] régression géographiquement pondérée
[Termes IGN] surveillance forestière
[Termes IGN] texture d'image
[Termes IGN] variable biophysique (végétation)
Abstract: (Author) Accurate forest above-ground biomass (AGB) is crucial for sustainable forest management and mitigating climate change to support REDD+ (reducing emissions from deforestation and forest degradation, plus the sustainable management of forests, and the conservation and enhancement of forest carbon stocks) processes. Recently launched Sentinel imagery offers a new opportunity for forest AGB mapping and monitoring. In this study, texture characteristics and backscatter coefficients of Sentinel-1, in addition to multispectral bands, vegetation indices, and biophysical variables of Sentinel-2, based on 56 measured AGB samples in the center of the Changbai Mountains, China, were used to develop biomass prediction models through geographically weighted regression (GWR) and machine learning (ML) algorithms, such as the artificial neural network (ANN), support vector machine for regression (SVR), and random forest (RF). The results showed that texture characteristics and vegetation biophysical variables were the most important predictors. SVR was the best method for predicting and mapping the patterns of AGB in the study site with limited samples, whose mean error, mean absolute error, root mean square error, and correlation coefficient were 4 × 10−3, 0.07, 0.08 Mg·ha−1, and 1, respectively. Predicted values of AGB from four models ranged from 11.80 to 324.12 Mg·ha−1, and those for broadleaved deciduous forests were the most accurate, while those for AGB above 160 Mg·ha−1 were the least accurate. The study demonstrated encouraging results in forest AGB mapping of the normal vegetated area using the freely accessible and high-resolution Sentinel imagery, based on ML techniques.
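As an illustration of the support vector regression approach this abstract describes, the sketch below fits an RBF-kernel SVR to plot-level feature vectors. The 56-sample size mirrors the study, but the features, target values, and hyperparameters are invented for the example and are not taken from the paper:

```python
# Minimal sketch: SVR on plot-level image features for AGB prediction.
# All data here are synthetic; a real pipeline would stack Sentinel-1
# texture measures and Sentinel-2 band/index values per field plot.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((56, 10))            # 56 field plots x 10 image-derived features
y = 11.8 + 300.0 * rng.random(56)   # synthetic AGB values, Mg/ha

# Scaling matters for RBF kernels; the pipeline standardizes features first.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In practice one would tune `C`, `epsilon`, and the kernel width by cross-validation, which the paper's small-sample setting makes especially important.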
Record number: A2018-478
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/f9100582
Online publication date: 20/09/2018
Online: https://doi.org/10.3390/f9100582
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91180
in Forests > vol 9 n° 10 (October 2018) [article]

Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier / Huanxue Zhang in Geocarto international, vol 33 n° 10 (October 2018)
[article]
Title: Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier
Document type: Article/Communication
Authors: Huanxue Zhang; Qiangzi Li; Jiangui Liu; Taifeng Dong; Heather McNairn
Publication year: 2018
Pages: pp 1017 - 1035
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] bande spectrale
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] corrélation par régions de niveaux de gris
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image SPOT 5
[Termes IGN] indice de végétation
[Termes IGN] limite de terrain
[Termes IGN] Ontario (Canada)
[Termes IGN] réflectance spectrale
[Termes IGN] segmentation d'image
[Termes IGN] surface cultivée
[Termes IGN] surveillance agricole
[Termes IGN] texture d'image
[Termes IGN] variogramme
Abstract: (author) In this study, an object-based image analysis (OBIA) approach was developed to classify field crops using multi-temporal SPOT-5 images with a random forest (RF) classifier. A wide range of features, including the spectral reflectance, vegetation indices (VIs), textural features based on the grey-level co-occurrence matrix (GLCM) and textural features based on geostatistical semivariogram (GST) were extracted for classification, and their performance was evaluated with the RF variable importance measures. Results showed that the best segmentation quality was achieved using the SPOT image acquired in September, with a scale parameter of 40. The spectral reflectance and the GST had a stronger contribution to crop classification than the VIs and GLCM textures. A subset of 60 features was selected using the RF-based feature selection (FS) method, and in this subset, the near-infrared reflectance and the image acquired in August (jointing and heading stages) were found to be the best for crop classification.
Record number: A2019-049
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2017.1333533
Online publication date: 23/06/2017
Online: https://doi.org/10.1080/10106049.2017.1333533
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92063
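The GLCM texture measures this abstract relies on can be sketched in a few lines. The example below computes two standard GLCM statistics (contrast and homogeneity) with plain NumPy rather than a remote-sensing library, and trains a random forest to separate smooth from noisy patches; the patches, quantization level, and classifier settings are all illustrative, not the paper's:

```python
# Illustrative sketch of GLCM texture features feeding a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch, levels=8):
    """Contrast and homogeneity of a horizontal-offset (0, 1)
    grey-level co-occurrence matrix."""
    lo, hi = patch.min(), patch.max()
    if hi == lo:
        q = np.zeros(patch.shape, dtype=int)
    else:
        q = ((patch - lo) / (hi - lo) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1            # count horizontal neighbour pairs
    glcm /= glcm.sum()             # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = float(((i - j) ** 2 * glcm).sum())
    homogeneity = float((glcm / (1.0 + np.abs(i - j))).sum())
    return [contrast, homogeneity]

rng = np.random.default_rng(1)
smooth = [np.tile(np.linspace(0, 1, 16), (16, 1)) for _ in range(20)]
noisy = [rng.random((16, 16)) for _ in range(20)]
X = np.array([glcm_features(p) for p in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

A real OBIA workflow would compute such features per segmented object and over several offsets and directions, as the paper does for its GLCM and semivariogram feature sets.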
in Geocarto international > vol 33 n° 10 (October 2018) . - pp 1017 - 1035 [article]

Copies (1):
Barcode 059-2018041 | Call number: RAB | Support: Journal | Location: Centre de documentation | Section: Storage L003 | Availability: Available

Robust detection and affine rectification of planar homogeneous texture for scene understanding / Shahzor Ahmad in International journal of computer vision, vol 126 n° 8 (August 2018)
[article]
Title: Robust detection and affine rectification of planar homogeneous texture for scene understanding
Document type: Article/Communication
Authors: Shahzor Ahmad; Loong-Fah Cheong
Publication year: 2018
Pages: pp 822 - 854
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] compréhension de l'image
[Termes IGN] méthode robuste
[Termes IGN] scène
[Termes IGN] texture d'image
[Termes IGN] transformation affine
Abstract: (Author) Man-made environments tend to be abundant with planar homogeneous texture, which manifests as regularly repeating scene elements along a plane. In this work, we propose to exploit such structure to facilitate high-level scene understanding. By robustly fitting a texture projection model to optimal dominant frequency estimates in image patches, we arrive at a projective-invariant method to localize such generic, semantically meaningful regions in multi-planar scenes. The recovered projective parameters also allow an affine-ambiguous rectification in real-world images marred with outliers, room clutter, and photometric severities. Comprehensive qualitative and quantitative evaluations are performed that show our method outperforms existing representative work for both rectification and detection. The potential of homogeneous texture for two scene understanding tasks is then explored. First, in environments where vanishing points cannot be reliably detected, or the Manhattan assumption is not satisfied, homogeneous texture detected by the proposed approach is shown to provide alternative cues to obtain a scene geometric layout. Second, low-level feature descriptors extracted upon affine rectification of detected texture are found to be not only class-discriminative but also complementary to features without rectification, improving recognition performance on the 67-category MIT benchmark of indoor scenes. One of our configurations involving deep ConvNet features outperforms most current state-of-the-art work on this dataset, achieving a classification accuracy of 76.90%. The approach is additionally validated on a set of 31 categories (mostly outdoor man-made environments exhibiting regular, repeating structure), being a subset of the large-scale Places2 scene dataset.
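One ingredient this abstract mentions, dominant-frequency estimation in an image patch, can be sketched with a 2-D FFT: after suppressing the DC component, the peak of the magnitude spectrum gives the patch's dominant spatial frequency. The projection-model fitting and rectification of the paper are well beyond this toy example:

```python
# Toy dominant-frequency estimator for a square image patch.
import numpy as np

def dominant_frequency(patch):
    """Return (fy, fx) in cycles/pixel at the spectrum's peak,
    ignoring the DC component."""
    spec = np.abs(np.fft.fft2(patch - patch.mean()))
    spec = np.fft.fftshift(spec)           # move zero frequency to centre
    cy, cx = np.array(spec.shape) // 2
    spec[cy, cx] = 0.0                     # suppress any residual DC
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    return (iy - cy) / spec.shape[0], (ix - cx) / spec.shape[1]

# A horizontal sinusoid with 8 cycles across a 64-pixel patch:
x = np.arange(64)
patch = np.sin(2 * np.pi * 8 * x / 64)[None, :].repeat(64, axis=0)
fy, fx = dominant_frequency(patch)         # |fx| should be 8/64 = 0.125
```

A real implementation would estimate such frequencies over many patches and robustly fit the texture projection model across them, which is where the projective parameters for rectification come from.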
Record number: A2018-415
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1078-2
Online publication date: 22/03/2018
Online: https://doi.org/10.1007/s11263-018-1078-2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90898
in International journal of computer vision > vol 126 n° 8 (August 2018) . - pp 822 - 854 [article]

Large scale textured mesh reconstruction from mobile mapping images and LIDAR scans / Mohamed Boussaha in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-2 (June 2018)
[article]
Title: Large scale textured mesh reconstruction from mobile mapping images and LIDAR scans
Document type: Article/Communication
Authors: Mohamed Boussaha; Bruno Vallet; Patrick Rives
Publication year: 2018
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ISPRS 2018, TC II Mid-term Symposium, Towards Photogrammetry 2020, 04/06/2018 - 07/06/2018, Riva del Garda, Italy, ISPRS OA Annals
Pages: pp 49 - 56
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] architecture pipeline (processeur)
[Termes IGN] chaîne de traitement
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] grande échelle
[Termes IGN] maillage
[Termes IGN] orthoimage
[Termes IGN] reconstruction d'objet
[Termes IGN] Rouen
[Termes IGN] semis de points
[Termes IGN] texture d'image
Abstract: (author) The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
Record number: A2018-329
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-IV-2-49-2018
Online publication date: 28/05/2018
Online: http://dx.doi.org/10.5194/isprs-annals-IV-2-49-2018
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90471
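The chunking strategy this abstract describes, slicing the acquisition into temporal chunks that can be processed independently, can be sketched as follows. The helper names and chunk length are hypothetical; in the real pipeline each chunk would undergo surface reconstruction and texturing rather than the placeholder step used here:

```python
# Sketch: slice a time-stamped point stream into temporal chunks and
# process each chunk independently in parallel.
from concurrent.futures import ThreadPoolExecutor

def split_into_chunks(timestamps, chunk_seconds):
    """Group indices of a sorted timestamp list into chunks spanning
    at most chunk_seconds each."""
    chunks, start = [], 0
    for i in range(1, len(timestamps) + 1):
        if i == len(timestamps) or timestamps[i] - timestamps[start] >= chunk_seconds:
            chunks.append(list(range(start, i)))
            start = i
    return chunks

def reconstruct_chunk(indices):
    # Placeholder for per-chunk surface reconstruction + texturing.
    return len(indices)

ts = [0.0, 0.4, 0.9, 1.1, 1.6, 2.3, 2.4]      # toy acquisition times (s)
chunks = split_into_chunks(ts, 1.0)           # -> [[0, 1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor() as pool:
    sizes = list(pool.map(reconstruct_chunk, chunks))
```

Because chunks share no state, the same structure scales to processes or machines, which is what makes the paper's sub-30-hour processing of a 17 km acquisition feasible.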
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol IV-2 (June 2018) . - pp 49 - 56 [article]

Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification / Rama Rao Nidamanuri in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
[article]
Title: Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification
Document type: Article/Communication
Authors: Rama Rao Nidamanuri; Fahad Shahbaz Khan; Joost van de Weijer; Matthieu Molinier; Jorma Laaksonen
Publication year: 2018
Pages: pp 74 - 85
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse texturale
[Termes IGN] apprentissage profond
[Termes IGN] classification
[Termes IGN] image RVB
[Termes IGN] motif binaire local
[Termes IGN] réseau neuronal convolutif
[Termes IGN] texture d'image
Abstract: (Author) Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
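The Local Binary Pattern coding underlying TEX-Nets can be illustrated with the basic 3x3 LBP operator: threshold each pixel's eight neighbours against the centre and pack the resulting bits into an 8-bit code. This sketch shows only the plain coding step, not the paper's mapped-coding variant or the CNN that consumes it:

```python
# Basic 3x3 Local Binary Pattern operator in plain NumPy.
import numpy as np

def lbp_3x3(img):
    """8-bit LBP code for each interior pixel: compare the 8 neighbours
    to the centre (clockwise from top-left) and pack the bits."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.full((4, 4), 7, dtype=np.uint8)   # flat patch: every bit fires
codes = lbp_3x3(img)                       # interior is 2x2, all codes 255
```

Histograms of such codes over an image give the classic orderless LBP descriptor; the paper instead maps the coded image into channels a CNN can ingest alongside RGB.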
Record number: A2018-121
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.01.023
Online publication date: 15/02/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.01.023
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89590
in ISPRS Journal of photogrammetry and remote sensing > vol 138 (April 2018) . - pp 74 - 85 [article]

Copies (3):
Barcode 081-2018041 | Call number: RAB | Support: Journal | Location: Centre de documentation | Section: Storage L003 | Availability: Available
Barcode 081-2018043 | Call number: DEP-EXM | Support: Journal | Location: LASTIG | Section: Unit deposit | Availability: Not for loan
Barcode 081-2018042 | Call number: DEP-EAF | Support: Journal | Location: Nancy | Section: Unit deposit | Availability: Not for loan

Other documents in this category:
Multiple cues-based active contours for target contour tracking under sophisticated background / Peng Lv in The Visual Computer, vol 33 n° 9 (September 2017)
Monitoring mangrove biomass change in Vietnam using SPOT images and an object-based approach combined with machine learning algorithms / Lien T.H. Pham in ISPRS Journal of photogrammetry and remote sensing, vol 128 (June 2017)
Cartographic continuum rendering based on color and texture interpolation to enhance photo-realism perception / Charlotte Hoarau in ISPRS Journal of photogrammetry and remote sensing, vol 127 (May 2017)
Assessment of textural differentiations in forest resources in Romania using fractal analysis / Ion Andronache in Forests, vol 8 n° 3 (March 2017)
New point matching algorithm using sparse representation of image patch feature for SAR image registration / Jianwei Fan in IEEE Transactions on geoscience and remote sensing, vol 55 n° 3 (March 2017)
Urban slum detection using texture and spatial metrics derived from satellite imagery / Divyani Kohli in Journal of spatial science, vol 61 n° 2 (December 2016)
Distributed texture-based land cover classification algorithm using hidden Markov model for multispectral data / S. Jenicka in Survey review, vol 48 n° 351 (October 2016)
Habitat change on Horn Island, Mississippi, 1940-2010, determined from textural features in panchromatic vertical aerial imagery / Guy W. Jeter Jr in Geocarto international, vol 31 n° 9 - 10 (October - November 2016)
A novel computer-aided tree species identification method based on burst wind segmentation of 3D bark textures / Alice Ahlem Othmani in Machine Vision and Applications, vol 27 n° 5 (July 2016)
Supervised classification of very high resolution optical images using wavelet-based textural features / Olivier Regniers in IEEE Transactions on geoscience and remote sensing, vol 54 n° 6 (June 2016)