Descriptor
Documents available in this category (1844)
A deep 2D/3D Feature-Level fusion for classification of UAV multispectral imagery in urban areas / Hossein Pourazar in Geocarto international, vol 37 n° 23 ([15/10/2022])
[article]
Title : A deep 2D/3D Feature-Level fusion for classification of UAV multispectral imagery in urban areas
Document type : Article/Communication
Authors : Hossein Pourazar ; Farhad Samadzadegan ; Farzaneh Dadrass Javan ; et al.
Publication year : 2022
Pages : pp 6695 - 6712
General note : bibliography
Language : English (eng)
Descriptors : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] alignement des données
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image captée par drone
[Termes IGN] image multibande
[Termes IGN] image proche infrarouge
[Termes IGN] image RVB
[Termes IGN] modèle numérique de surface
[Termes IGN] orthophotoplan numérique
[Termes IGN] zone urbaine
Abstract : (author) In this paper, a deep convolutional neural network (CNN) is developed to classify Unmanned Aerial Vehicle (UAV) derived multispectral imagery and normalized digital surface model (DSM) data in urban areas. For this purpose, a multi-input deep CNN (MIDCNN) architecture is designed using 11 parallel CNNs: 10 deep CNNs to extract features from all possible triple combinations of spectral bands, plus one deep CNN dedicated to the normalized DSM data. The proposed method is compared with traditional single-input (SI) and double-input (DI) deep CNN designs and a random forest (RF) classifier, and is evaluated on two independent test datasets. The results indicate that adding CNN branches in parallel improved the classifier's generalization and reduced the risk of overfitting. The overall accuracy and kappa value of the proposed method are 95% and 0.93, respectively, for the first test dataset, and 96% and 0.94, respectively, for the second test dataset.
Record number : A2022-749
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1080/10106049.2021.1959655
Online publication date : 04/08/2021
Online : https://doi.org/10.1080/10106049.2021.1959655
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101741
in Geocarto international > vol 37 n° 23 [15/10/2022] . - pp 6695 - 6712
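To make the architecture concrete, here is a minimal Keras sketch of a multi-input network in the spirit of the MIDCNN described above: ten parallel convolutional branches for the ten triple band combinations (consistent with five spectral bands, since C(5,3) = 10) plus an eleventh branch for the normalized DSM, fused by concatenation. The patch size, filter counts and number of classes are placeholder assumptions, not values from the paper.

```python
# Hypothetical MIDCNN-style sketch: 10 spectral-triple branches + 1 nDSM branch,
# fused at feature level. All sizes below are illustrative assumptions.
from itertools import combinations

from tensorflow.keras import Model, layers

PATCH, N_BANDS, N_CLASSES = 32, 5, 6  # assumed patch size, band count, classes

def branch(n_channels, name):
    """One small CNN branch producing a feature vector for its input stack."""
    inp = layers.Input(shape=(PATCH, PATCH, n_channels), name=name)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

inputs, features = [], []
for bands in combinations(range(N_BANDS), 3):  # the 10 triple band combinations
    inp, feat = branch(3, "bands_" + "".join(map(str, bands)))
    inputs.append(inp)
    features.append(feat)

inp, feat = branch(1, "ndsm")  # 11th branch: normalized digital surface model
inputs.append(inp)
features.append(feat)

x = layers.Concatenate()(features)  # feature-level fusion of all 11 branches
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(N_CLASSES, activation="softmax")(x)

model = Model(inputs, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training such a model on labelled patches lets each parallel branch specialize in its band combination, which is the behaviour the abstract credits with better generalization and less overfitting.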
Land use/land cover mapping from airborne hyperspectral images with machine learning algorithms and contextual information / Ozlem Akar in Geocarto international, vol 37 n° 22 ([10/10/2022])
[article]
Title : Land use/land cover mapping from airborne hyperspectral images with machine learning algorithms and contextual information
Document type : Article/Communication
Authors : Ozlem Akar ; Esra Tunc Gormus
Publication year : 2022
Pages : pp 6643 - 6670
General note : bibliography
Language : English (eng)
Descriptors : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] carte d'occupation du sol
[Termes IGN] carte de la végétation
[Termes IGN] classification orientée objet
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] filtre de Gabor
[Termes IGN] image hyperspectrale
[Termes IGN] matrice de co-occurrence
[Termes IGN] niveau de gris (image)
[Termes IGN] texture d'image
[Termes IGN] transformation en ondelettes
[Termes IGN] Turquie
Abstract : (author) Land use and land cover (LULC) mapping is one of the most important application areas of remote sensing, and it requires both high spectral and high spatial resolution to reduce the spectral ambiguity of different land cover types. Airborne hyperspectral images suit such applications particularly well because of their large number of spectral bands and their ability to resolve small details in the field. Because this technology is relatively new, most image processing methods were developed for medium-resolution sensors and cannot cope with high-resolution images. Therefore, this study proposes a new framework to improve the classification accuracy of land use/cover mapping applications and to achieve greater reliability when producing land use maps from high-resolution hyperspectral image data. To this end, spatial information is incorporated alongside spectral information by applying feature extraction methods such as the Grey Level Co-occurrence Matrix (GLCM), Gabor filters and Morphological Attribute Profiles (MAP) to the dimensionally reduced image that gave the highest accuracy. Machine learning algorithms such as Random Forest (RF) and Support Vector Machine (SVM) are then used to investigate the contribution of texture information to the classification of high-resolution hyperspectral images. In addition, further analysis is conducted with object-based RF classification to investigate the contribution of contextual information. Finally, overall accuracy, producer's/user's accuracy, quantity- and allocation-based disagreements, and location- and quantity-based kappa agreements are calculated, together with McNemar tests, for the accuracy assessment. According to the results, the proposed framework, which incorporates Gabor texture information and uses a Discrete Wavelet Transform (DWT) based dimensionality reduction method, increases the overall classification accuracy by up to 9%. Among individual classes, Gabor features improved the producer's accuracies of the classes (soil, road, vegetation, building and shadow) by 7%, 6%, 6%, 8%, 9%, and 24% respectively. In addition, increases of 17% and 10% in user's accuracy were obtained with the MAP (area) feature for the road and shadow classes respectively. Moreover, object-based classification raised the overall accuracy of the pixel-based classification by a further 1.07%: producer's accuracy increased by 2% to 4% for the soil, vegetation and building classes, and user's accuracy by 1% to 3% for the soil, road, vegetation and shadow classes. In the end, an accurate LULC map is produced by object-based RF classification of the Gabor-feature-augmented airborne hyperspectral image, dimensionally reduced with the DWT method.
Record number : A2022-729
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1080/10106049.2021.1944453
Online publication date : 09/11/2021
Online : https://doi.org/10.1080/10106049.2021.1944453
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101675
in Geocarto international > vol 37 n° 22 [10/10/2022] . - pp 6643 - 6670
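As an illustration of the texture-plus-spectral recipe the abstract describes, the sketch below computes GLCM statistics and Gabor responses with scikit-image and hands them to a scikit-learn Random Forest. The window size, GLCM distances/angles and Gabor frequencies are illustrative assumptions, not the study's settings.

```python
# Illustrative texture features for hyperspectral classification: GLCM
# statistics and Gabor responses, classified with a Random Forest.
# All parameter values are assumptions for the sketch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def glcm_features(window_u8):
    """Four GLCM statistics of one uint8 grey-level window."""
    g = graycomatrix(window_u8, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    return [graycoprops(g, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

def gabor_features(window_u8):
    """Mean real-part Gabor responses at a few assumed frequencies."""
    img = window_u8.astype(float)
    return [gabor(img, frequency=f)[0].mean() for f in (0.1, 0.2, 0.4)]

def feature_vector(window_u8):
    """Texture descriptor for one window; spectral means would be appended."""
    return np.array(glcm_features(window_u8) + gabor_features(window_u8))

# rf.fit(X, y) would take one such vector per labelled sample; extracting the
# windows from the dimensionally reduced hyperspectral band is omitted here.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
```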
Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope / V.S. Martins in Remote sensing of environment, vol 280 (October 2022)
[article]
Title : Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope
Document type : Article/Communication
Authors : V.S. Martins ; D.P. Roy ; H. Huang ; et al.
Publication year : 2022
Pages : n° 113203
General note : bibliography
Language : English (eng)
Descriptors : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] Afrique (géographie politique)
[Termes IGN] apprentissage profond
[Termes IGN] carte thématique
[Termes IGN] cartographie automatique
[Termes IGN] correction radiométrique
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] forêt tropicale
[Termes IGN] image Landsat-OLI
[Termes IGN] image PlanetScope
[Termes IGN] incendie
[Termes IGN] précision de la classification
[Termes IGN] régression
[Termes IGN] savane
Abstract : (author) High spatial resolution commercial satellite data provide new opportunities for terrestrial monitoring. The recent availability of near-daily 3 m observations provided by the PlanetScope constellation enables mapping of small and spatially fragmented burns that are not detected at coarser spatial resolution. This study demonstrates, for the first time, the potential for automated PlanetScope 3 m burned area mapping. The PlanetScope sensors have no onboard calibration or short-wave infrared bands and have variable overpass times, making them challenging to use for large-area, automated burned area mapping. To help overcome these issues, a U-Net deep learning algorithm was developed to classify burned areas from two-date PlanetScope 3 m image pairs acquired at the same location. The deep learning approach, unlike conventional burned area mapping algorithms, is applied to image spatial subsets rather than single pixels and so incorporates spatial as well as spectral information. Deep learning requires large amounts of training data. Consequently, transfer learning was undertaken using pre-existing Landsat-8 derived burned area reference data to train the U-Net, which was then refined with a smaller set of PlanetScope training data. Results are presented for Africa, considering 659 radiometrically normalized PlanetScope image pairs sensed one day apart in 2019. The U-Net was first trained with different numbers of randomly selected 256 × 256 30 m pixel patches extracted from 92 pre-existing Landsat-8 burned area reference data sets defined for 2014 and 2015. The U-Net trained with 300,000 Landsat patches provided about 13% 30 m burn omission and commission errors with respect to 65,000 independent 30 m evaluation patches. The U-Net was then refined by training on 5,000 256 × 256 3 m patches extracted from independently interpreted PlanetScope burned area reference data. Qualitatively, the refined U-Net was able to more precisely delineate 3 m burn boundaries, including the interiors of unburned areas, and to better classify “faint” burned areas indicative of low combustion completeness and/or sparse burns. The refined U-Net 3 m classification accuracy was assessed with respect to 20 independently interpreted PlanetScope burned area reference data sets, composed of 339.4 million 3 m pixels, with low commission (12.29%) and omission (12.09%) errors. The dependency of the U-Net classification accuracy on the burned area proportion within 3 m pixel 256 × 256 patches was also examined, and patches
Record number : A2022-774
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
DOI : 10.1016/j.rse.2022.113203
Online publication date : 08/08/2022
Online : https://doi.org/10.1016/j.rse.2022.113203
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101802
in Remote sensing of environment > vol 280 (October 2022) . - n° 113203
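A minimal sketch of the transfer-learning step, assuming Keras: pre-train a U-Net on the large Landsat-8 patch archive, then fine-tune the same weights on the small PlanetScope set at a reduced learning rate. The toy two-level U-Net, the channel count and the learning rates below are placeholders, not the authors' network.

```python
# Toy U-Net and Landsat-to-PlanetScope fine-tuning outline. Architecture depth,
# channel counts and learning rates are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import Model, layers

def tiny_unet(n_channels):
    """A two-level U-Net mapping a two-date image stack to burn probability."""
    inp = layers.Input(shape=(256, 256, n_channels))
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    u1 = layers.UpSampling2D()(c2)
    m = layers.Concatenate()([u1, c1])  # skip connection, the U-Net hallmark
    c3 = layers.Conv2D(16, 3, activation="relu", padding="same")(m)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # per-pixel burn prob.
    return Model(inp, out)

unet = tiny_unet(n_channels=8)  # e.g. 4 bands x 2 dates, stacked (assumed)

# 1) pre-train on the large Landsat-8 patch archive
unet.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
             loss="binary_crossentropy")
# unet.fit(landsat_pairs, landsat_burn_masks, ...)

# 2) fine-tune the same weights on the small PlanetScope set, lower rate
unet.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
             loss="binary_crossentropy")
# unet.fit(planetscope_pairs, planetscope_burn_masks, ...)
```

Because the network is fully convolutional, the same weights apply to 30 m Landsat and 3 m PlanetScope patches of equal pixel size, which is what makes this kind of cross-sensor refinement possible.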
Evaluation of Landsat 8 image pansharpening in estimating soil organic matter using multiple linear regression and artificial neural networks / Abdelkrim Bouasria in Geo-spatial Information Science, vol 25 n° 3 (October 2022)
[article]
Title : Evaluation of Landsat 8 image pansharpening in estimating soil organic matter using multiple linear regression and artificial neural networks
Document type : Article/Communication
Authors : Abdelkrim Bouasria ; Khalid Ibno Namra ; Abdelmejid Rahimi ; et al.
Publication year : 2022
Pages : pp 353 - 364
General note : bibliography
Language : English (eng)
Descriptors : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] état du sol
[Termes IGN] image Landsat-OLI
[Termes IGN] image panchromatique
[Termes IGN] Maroc
[Termes IGN] matière organique
[Termes IGN] modèle de simulation
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] Perceptron multicouche
[Termes IGN] régression multiple
[Termes IGN] réseau neuronal artificiel
Abstract : (author) In agricultural systems, regular monitoring of Soil Organic Matter (SOM) dynamics is essential. This task is costly and time-consuming with conventional methods, especially in a highly fragmented and intensively farmed area such as that of Sidi Bennour. The study area is located in the Doukkala irrigated perimeter in Morocco. Satellite data can provide an alternative and fill this gap at low cost. Models to predict SOM from satellite imagery, whether linear or nonlinear, have attracted considerable interest. This study aims to compare SOM prediction using Multiple Linear Regression (MLR) and Artificial Neural Networks (ANN). A total of 368 points were collected at a depth of 0–30 cm and analyzed in the laboratory. An image at 15 m resolution (MSPAN) was produced from a 30 m resolution (MS) Landsat-8 image by pansharpening with the 15 m panchromatic band. The results show that the MLR models predicted SOM with (training/validation) R2 values of 0.62/0.63 and 0.64/0.65 and RMSE values of 0.23/0.22 and 0.22/0.21 for the MS and MSPAN images, respectively. In contrast, the ANN models predicted SOM with R2 values of 0.65/0.66 and 0.69/0.71 and RMSE values of 0.22/0.10 and 0.21/0.18 for the MS and MSPAN images, respectively. Image pansharpening improved the prediction accuracy by 2.60% and 4.30% and reduced the estimation error by 0.80% and 1.30% for the MLR and ANN models, respectively.
Record number : A2022-722
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
DOI : 10.1080/10095020.2022.2026743
Online publication date : 15/02/2022
Online : https://doi.org/10.1080/10095020.2022.2026743
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101665
in Geo-spatial Information Science > vol 25 n° 3 (October 2022) . - pp 353 - 364
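To show the shape of the MLR-versus-ANN comparison, here is a self-contained scikit-learn sketch that scores both models on two predictor sets, as the study does for MS and MSPAN. The random arrays merely stand in for the 368 field samples and their band values; the network size and train/validation split are assumptions.

```python
# Compare multiple linear regression and a small neural network for SOM
# prediction on two predictor sets (MS vs. pansharpened MSPAN). The random
# arrays are placeholders for the 368 field samples; sizes are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_ms = rng.random((368, 7))     # placeholder 30 m band values at sample points
X_mspan = rng.random((368, 7))  # placeholder 15 m pansharpened band values
y = rng.random(368)             # placeholder laboratory SOM values

def evaluate(X, y, model):
    """Fit on a training split and return validation R^2 and RMSE."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_va)
    return r2_score(y_va, pred), mean_squared_error(y_va, pred) ** 0.5

for name, X in (("MS", X_ms), ("MSPAN", X_mspan)):
    print(name, "MLR:", evaluate(X, y, LinearRegression()))
    print(name, "ANN:", evaluate(X, y,
                                 MLPRegressor(hidden_layer_sizes=(16,),
                                              max_iter=5000, random_state=0)))
```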
Investigation of recognition and classification of forest fires based on fusion color and textural features of images / Cong Li in Forests, vol 13 n° 10 (October 2022)
[article]
Title : Investigation of recognition and classification of forest fires based on fusion color and textural features of images
Document type : Article/Communication
Authors : Cong Li ; Qiang Liu ; Binrui Li ; et al.
Publication year : 2022
Pages : n° 1719
General note : bibliography
Language : English (eng)
Descriptors : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse texturale
[Termes IGN] base de données d'images
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image RVB
[Termes IGN] incendie de forêt
[Termes IGN] matrice de co-occurrence
[Termes IGN] motif binaire local
[Termes IGN] niveau de gris (image)
Abstract : (author) An image recognition and classification method based on fused color and textural features was studied. First, the suspected forest fire region was segmented using the fused RGB and YCbCr color spaces. Then, 10 kinds of textural features were extracted from the suspected fire region with a local binary pattern (LBP) algorithm and 4 kinds with a gray-level co-occurrence matrix (GLCM) algorithm. For the application, a database of forest fire textural feature vectors was constructed for three scene types: forest images without fire, forest images with fire, and forest images with fire-like interference. The presence of forest fire can then be recognized against this database with a support vector machine (SVM). The results showed that the method's recognition rate for forest fires reached 93.15% and that it was robust in distinguishing fire-like interference, providing a more effective scheme for forest fire recognition.
Record number : A2022-834
Authors' affiliation : non IGN
Theme : FOREST/IMAGERY
Nature : Article
DOI : 10.3390/f13101719
Online publication date : 18/10/2022
Online : https://doi.org/10.3390/f13101719
Electronic resource format : article URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102030
in Forests > vol 13 n° 10 (October 2022) . - n° 1719
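As a sketch of this pipeline, the code below builds a 14-dimensional descriptor per region, one plausible reading of the "10 LBP + 4 GLCM" features (a uniform LBP with P=8 yields exactly 10 pattern codes), and classifies regions with an SVM. The LBP/GLCM parameters and the three-class coding are assumptions, not the paper's settings.

```python
# One plausible reading of the 10 LBP + 4 GLCM feature vector, classified with
# an SVM. LBP/GLCM parameters and the class coding are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def texture_vector(region_u8):
    """14 texture features for one uint8 grey-level fire-candidate region."""
    # uniform LBP with P=8, R=1 yields 10 pattern codes -> 10-bin histogram
    lbp = local_binary_pattern(region_u8, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # four GLCM statistics, matching the count given in the abstract
    g = graycomatrix(region_u8, distances=[1], angles=[0],
                     levels=256, symmetric=True, normed=True)
    glcm = [graycoprops(g, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hist, glcm])

# assumed class coding: 0 = no fire, 1 = fire, 2 = fire-like interference
svm = SVC(kernel="rbf")
# X = np.stack([texture_vector(r) for r in segmented_regions])
# svm.fit(X, labels); svm.predict(texture_vector(new_region)[None, :])
```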
Potential and limitation of PlanetScope images for 2-D and 3-D Earth surface monitoring with example of applications to glaciers and earthquakes / Saif Aati in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
A relation-augmented embedded graph attention network for remote sensing object detection / Shu Tian in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
Semi-supervised adversarial recognition of refined window structures for inverse procedural façade modelling / Han Hu in ISPRS Journal of photogrammetry and remote sensing, vol 192 (October 2022)
Single-image super-resolution for remote sensing images using a deep generative adversarial network with local and global attention mechanisms / Yadong Li in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
The iterative convolution–thresholding method (ICTM) for image segmentation / Dong Wang in Pattern recognition, vol 130 (October 2022)
Comparing Landsat-8 and Sentinel-2 top of atmosphere and surface reflectance in high latitude regions: case study in Alaska / Jiang Chen in Geocarto international, vol 37 n° 20 ([20/09/2022])
The FIRST model: Spatiotemporal fusion incorporating spectral autocorrelation / Shuaijun Liu in Remote sensing of environment, vol 279 (September-15 2022)
Analytical method for high-precision seabed surface modelling combining B-spline functions and Fourier series / Tyler Susa in Marine geodesy, vol 45 n° 5 (September 2022)
Crowdsourcing-based application to solve the problem of insufficient training data in deep learning-based classification of satellite images / Ekrem Saralioglu in Geocarto international, vol 37 n° 18 ([01/09/2022])
Deep image deblurring: A survey / Kaihao Zhang in International journal of computer vision, vol 130 n° 9 (September 2022)