Descriptor
Documents available in this category (169)
Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image / Taposh Mollick in Remote Sensing Applications: Society and Environment, RSASE, vol 29 (January 2023)
[article]
Title: Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image
Document type: Article/Communication
Authors: Taposh Mollick, Author; MD Golam Azam, Author; Sabrina Karim, Author
Publication year: 2023
Pagination: n° 100859
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] comparative analysis
[Termes IGN] object-based image analysis
[Termes IGN] machine learning
[Termes IGN] Bangladesh
[Termes IGN] unsupervised classification
[Termes IGN] maximum likelihood classification
[Termes IGN] k-means clustering
[Termes IGN] pixel-based classification
[Termes IGN] UAV-captured image
[Termes IGN] multiband image
[Termes IGN] land cover
[Termes IGN] agricultural yield
[Termes IGN] image segmentation
[Termes IGN] land use
Abstract: (author) Bangladesh is primarily an agricultural country, where technological advancement in the agricultural sector can accelerate economic growth and ensure long-term food security. This research was conducted in the south-western coastal zone of Bangladesh, where rice is the main crop and other crops are also grown. Land use and land cover (LULC) classification using remote sensing techniques, such as satellite or unmanned aerial vehicle (UAV) images, can forecast crop yield and can also provide information on weeds, nutrient deficiencies, diseases, etc. to monitor and treat the crops. Depending on the reflectance received by the sensor, remotely sensed images store a digital number (DN) for each pixel. Traditionally, these pixel values have been used to separate clusters and classify various objects. However, this frequently generates discontinuity within a single land cover, producing small spurious objects that degrade the classification output; this is called the salt-and-pepper effect. To classify land cover based on texture, shape, and neighbors, Pixel-Based Image Analysis (PBIA) and Object-Based Image Analysis (OBIA) methods use digital image classification algorithms such as Maximum Likelihood (ML), K-Nearest Neighbors (KNN), and k-means clustering to smooth this discontinuity. The authors evaluated the accuracy of both the PBIA and OBIA approaches by classifying the land cover of an agricultural field, taking into consideration the development of UAV technology and enhanced image resolution. For classifying multispectral UAV images, we used the KNN machine learning algorithm for object-based supervised classification and Maximum Likelihood (ML) classification (parametric) for pixel-based supervised classification, while for unsupervised pixel-based classification we used the K-means clustering technique.
For image analysis, the Near-infrared (NIR), Red (R), Green (G), and Blue (B) bands of a UAV image with a high-resolution ground sampling distance (GSD) of 0.0125 m were used. The study found that OBIA was 21% more accurate than PBIA, reaching 94.9% overall accuracy; in terms of the Kappa statistic, OBIA was 27% more accurate than PBIA, reaching 93.4%. This indicates that OBIA provides better classification performance than PBIA for high-resolution UAV images. The study therefore recommends OBIA for more accurate identification of crop types and land cover, which will make crop management, agricultural monitoring, and crop yield forecasting more effective.
Record number: A2023-021
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.rsase.2022.100859
Online publication date: 22/11/2022
Online: https://doi.org/10.1016/j.rsase.2022.100859
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102224
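The unsupervised branch of the comparison (K-means clustering on pixel spectra) can be sketched with a minimal NumPy implementation. The 4-band toy image, cluster count, and iteration budget below are illustrative assumptions, not the authors' data or code:

```python
import numpy as np

def kmeans_pixels(image, k=3, n_iter=20, seed=0):
    """Cluster the pixels of an (H, W, B) multiband image into k spectral classes."""
    h, w, b = image.shape
    pixels = image.reshape(-1, b).astype(float)
    rng = np.random.default_rng(seed)
    # Initialise centroids from randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centroid (Euclidean distance in band space).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)

# Toy 4-band image with two spectrally distinct halves (hypothetical values).
img = np.zeros((10, 10, 4))
img[:, :5] = [200, 60, 40, 30]   # e.g. a vegetation-like spectral signature
img[:, 5:] = [20, 80, 90, 100]   # e.g. a water/soil-like spectral signature
classes = kmeans_pixels(img, k=2)
```

On real data the result is a per-pixel class map; the salt-and-pepper effect the abstract describes shows up exactly here, since each pixel is labeled independently of its neighbors.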
in Remote Sensing Applications: Society and Environment, RSASE > vol 29 (January 2023) . - n° 100859 [article]
Change alignment-based image transformation for unsupervised heterogeneous change detection / Kuowei Xiao in Remote sensing, vol 14 n° 21 (November-1 2022)
[article]
Title: Change alignment-based image transformation for unsupervised heterogeneous change detection
Document type: Article/Communication
Authors: Kuowei Xiao, Author; Yuli Sun, Author; Lin Lei, Author
Publication year: 2022
Pagination: n° 5622
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] alignment
[Termes IGN] unsupervised classification
[Termes IGN] convolutional neural network classification
[Termes IGN] image decomposition
[Termes IGN] change detection
[Termes IGN] heterogeneous data
[Termes IGN] mask
Abstract: (author) Change detection (CD) with heterogeneous images is currently attracting extensive attention in remote sensing. To make heterogeneous images comparable, image transformation methods transform one image into the domain of the other, which simultaneously yields a forward difference map (FDM) and a backward difference map (BDM). However, previous methods only fuse the FDM and BDM in the post-processing stage, which cannot fundamentally improve the performance of CD. In this paper, a change alignment-based change detection (CACD) framework for unsupervised heterogeneous CD is proposed to deeply exploit the complementary information of the FDM and BDM during the image transformation process, which enhances the effect of domain transformation and thus improves CD performance. To reduce the dependence of the transformation network on labeled samples, we propose a graph structure-based strategy of generating prior masks to guide the network, which can reduce the influence of changing regions on the transformation network in an unsupervised way. More importantly, based on the fact that the FDM and BDM represent the same change event, we perform change alignment during the image transformation, which enhances the transformation effect and enables the FDM and BDM to effectively indicate the real change region. Comparative experiments are conducted with six state-of-the-art methods on five heterogeneous CD datasets, showing that the proposed CACD achieves the best performance, with an average overall accuracy (OA) of 95.9% across datasets and at least a 6.8% improvement in the kappa coefficient.
Record number: A2022-855
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs14215622
Online publication date: 07/11/2022
Online: https://doi.org/10.3390/rs14215622
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102103
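The post-processing fusion baseline that this paper improves on (combining the FDM and BDM only after transformation) can be sketched as follows; the rescaling, equal weighting, and threshold are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def fuse_difference_maps(fdm, bdm, threshold=0.5):
    """Combine a forward and a backward difference map into one change mask.

    Both maps are rescaled to [0, 1] so they are comparable, then averaged;
    pixels whose fused difference exceeds the threshold are flagged as changed.
    This is the simple post-hoc fusion that change alignment replaces by
    coupling the two maps during the transformation itself.
    """
    def rescale(m):
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m)

    fused = 0.5 * (rescale(fdm) + rescale(bdm))
    return fused > threshold

# Toy 2x2 difference maps: the right column shows strong change in both directions.
fdm = np.array([[0.0, 0.9], [0.1, 0.8]])
bdm = np.array([[0.1, 0.7], [0.0, 0.9]])
mask = fuse_difference_maps(fdm, bdm)
```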
in Remote sensing > vol 14 n° 21 (November-1 2022) . - n° 5622 [article]
Unsupervised multi-view CNN for salient view selection and 3D interest point detection / Ran Song in International journal of computer vision, vol 130 n° 5 (May 2022)
[article]
Title: Unsupervised multi-view CNN for salient view selection and 3D interest point detection
Document type: Article/Communication
Authors: Ran Song, Author; Wei Zhang, Author; Yitian Zhao, Author; et al.
Publication year: 2022
Pagination: pp 1210 - 1227
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] deep learning
[Termes IGN] unsupervised classification
[Termes IGN] convolutional neural network classification
[Termes IGN] object detection
[Termes IGN] 3D object
[Termes IGN] interest point
[Termes IGN] saliency
Abstract: (author) We present an unsupervised 3D deep learning framework based on a ubiquitously true proposition that we name view-object consistency: a 3D object and its projected 2D views always belong to the same object class. To validate its effectiveness, we design a multi-view CNN instantiating it for salient view selection and interest point detection on 3D objects, tasks that essentially cannot be handled by supervised learning due to the difficulty of collecting sufficient and consistent training data. Our unsupervised multi-view CNN, namely UMVCNN, branches off into two channels that encode the knowledge within each 2D view and the 3D object respectively, and exploits both intra-view and inter-view knowledge of the object. It ends with a new loss layer that formulates the view-object consistency by impelling the two channels to generate consistent classification outcomes. The UMVCNN is then integrated with a global distinction adjustment scheme to incorporate global cues into salient view selection. We evaluate our method for salient view selection both qualitatively and quantitatively, demonstrating its superiority over several state-of-the-art methods. In addition, we show that our method can select salient views of 3D scenes containing multiple objects. We also develop a UMVCNN-based method for 3D interest point detection and conduct comparative evaluations on a publicly available benchmark, which shows that the UMVCNN is amenable to different 3D shape understanding tasks.
Record number: A2022-415
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-022-01592-x
Online publication date: 16/03/2022
Online: https://doi.org/10.1007/s11263-022-01592-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100771
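The idea of the consistency loss layer (penalising disagreement between the per-view and per-object classification outputs) can be sketched generically; the symmetric KL formulation below is one plausible way to express it and is not claimed to be the UMVCNN's exact loss:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def view_object_consistency_loss(view_logits, object_logits, eps=1e-9):
    """Penalise disagreement between the 2D-view channel's and the 3D-object
    channel's class distributions with a symmetric KL divergence. Driving this
    loss to zero impels both channels toward the same classification outcome,
    which is the view-object consistency idea in loss form."""
    p = softmax(view_logits)
    q = softmax(object_logits)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)), axis=-1)
    return float(np.mean(kl_pq + kl_qp))

# Agreeing channels give a small loss; disagreeing channels give a large one.
agree = view_object_consistency_loss(np.array([2.0, 0.1, 0.1]),
                                     np.array([2.1, 0.0, 0.2]))
disagree = view_object_consistency_loss(np.array([2.0, 0.1, 0.1]),
                                        np.array([0.1, 2.0, 0.1]))
```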
in International journal of computer vision > vol 130 n° 5 (May 2022) . - pp 1210 - 1227 [article]
Automatic extraction of building geometries based on centroid clustering and contour analysis on oblique images taken by unmanned aerial vehicles / Leilei Zhang in International journal of geographical information science IJGIS, vol 36 n° 3 (March 2022)
[article]
Title: Automatic extraction of building geometries based on centroid clustering and contour analysis on oblique images taken by unmanned aerial vehicles
Document type: Article/Communication
Authors: Leilei Zhang, Author; Guoxin Wang, Author; Weijian Sun, Author
Publication year: 2022
Pagination: pp 453 - 475
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] cluster analysis
[Termes IGN] centroid-based classification
[Termes IGN] unsupervised classification
[Termes IGN] contour detection
[Termes IGN] building detection
[Termes IGN] automatic extraction
[Termes IGN] UAV-captured image
[Termes IGN] oblique image
[Termes IGN] digital surface model
[Termes IGN] orthophotomap
[Termes IGN] geometric accuracy (imagery)
Abstract: (author) This paper introduces a method based on centroid clustering and contour analysis to extract area and height measurements of buildings from the 3D model generated from oblique images. The method comprises three steps: (1) extract the contour plane from the fused data of the digital surface model (DSM) and digital orthophoto map (DOM); (2) identify building contour clusters based on the number of centroids contained in each category determined by mean-shift centroid clustering; (3) remove the mis-identified contours in a given building contour cluster by contour analysis and obtain the geometric information of the building using map algebra. The proposed approach was tested against four datasets. Compared with other results, the detection achieves effective completeness, correctness, and quality, with higher geometric accuracy. The maximum average relative error of building height and area extraction is less than 8%. The method is fast for large-scale collection of building attributes and improves the applicability of oblique photography in GIS.
Record number: A2022-205
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1937632
Online publication date: 14/06/2021
Online: https://doi.org/10.1080/13658816.2021.1937632
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100020
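Step (2), mean-shift clustering of contour centroids, can be sketched with a minimal flat-kernel implementation; the 2D toy centroids and the bandwidth value are illustrative assumptions, not the paper's data:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=30):
    """Shift every point toward the mean of its neighbours within `bandwidth`
    until the modes stabilise, then group points that share a mode."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            # Neighbours of the current (shifted) position among the original points.
            neighbours = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            shifted[i] = neighbours.mean(axis=0)
    # Points whose converged modes coincide (within tolerance) form one cluster.
    labels = np.full(len(points), -1)
    modes = []
    for i, m in enumerate(shifted):
        for j, ref in enumerate(modes):
            if np.linalg.norm(m - ref) < bandwidth / 2:
                labels[i] = j
                break
        else:
            modes.append(m)
            labels[i] = len(modes) - 1
    return labels

# Two spatially separated groups of building-contour centroids (toy values).
centroids = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                      [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]])
labels = mean_shift(centroids, bandwidth=1.0)
```

In the paper's pipeline, the number of centroids falling in each resulting category is then used to identify which contour clusters correspond to buildings.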
in International journal of geographical information science IJGIS > vol 36 n° 3 (March 2022) . - pp 453 - 475 [article]
Copies (1): barcode 079-2022031, call number SL, type Revue, Centre de documentation, Revues en salle, Available
Neural map style transfer exploration with GANs / Sidonie Christophe in International journal of cartography, vol 8 n° 1 (March 2022)
[article]
Title: Neural map style transfer exploration with GANs
Document type: Article/Communication
Authors: Sidonie Christophe, Author; Samuel Mermet, Author; Morgan Laurent, Author; Guillaume Touya, Author
Publication year: 2022
Projects: 1-No project
Pagination: pp 18 - 36
General note: bibliography
Languages: English (eng)
Descriptor: [Termes IGN] deep learning
[Termes IGN] unsupervised classification
[Termes IGN] convolutional neural network classification
[Termes IGN] training data (machine learning)
[Termes IGN] sampling grid
[Termes IGN] orthoimage
[Termes IGN] cartographic representation
[Termes IGN] generative adversarial network
[Termes IGN] cartographic style
[Termes IGN] cartographic visualization
[Vedettes matières IGN] Geovisualization
Abstract: (author) Neural style transfer is a computer vision topic that aims to transfer the visual appearance, or style, of images to other images. Developments in deep learning can generate convincing stylized images from texture-based examples or transfer the style of one photograph to another. In map design, style is a multi-dimensional, complex problem related to recognizable visually salient features and topological arrangements, supporting the description of geographic spaces at a specific scale. Map style transfer remains an open problem for generating a diversity of possible new styles to render geographical features. Generative Adversarial Network (GAN) techniques, which support image-to-image translation tasks well, offer new perspectives for map style transfer. We propose to use accessible GAN architectures to experiment with and assess neural map style transfer to ortho-images, using different map designs of various geographic spaces, from simple-styled (Plan maps) to complex-styled (old Cassini, Etat-Major, or Scan50 B&W). This transfer task and our global protocol are presented, including the sampling grid, the training and testing of Pix2Pix and CycleGAN models, and the perceptual assessment of the generated outputs. Promising results are discussed, opening research issues for neural map style transfer exploration with GANs.
Record number: A2022-172
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/23729333.2022.2031554
Online publication date: 13/02/2022
Online: https://doi.org/10.1080/23729333.2022.2031554
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99807
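The sampling-grid step of such a protocol (cutting co-registered ortho-image and map rasters into fixed-size patches for Pix2Pix/CycleGAN training) can be sketched as follows; the tile size and stride are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def sampling_grid(image, tile=256, stride=256):
    """Cut an (H, W, C) raster into fixed-size tiles along a regular grid,
    dropping incomplete border tiles. Applying the same grid to an ortho-image
    and to the map rendered over the same extent yields aligned patch pairs
    for image-to-image translation training."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(image[y:y + tile, x:x + tile])
    return np.stack(tiles)

# A 600x520 toy raster yields a 2x2 grid of complete 256-pixel tiles.
raster = np.zeros((600, 520, 3))
patches = sampling_grid(raster, tile=256, stride=256)
```

A stride smaller than the tile size would produce overlapping patches, a common way to enlarge the training set.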
in International journal of cartography > vol 8 n° 1 (March 2022) . - pp 18 - 36 [article]
Probabilistic unsupervised classification for large-scale analysis of spectral imaging data / Emmanuel Paradis in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
Apprentissage de représentations et modèles génératifs profonds dans les systèmes dynamiques [Representation learning and deep generative models in dynamical systems] / Jean-Yves Franceschi (2022)
Deep image translation with an affinity-based change prior for unsupervised multimodal change detection / Luigi Tommaso Luppino in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
Flexible Gabor-based superpixel-level unsupervised LDA for hyperspectral image classification / Sen Jia in IEEE Transactions on geoscience and remote sensing, vol 59 n° 12 (December 2021)
A feature based change detection approach using multi-scale orientation for multi-temporal SAR images / R. Vijaya Geetha in European journal of remote sensing, vol 54 sup 2 (2021)
Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification / Ming Cong in Geocarto international, vol 36 n° 18 ([01/10/2021])
Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network / Fengpeng Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 8 (August 2021)
Unsupervised denoising for satellite imagery using wavelet directional cycleGAN / Shaoyang Kong in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
Comparison of classification methods for urban green space extraction using very high resolution worldview-3 imagery / S. Vigneshwaran in Geocarto international, vol 36 n° 13 ([15/07/2021])