Descriptor
Documents available in this category (9742)
Building footprint extraction in Yangon city from monocular optical satellite image using deep learning / Hein Thura Aung in Geocarto international, vol 37 n° 3 ([01/02/2022])
[article]
Title: Building footprint extraction in Yangon city from monocular optical satellite image using deep learning
Document type: Article/Communication
Authors: Hein Thura Aung, Author; Sao Hone Pha, Author; Wataru Takeuchi, Author
Publication year: 2022
Pagination: pp 792 - 812
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] Myanmar
[IGN terms] building detection
[IGN terms] footprint
[IGN terms] GeoEye image
[IGN terms] single image
[IGN terms] generative adversarial network
[IGN terms] monocular vision
Abstract: (author) In this research, building footprints in Yangon City, Myanmar are extracted from a single monocular optical satellite image using a conditional generative adversarial network (CGAN). Both the training and validation datasets are created from a GeoEye image of Dagon Township in Yangon City. Eight training models are created by varying three training parameters: the learning rate, the β1 term of Adam, and the number of filters in the first convolution layer of the generator and the discriminator. The images of the validation dataset are divided into four groups: trees, buildings, mixed trees and buildings, and pagodas. The output images of the eight trained models are converted to vector form and then evaluated against manually digitized polygons using completeness, correctness and the F1 measure. The results show that, using a CGAN, building footprints can be extracted from a single monocular optical satellite image with up to 71% completeness, 81% correctness and a 69% F1 score.
Record number: A2022-345
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1740949
Online publication date: 20/03/2020
Online: https://doi.org/10.1080/10106049.2020.1740949
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100526
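The completeness/correctness/F1 evaluation described in this abstract can be sketched as follows. This is a minimal illustration of the standard metrics, not the authors' code, and the polygon counts below are hypothetical:

```python
def footprint_metrics(tp: int, fp: int, fn: int):
    """Completeness (recall), correctness (precision) and F1, computed from
    matched (tp), spurious (fp) and missed (fn) footprint polygons."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    f1 = 2 * completeness * correctness / (completeness + correctness)
    return completeness, correctness, f1

# e.g. 71 matched, 17 spurious and 29 missed polygons (made-up counts)
c, r, f = footprint_metrics(71, 17, 29)
```

Completeness penalizes missed buildings, correctness penalizes false extractions, and F1 balances the two, which is why all three are reported together.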
in Geocarto international > vol 37 n° 3 [01/02/2022] . - pp 792 - 812 [article]

A combination of convolutional and graph neural networks for regularized road surface extraction / Jingjing Yan in IEEE Transactions on geoscience and remote sensing, vol 60 n° 2 (February 2022)
[article]
Title: A combination of convolutional and graph neural networks for regularized road surface extraction
Document type: Article/Communication
Authors: Jingjing Yan, Author; Shunping Ji, Author; Yao Wei, Author
Publication year: 2022
Pagination: n° 4409113
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] Bavaria (Germany)
[IGN terms] convolutional neural network classification
[IGN terms] edge detection
[IGN terms] feature extraction
[IGN terms] road network extraction
[IGN terms] aerial image
[IGN terms] dataset
[IGN terms] optimization (mathematics)
[IGN terms] regression
[IGN terms] graph neural network
[IGN terms] Wuhan (China)
Abstract: (author) Road surface extraction from high-resolution remote sensing images has many engineering applications; however, extracting regularized and smooth road surface maps that reach the human delineation level is a very challenging task, and substantial, time-consuming manual work is usually unavoidable. In this article, to solve this problem, we propose a novel regularized road surface extraction framework by introducing a graph neural network (GNN) for processing the road graph that is preconstructed from the easily accessible road centerlines. The proposed framework formulates the road surface extraction problem as two-sided width inference on the road graph and consists of a convolutional neural network (CNN)-based feature extractor and a GNN model for vertex attribute adjustment. The CNN extracts the high-level abstract features of each vertex in the graph as the input of the GNN, as well as the road boundary features that allow us to distinguish roads from the background. The GNN propagates and aggregates the features of the vertices in the graph to achieve global optimization of the regression of the regularized widths of the vertices. At the same time, a biased centerline map can also be corrected based on the width prediction result. To the best of the authors' knowledge, this is the first study to introduce a GNN for regularized, human-level road surface extraction. The proposed method was evaluated on four diverse datasets, and the results show that it comprehensively outperforms recent CNN-based segmentation methods and other regularization methods in intersection over union (IoU) and smoothness score; a visual check shows that a majority of its predictions approach the human delineation level.
Record number: A2022-297
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3151688
Online publication date: 15/02/2022
Online: https://doi.org/10.1109/TGRS.2022.3151688
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100355
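The core idea in this abstract, propagating per-vertex width estimates along the road graph so that neighbouring centerline vertices agree, can be sketched as one round of neighbour averaging. This is a toy illustration of graph propagation, not the authors' GNN; the graph and widths are made up:

```python
import numpy as np

def smooth_widths(widths, adj, alpha=0.5):
    """One propagation step on a road graph: each vertex blends its own
    width estimate with the mean estimate of its neighbours.
    widths: (n,) array; adj: (n, n) 0/1 adjacency matrix."""
    deg = adj.sum(axis=1)
    neighbour_mean = adj @ widths / np.maximum(deg, 1)
    return alpha * widths + (1 - alpha) * neighbour_mean

# a 3-vertex chain where the middle vertex has a noisy width estimate
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
w = np.array([8.0, 12.0, 8.0])
print(smooth_widths(w, adj))  # the outlier at index 1 is pulled toward its neighbours
```

Repeating such steps regularizes widths along the whole road, which is the smoothness effect the paper measures with its smoothness score.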
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 2 (February 2022) . - n° 4409113 [article]

Decision fusion of deep learning and shallow learning for marine oil spill detection / Junfang Yang in Remote sensing, vol 14 n° 3 (February-1 2022)
[article]
Title: Decision fusion of deep learning and shallow learning for marine oil spill detection
Document type: Article/Communication
Authors: Junfang Yang, Author; Yi Ma, Author; Yabin Hu, Author; et al., Author
Publication year: 2022
Pagination: n° 666
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] fusion algorithm
[IGN terms] multiscale analysis
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] hydrocarbon
[IGN terms] hyperspectral image
[IGN terms] oil spill
[IGN terms] marine environment
[IGN terms] marine pollution
[IGN terms] classification accuracy
[IGN terms] fuzzy subset
[IGN terms] ecological monitoring
[IGN terms] wavelet transform
Abstract: (author) Marine oil spills are emergencies of great harm and have become a hot topic in marine environmental monitoring research. Optical remote sensing is an important means of monitoring marine oil spills. Clouds, weather, and light limit the amount of available data, which often constrains feature characterization with a single classifier and therefore makes accurate monitoring of marine oil spills difficult. In this paper, we develop a decision fusion algorithm that integrates deep learning methods and shallow learning methods based on multi-scale features to improve oil spill detection accuracy when samples are limited. Based on the multi-scale features obtained after a wavelet transform, two deep learning methods and two classical shallow learning algorithms are used to extract oil slick information from hyperspectral oil spill images. A decision fusion algorithm based on fuzzy membership degree is introduced to fuse the multi-source oil spill information. The research shows that oil spill detection accuracy using the decision fusion algorithm is higher than that of the single detection algorithms. It is worth noting that oil spill detection accuracy is affected by features at different scales; the decision fusion algorithm applied to the first-level scale features can further improve detection accuracy. The overall classification accuracy of the proposed method is 91.93%, which is 2.03%, 2.15%, 1.32%, and 0.43% higher than that of the SVM, DBN, 1D-CNN, and MRF-CNN algorithms, respectively.
Record number: A2022-125
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs14030666
Online publication date: 30/01/2022
Online: https://doi.org/10.3390/rs14030666
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99688
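The fuzzy-membership fusion step described here, combining per-class membership degrees from several classifiers into one label, can be sketched as follows. This is a simplified illustration, not the paper's algorithm, and the membership values are hypothetical:

```python
def fuse_decisions(memberships):
    """memberships: list of dicts, one per classifier, mapping
    class label -> fuzzy membership degree in [0, 1].
    Sums the degrees per class and returns the winning label."""
    totals = {}
    for m in memberships:
        for label, degree in m.items():
            totals[label] = totals.get(label, 0.0) + degree
    return max(totals, key=totals.get)

# three classifiers score one pixel; fusion favours the consensus class
votes = [{"oil": 0.8, "water": 0.2},
         {"oil": 0.4, "water": 0.6},
         {"oil": 0.7, "water": 0.3}]
print(fuse_decisions(votes))  # -> oil
```

A single weak classifier ("water": 0.6) is outvoted by the aggregate membership, which is the behaviour the paper exploits when single-classifier features are unreliable.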
in Remote sensing > vol 14 n° 3 (February-1 2022) . - n° 666 [article]

Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation / Ramazan Unlu in The Visual Computer, vol 38 n° 2 (February 2022)
[article]
Title: Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation
Document type: Article/Communication
Authors: Ramazan Unlu, Author; Recep Kiriş, Author
Publication year: 2022
Pagination: pp 685 - 694
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] building
[IGN terms] k-means classification
[IGN terms] change detection
[IGN terms] material damage
[IGN terms] labelled training data
[IGN terms] convolutional neural network
[IGN terms] image segmentation
[IGN terms] earthquake
Abstract: (author) Detecting damaged buildings as quickly as possible after an earthquake is important for emergency teams to reach these buildings and save lives. Today, damaged buildings are located after an earthquake through survivors contacting the authorities or by using aircraft such as helicopters. In this study, AI-based systems integrated into street camera systems were tested for detecting damaged or destroyed buildings after unexpected disasters. For this purpose, we used the VGG-16, VGG-19, and NASNet convolutional neural network models, which are often used for image recognition problems in the literature, to detect damaged buildings. To apply these models effectively, we first segmented all the images with the K-means clustering algorithm. In the first phase of this study, segmented images labelled "damaged buildings" and "normal" were classified, and the VGG-19 model was the most successful, with 90% accuracy on the test set. In the second phase, we created a multiclass classification problem by labelling segmented images as "damaged buildings," "less damaged buildings," and "normal." The same three architectures were used to obtain the most accurate classification results on the test set: VGG-19, VGG-16, and NASNet achieved considerable success with about 70%, 67%, and 62% accuracy, respectively.
Record number: A2022-145
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-020-02043-9
Online publication date: 03/01/2022
Online: https://doi.org/10.1007/s00371-020-02043-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100039
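The preprocessing step this abstract describes, segmenting each image with K-means before feeding it to the CNNs, can be sketched on scalar grey-level pixel values. This is a minimal 1-D K-means for illustration only, not the authors' pipeline, and the pixel values are made up:

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar pixel values into k groups with plain K-means:
    assign each value to its nearest center, then recompute centers."""
    # spread the initial centers across the sorted value range
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

pixels = [12, 10, 11, 200, 210, 205]  # e.g. dark rubble vs bright facade
print(sorted(kmeans_1d(pixels)))
```

In the paper's setting the clustering runs on full images in colour space; segmenting first reduces irrelevant texture, so the CNN classifies coarser "damaged vs normal" regions rather than raw pixels.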
in The Visual Computer > vol 38 n° 2 (February 2022) . - pp 685 - 694 [article]

Dynamic modelling of rice leaf area index with quad-source optical imagery and machine learning regression models / Lamin R. Mansaray in Geocarto international, vol 37 n° 3 ([01/02/2022])
[article]
Title: Dynamic modelling of rice leaf area index with quad-source optical imagery and machine learning regression models
Document type: Article/Communication
Authors: Lamin R. Mansaray, Author; Adam Sheka Kanu, Author; Lingbo Yang, Author; et al., Author
Publication year: 2022
Pagination: pp 828 - 840
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] China
[IGN terms] centroid-based classification
[IGN terms] random forest classification
[IGN terms] support vector machine classification
[IGN terms] Extreme Gradient Machine
[IGN terms] Green Leaf Area Index
[IGN terms] Gaofen image
[IGN terms] HJ-1A image
[IGN terms] HJ-1B image
[IGN terms] Landsat-8 image
[IGN terms] Sentinel-MSI image
[IGN terms] leaf area index
[IGN terms] regression model
[IGN terms] paddy field
Abstract: (author) Optical satellite imagery has been widely used to monitor leaf area index (LAI). However, most studies have focussed on single- or dual-source data, making little use of a growing repository of freely available optical imagery. This study therefore evaluated the feasibility of quad-source optical satellite imagery, involving Landsat-8, Sentinel-2A, China's environment satellite constellation (HJ-1 A and B) and Gaofen-1 (GF-1), for modelling rice green LAI over a test site in southeast China across two growing seasons. Applying machine learning regression models including Random Forest (RF), Support Vector Machine (SVM), k-Nearest Neighbour (k-NN) and Gradient Boosting Decision Tree (GBDT), the results indicated that regression models based on ensembles of decision trees (RF and GBDT) were more suitable for modelling rice green LAI. The study demonstrates the feasibility of quad-source optical imagery for modelling rice green LAI, which is particularly relevant for cloudy areas.
Record number: A2022-346
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1745299
Online publication date: 03/04/2020
Online: https://doi.org/10.1080/10106049.2020.1745299
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100530
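Of the four regressors compared in this abstract, k-nearest-neighbour regression is the simplest to sketch: a LAI estimate is the mean LAI of the k training samples with the closest spectral features. This is a toy illustration with made-up vegetation-index values, not the study's data or code:

```python
def knn_regress(train_x, train_y, query, k=3):
    """Predict a value as the mean target of the k nearest training
    samples, using absolute distance on a single scalar feature."""
    nearest = sorted(zip(train_x, train_y),
                     key=lambda p: abs(p[0] - query))[:k]
    return sum(y for _, y in nearest) / k

# hypothetical vegetation-index -> observed-LAI training pairs
ndvi = [0.2, 0.3, 0.5, 0.6, 0.8]
lai  = [0.5, 1.0, 2.0, 2.5, 4.0]
print(knn_regress(ndvi, lai, 0.52))  # ~1.83, mean LAI of the 3 nearest samples
```

The real models use multi-band features and, per the study's findings, tree ensembles (RF, GBDT) outperformed this kind of distance-based predictor for rice LAI.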
in Geocarto international > vol 37 n° 3 [01/02/2022] . - pp 828 - 840 [article]

Emerging technologies for smart cities’ transportation: Geo-information, data analytics and machine learning approaches / Li-Minn Ang in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)
Permalink

Exploring the advantages of the maximum entropy model in calibrating cellular automata for urban growth simulation: a comparative study of four methods / Bin Zhang in GIScience and remote sensing, vol 59 n° 1 (2022)

Permalink

Fast local adaptive multiscale image matching algorithm for remote sensing image correlation / Niccolò Dematteis in Computers & geosciences, vol 159 (February 2022)

Permalink

GazPNE: annotation-free deep learning for place name extraction from microblogs leveraging gazetteer and synthetic data by rules / Xuke Hu in International journal of geographical information science IJGIS, vol 36 n° 2 (February 2022)

Permalink

GCN-Denoiser: mesh denoising with graph convolutional networks / Yuefan Shen in ACM Transactions on Graphics, TOG, Vol 41 n° 1 (February 2022)

Permalink

Generating 2m fine-scale urban tree cover product over 34 metropolises in China based on deep context-aware sub-pixel mapping network / Da He in International journal of applied Earth observation and geoinformation, vol 106 (February 2022)

Permalink

A geographically weighted artificial neural network / Julian Haguenauer in International journal of geographical information science IJGIS, vol 36 n° 2 (February 2022)

Permalink

GisGCN: a visual graph-based framework to match geographical areas through time / Margarita Khokhlova in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)

Permalink

GNSS reflectometry global ocean wind speed using deep learning: Development and assessment of CyGNSSnet / Milad Asgarimehr in Remote sensing of environment, vol 269 (February 2022)

Permalink

Mapping global flying aircraft activities using Landsat 8 and cloud computing / Fen Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 184 (February 2022)

Permalink