Descriptor
Documents available in this category (8656)
Simultaneous retrieval of selected optical water quality indicators from Landsat-8, Sentinel-2, and Sentinel-3 / Nima Pahlevan in Remote sensing of environment, vol 270 (March 2022)
[article]
Title: Simultaneous retrieval of selected optical water quality indicators from Landsat-8, Sentinel-2, and Sentinel-3
Document type: Article/Communication
Authors: Nima Pahlevan, Author; Brandon Smith, Author; Krista Alikas, Author; et al.
Publication year: 2022
Article on page(s): n° 112860
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] spectral mixture analysis
[IGN terms] image matching
[IGN terms] machine learning
[IGN terms] chlorophyll
[IGN terms] maximum likelihood classification
[IGN terms] multilayer perceptron classification
[IGN terms] atmospheric correction
[IGN terms] multi-source data
[IGN terms] coastal waters
[IGN terms] Landsat-OLI imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] Sentinel-OLCI imagery
[IGN terms] organic matter
[IGN terms] Oregon (United States)
[IGN terms] water quality
Abstract: (author) Constructing multi-source satellite-derived water quality (WQ) products in inland and nearshore coastal waters from the past, present, and future missions is a long-standing challenge. Despite inherent differences in sensors’ spectral capability, spatial sampling, and radiometric performance, research efforts focused on formulating, implementing, and validating universal WQ algorithms continue to evolve. This research extends a recently developed machine-learning (ML) model, i.e., Mixture Density Networks (MDNs) (Pahlevan et al., 2020; Smith et al., 2021), to the inverse problem of simultaneously retrieving WQ indicators, including chlorophyll-a (Chla), Total Suspended Solids (TSS), and the absorption by Colored Dissolved Organic Matter at 440 nm (acdom(440)), across a wide array of aquatic ecosystems. We use a database of in situ measurements to train and optimize MDN models developed for the relevant spectral measurements (400–800 nm) of the Operational Land Imager (OLI), MultiSpectral Instrument (MSI), and Ocean and Land Color Instrument (OLCI) aboard the Landsat-8, Sentinel-2, and Sentinel-3 missions, respectively. Our two performance assessment approaches, namely hold-out and leave-one-out, suggest significant, albeit varying, degrees of improvement over the second-best algorithms, depending on the sensor and WQ indicator (e.g., 68%, 75%, and 117% improvements based on the hold-out method for Chla, TSS, and acdom(440), respectively, from MSI-like spectra). Using these two assessment methods, we provide theoretical upper and lower bounds on model performance when evaluating similar and/or out-of-sample datasets. To evaluate multi-mission product consistency across broad spatial scales, map products are demonstrated for three near-concurrent OLI, MSI, and OLCI acquisitions.
Overall, estimated TSS and acdom(440) from these three missions are consistent within the uncertainty of the model, but Chla maps from MSI and OLCI achieve greater accuracy than those from OLI. By applying two different atmospheric correction processors to OLI and MSI images, we also conduct matchup analyses to quantify the sensitivity of the MDN model and best-practice algorithms to uncertainties in reflectance products. Our model is less than or equally sensitive to these uncertainties compared with other algorithms. Recognizing their uncertainties, MDN models can be applied as a global algorithm to enable harmonized retrievals of Chla, TSS, and acdom(440) in various aquatic ecosystems from multi-source satellite imagery. Local and/or regional ML models tuned with an apt data distribution (e.g., a subset of our dataset) should nevertheless be expected to outperform our global model.
Record number: A2022-126
Authors' affiliation: not IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.rse.2021.112860
Online publication date: 04/01/2022
Online: https://doi.org/10.1016/j.rse.2021.112860
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99705
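Unlike a plain regressor, the MDN described in this abstract outputs the parameters of a Gaussian mixture over each WQ indicator rather than a single value. A minimal sketch of how such a mixture head is evaluated and reduced to a point estimate (illustrative only, not the authors' implementation; function names are assumptions):

```python
import math

def mixture_pdf(y, weights, means, sigmas):
    """Probability density of a 1-D Gaussian mixture at y."""
    return sum(
        w * math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, means, sigmas)
    )

def point_estimate(weights, means):
    """Weighted mean of the component means, one common way to
    read a single retrieval value off an MDN output head."""
    return sum(w * m for w, m in zip(weights, means))
```

In practice a neural network predicts `weights`, `means`, and `sigmas` from the input spectrum; the mixture width also gives a per-pixel uncertainty, which is what allows the "consistent within the uncertainty of the model" comparison above.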

Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
[article]
Title: Traffic sign three-dimensional reconstruction based on point clouds and panoramic images
Document type: Article/Communication
Authors: Minye Wang, Author; Rufei Liu, Author; Jiben Yang, Author; et al.
Publication year: 2022
Article on page(s): pp 87 - 110
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] image correction
[IGN terms] feature extraction
[IGN terms] panoramic imagery
[IGN terms] mobile lidar
[IGN terms] 3D reconstruction
[IGN terms] point cloud
[IGN terms] road signage
Abstract: (author) Traffic signs are a very important source of information for drivers and pilotless automobiles. With the advance of Mobile LiDAR Systems (MLS), massive point clouds have been applied in three-dimensional digital city modelling. However, traffic signs in MLS point clouds are low-density, colourless and incomplete. This paper presents a new method for the reconstruction of vertical rectangular traffic sign point clouds based on panoramic images. In this method, traffic sign point clouds are extracted based on arc features and spatial semantic feature analysis. Traffic signs in images are detected by colour and shape features and a convolutional neural network. Traffic sign point clouds and images are registered based on outline features. Finally, traffic sign points are matched to traffic sign pixels to reconstruct the traffic sign point cloud. Experimental results have demonstrated that the proposed method can effectively obtain colourful and complete traffic sign point clouds with high resolution.
Record number: A2022-254
Authors' affiliation: not IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12398
Online publication date: 05/03/2022
Online: https://doi.org/10.1111/phor.12398
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100217
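Matching point-cloud points to panorama pixels, as in the last step above, requires projecting 3-D points into the panoramic image. A minimal sketch under an assumed equirectangular panorama model (the paper's outline-based registration is more involved, and a real MLS setup needs the sensor's calibrated pose):

```python
import math

def project_equirectangular(x, y, z, width, height):
    """Map a 3-D point (camera frame assumed: x right, y up, z forward)
    to pixel coordinates (u, v) in an equirectangular panorama."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)   # azimuth in [-pi, pi]
    lat = math.asin(y / r)   # elevation in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width   # column
    v = (0.5 - lat / math.pi) * height        # row (top = up)
    return u, v
```

A point straight ahead of the sensor lands at the centre of the panorama; once each sign point has a pixel, its colour can be copied back onto the point cloud.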

Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach / Linyuan Li in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
[article]
Title: Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Document type: Article/Communication
Authors: Linyuan Li, Author; Xihan Mu, Author; Francesco Chianucci, Author; et al.
Publication year: 2022
Article on page(s): n° 102686
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetric applications
[IGN terms] SLIC algorithm
[IGN terms] deep learning
[IGN terms] canopy
[IGN terms] forest map
[IGN terms] China
[IGN terms] maximum likelihood classification
[IGN terms] convolutional neural network classification
[IGN terms] forest cover
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] laser beam
[IGN terms] boreal forest
[IGN terms] UAV imagery
[IGN terms] digital canopy surface model
[IGN terms] digital terrain model
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] understorey
[IGN terms] structure-from-motion
Abstract: (author) Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, the ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and the illumination environment. In this study, we investigated the integration of deep learning and the combined data of imagery and photogrammetric point clouds for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data via the combination of UAV images and their associated photogrammetric point clouds, and expands the applicability of deep learning models with self-supervision. Based on the UAV images, at different overlap levels, of 12 conifer forest plots categorized into "I", "II" and "III" complexity levels according to illumination environment, we compared the self-supervised deep-learning-predicted canopy maps from original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for "complexity I" and "complexity II" plots and larger than 0.75 for "complexity III" plots. The proposed method was then compared with three classical image segmentation methods (i.e., maximum likelihood, K-means, and Otsu) in plot-level crown cover estimation, outperforming the other methods in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates using UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18).
We subsequently compared the new method and the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and a rugged terrain region. The results show that the method-predicted crown cover was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased with decreasing overlap. In addition, canopy mapping over rugged terrain verified the merits of the new method, with no need for a detailed digital terrain model (DTM). The new method is recommended for use across various image overlaps, illuminations, and terrains due to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation).
Record number: A2022-192
Authors' affiliation: not IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.1016/j.jag.2022.102686
Online publication date: 05/02/2022
Online: https://doi.org/10.1016/j.jag.2022.102686
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99951
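The IoU score this abstract uses to compare predicted canopy maps against manual delineations is a standard overlap measure for binary masks; a minimal sketch (flat 0/1 lists stand in for raster masks):

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks given as flat 0/1 sequences.
    Returns 1.0 for two empty masks by convention."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```

An IoU above 0.9, as reported for the "complexity I" and "II" plots, means predicted and manually delineated crown pixels overlap almost everywhere they occur.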

Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
[article]
Title: Visual vs internal attention mechanisms in deep neural networks for image classification and object detection
Document type: Article/Communication
Authors: Abraham Montoya Obeso, Author; Jenny Benois-Pineau, Author; Mireya S. García Vázquez, Author; et al.
Publication year: 2022
Article on page(s): n° 108411
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] visual analysis
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] feature extraction
[IGN terms] eye tracking
[IGN terms] saliency
[IGN terms] semantic segmentation
[IGN terms] data visualisation
Abstract: (author) The so-called “attention mechanisms” in Deep Neural Networks (DNNs) denote an automatic adaptation of DNNs to capture representative features given a specific classification task and related data. Such attention mechanisms perform both globally, by reinforcing feature channels, and locally, by stressing features in each feature map. Channel and feature importance are learnt in the global end-to-end DNN training process. In this paper, we present a study and propose a method with a different approach, adding supplementary visual data alongside the training images. We use human visual attention maps obtained independently through psycho-visual experiments, in task-driven or free-viewing conditions, or from powerful models for the prediction of visual attention maps. We add visual attention maps as new data alongside images, thus introducing human visual attention into DNN training, and compare it with both global and local automatic attention mechanisms. Experimental results show that known attention mechanisms in DNNs behave much like human visual attention, but the proposed approach still allows faster convergence and better performance in image classification tasks.
Record number: A2022-197
Authors' affiliation: not IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.patcog.2021.108411
Online publication date: 12/11/2021
Online: https://doi.org/10.1016/j.patcog.2021.108411
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99988
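The global ("channel") attention this abstract contrasts with human visual attention reweights whole feature channels by learnt importance scores. A minimal sketch of that reweighting step, with softmax-normalised scores (illustrative shapes and names, not the paper's architecture):

```python
import math

def channel_attention(channel_scores, feature_maps):
    """Reweight feature channels by softmax-normalised importance scores.
    channel_scores: one learnt scalar per channel.
    feature_maps: one flat list of activations per channel."""
    m = max(channel_scores)
    exps = [math.exp(s - m) for s in channel_scores]  # stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    return [[w * v for v in fmap] for w, fmap in zip(weights, feature_maps)]
```

Local attention works analogously but assigns a weight per spatial position inside each feature map; the paper's human attention maps play that spatial role, supplied from eye-tracking data instead of being learnt end to end.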

Aboveground biomass estimation of an agro-pastoral ecology in semi-arid Bundelkhand region of India from Landsat data: a comparison of support vector machine and traditional regression models / Dibyendu Deb in Geocarto international, vol 37 n° 4 ([15/02/2022])
[article]
Title: Aboveground biomass estimation of an agro-pastoral ecology in semi-arid Bundelkhand region of India from Landsat data: a comparison of support vector machine and traditional regression models
Document type: Article/Communication
Authors: Dibyendu Deb, Author; Shovik Deb, Author; Debasis Chakraborty, Author; et al.
Publication year: 2022
Article on page(s): pp 1043 - 1058
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] aboveground biomass
[IGN terms] spatial distribution
[IGN terms] Landsat-8 imagery
[IGN terms] India
[IGN terms] vegetation index
[IGN terms] regression model
[IGN terms] control point
[IGN terms] linear regression
[IGN terms] multiple regression
[IGN terms] support vector machine
[IGN terms] semi-arid zone
Abstract: (author) This study compared traditional regression models and support vector machine (SVM) for the estimation of aboveground biomass (AGB) of an agro-pastoral ecology, using vegetation indices derived from Landsat 8 satellite data as explanatory variables. The area falls in the Shivpuri Tehsil of Madhya Pradesh, India, which is predominantly a semi-arid tract of the Bundelkhand region. The Enhanced Vegetation Index-1 (EVI-1) was identified as the most suitable input variable for the regression models, although the collective effect of a number of the vegetation indices was evident. EVI-1 was also the most suitable input variable for SVM, due to its capacity to distinctly differentiate diverse vegetation classes. SVM outperformed the regression models in estimating AGB. Based on the SVM-derived estimates and the ground observations, the AGB of the area was precisely mapped for croplands, grasslands and rangelands over the entire region.
Record number: A2022-394
Authors' affiliation: not IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1756461
Online publication date: 29/04/2020
Online: https://doi.org/10.1080/10106049.2020.1756461
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100688
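The EVI-1 used here as the main explanatory variable is assumed to follow the standard three-band Enhanced Vegetation Index formula computed from surface reflectances; a minimal sketch (the coefficient names G, C1, C2, L are the conventional ones, not taken from the paper):

```python
def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Standard three-band Enhanced Vegetation Index from surface
    reflectances (0-1). Assumed here to correspond to the EVI-1
    named in the abstract."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```

The blue-band and soil-adjustment terms in the denominator are what give EVI its resistance to atmospheric and soil-background effects, which plausibly underlies its ability to "distinctly differentiate diverse vegetation classes" noted above.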

Comparing methods to extract crop height and estimate crop coefficient from UAV imagery using structure from motion / Nitzan Malachy in Remote sensing, vol 14 n° 4 (February-2 2022)
Permalink
Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests / Chong Zhang in Remote sensing, vol 14 n° 4 (February-2 2022)
Permalink
A national fuel type mapping method improvement using sentinel-2 satellite data / Alexandra Stefanidou in Geocarto international, vol 37 n° 4 ([15/02/2022])
Permalink
Simulation of future forest and land use/cover changes (2019–2039) using the cellular automata-Markov model / Hasan Aksoy in Geocarto international, vol 37 n° 4 ([15/02/2022])
Permalink
An open science and open data approach for the statistically robust estimation of forest disturbance areas / Saverio Francini in International journal of applied Earth observation and geoinformation, vol 106 (February 2022)
Permalink
Applications and challenges of GRACE and GRACE follow-on satellite gravimetry / Jianli Chen in Surveys in Geophysics, vol 43 n° 1 (February 2022)
Permalink
Building footprint extraction in Yangon city from monocular optical satellite image using deep learning / Hein Thura Aung in Geocarto international, vol 37 n° 3 ([01/02/2022])
Permalink
A combination of convolutional and graph neural networks for regularized road surface extraction / Jingjing Yan in IEEE Transactions on geoscience and remote sensing, vol 60 n° 2 (February 2022)
Permalink
Decision fusion of deep learning and shallow learning for marine oil spill detection / Junfang Yang in Remote sensing, vol 14 n° 3 (February-1 2022)
Permalink
Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation / Ramazan Unlu in The Visual Computer, vol 38 n° 2 (February 2022)
Permalink