Descriptor
Termes IGN > sciences naturelles > physique > cohérence (physique) > cohérence temporelle
cohérence temporelle
Documents available in this category (7)
Development of the GLASS 250-m leaf area index product (version 6) from MODIS data using the bidirectional LSTM deep learning model / Han Ma in Remote sensing of environment, vol 273 (May 2022)
[article]
Title: Development of the GLASS 250-m leaf area index product (version 6) from MODIS data using the bidirectional LSTM deep learning model
Document type: Article/Communication
Authors: Han Ma, Author; Shunlin Liang, Author
Publication year: 2022
Article pages: n° 112985
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] cohérence temporelle
[Termes IGN] image Terra-MODIS
[Termes IGN] Leaf Area Index
[Termes IGN] réflectance de surface
[Termes IGN] régression
[Termes IGN] série temporelle
[Termes IGN] surveillance de la végétation
Abstract: (author) Leaf area index (LAI) is a terrestrial essential climate variable that is required in a variety of ecosystem and climate models. The Global LAnd Surface Satellite (GLASS) LAI product has been widely used, but its current version (V5), derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data, has several limitations, such as frequent temporal fluctuation, large data gaps, high dependence on the quality of surface reflectance, and low computational efficiency. To address these issues, this paper presents a deep learning model to generate a new version of the LAI product (V6) at 250-m resolution from MODIS data from 2000 onward. Unlike most existing algorithms, which estimate one LAI value at a time for each pixel, this model estimates LAI for two years simultaneously. Three widely used LAI products (MODIS C6, GLASS V5, and PROBA-V V1) are used to generate globally representative time-series LAI training samples using K-means clustering analysis and least-difference criteria. We explore four machine learning models, the general regression neural network (GRNN), long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (Bi-LSTM), and identify Bi-LSTM as the best model for product generation. This new product is directly validated using 79 high-resolution LAI reference maps from three in situ observation networks. The results show that GLASS V6 LAI achieves higher accuracy, with a root mean square error (RMSE) of 0.92 at 250 m and 0.86 at 500 m, while the RMSE is 0.98 for PROBA-V at 300 m, 1.08 for GLASS V5, and 0.95 for MODIS C6, both at 500 m. Spatial and temporal consistency analyses also demonstrate that the GLASS V6 LAI product is more spatiotemporally continuous and presents more realistic temporal LAI dynamics when surface reflectance is absent for a long period owing to persistent cloud/aerosol contamination.
The results indicate that the new Bi-LSTM deep learning model runs significantly faster than the GLASS V5 algorithm, avoids the reconstruction of surface reflectance data, and is more resistant than other methods to noise (cloud and snow contamination) and missing values in surface reflectance, as the Bi-LSTM can effectively extract information across the entire time series of surface reflectance rather than from a single time point. To our knowledge, this is the first global time-series LAI product at 250-m spatial resolution that is freely available to the public (www.geodata.cn and www.glass.umd.edu).
Record number: A2022-284
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.rse.2022.112985
Online publication date: 10/03/2022
Online: https://doi.org/10.1016/j.rse.2022.112985
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100303
in Remote sensing of environment > vol 273 (May 2022) . - n° 112985 [article]
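The abstract above mentions generating representative training samples by K-means clustering of time-series LAI profiles. A minimal numpy sketch of clustering such profiles follows; the toy phenology profiles, the cluster count, and the plain Euclidean distance are assumptions for illustration, not the GLASS V6 pipeline.

```python
import numpy as np

def kmeans_profiles(profiles, k=3, n_iter=50, seed=0):
    """Cluster time-series profiles (n_samples, n_steps) with plain K-means."""
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(len(profiles), k, replace=False)]
    for _ in range(n_iter):
        # Assign each profile to the nearest center (Euclidean over the whole series).
        d = np.linalg.norm(profiles[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its members; keep it if the cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels, centers

# Toy annual LAI-like profiles: two phenology shapes plus a little noise.
t = np.linspace(0, 2 * np.pi, 46)   # ~8-day compositing steps over one year
grass = 1.5 + 1.0 * np.sin(t)       # single pronounced green-up peak
forest = 3.0 + 0.5 * np.sin(t)      # higher, flatter canopy profile
profiles = np.vstack([
    grass + 0.05 * np.random.default_rng(1).standard_normal((20, t.size)),
    forest + 0.05 * np.random.default_rng(2).standard_normal((20, t.size)),
])
labels, centers = kmeans_profiles(profiles, k=2)
```

The paper's "least-difference criteria" would then select, within each cluster, the samples on which the three source products agree most closely; those series feed the Bi-LSTM training.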
Title: Learning to represent and reconstruct 3D deformable objects
Document type: Thesis/HDR
Authors: Jan Bednarik, Author; Pascal Fua, Thesis supervisor; M. Salzmann, Thesis supervisor
Publisher: Lausanne: Ecole Polytechnique Fédérale de Lausanne EPFL
Publication year: 2022
Extent: 138 p.
Format: 21 x 30 cm
General note: bibliography
Thesis presented for the degree of Docteur ès Sciences, Ecole Polytechnique Fédérale de Lausanne
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement de formes
[Termes IGN] apprentissage profond
[Termes IGN] cohérence temporelle
[Termes IGN] déformation de surface
[Termes IGN] distorsion d'image
[Termes IGN] géométrie de Riemann
[Termes IGN] image 3D
[Termes IGN] reconstruction d'objet
[Termes IGN] semis de points
[Termes IGN] vision par ordinateur
Decimal index: THESE Theses and HDR
Abstract: (author) Representing and reconstructing 3D deformable shapes are two tightly linked problems that have long been studied within the computer vision field. Deformable shapes are ubiquitous in the real world, be it specific object classes such as humans, garments and animals, or more abstract ones such as generic materials deforming under stress caused by an external force. Truly practical computer vision algorithms must be able to understand the shapes of objects in the observed scenes to unlock a wide spectrum of much sought-after applications, ranging from virtual try-on to automated surgery. Automatic shape reconstruction, however, is known to be an ill-posed problem, especially in the common scenario of a single image input. Therefore, modern approaches rely on the deep learning paradigm, which has proven extremely effective even for severely under-constrained computer vision problems. We, too, exploit the success of data-driven approaches; however, we also show that generic deep learning models can greatly benefit from being combined with explicit knowledge originating in traditional computational geometry. We analyze the use of various 3D shape representations for deformable object reconstruction and focus on one of them, the atlas-based representation, which turns out to be especially suitable for modeling deformable shapes and which we further improve and extend to yield higher-quality reconstructions. The atlas-based representation models surfaces as an ensemble of continuous functions and thus allows for arbitrary resolution and analytical surface analysis. We identify major shortcomings of the base formulation, namely the infamous phenomena of patch collapse, patch overlap and arbitrarily strong mapping distortions, and we propose novel regularizers based on analytically computed properties of the reconstructed surfaces.
Our approach counteracts the aforementioned drawbacks while yielding higher reconstruction accuracy in terms of surface normals on the tasks of single-view reconstruction, shape completion and point cloud auto-encoding. We dive deeper into the atlas-based shape representation and focus on another pressing design flaw, the global inconsistency among the individual mappings. While this inconsistency is not reflected in traditional quantitative metrics of reconstruction accuracy, it is detrimental to the visual quality of the reconstructed surfaces. Specifically, we design loss functions encouraging intercommunication among the individual mappings, which pushes the resulting surface towards a C1-smooth function. Our experiments on the tasks of single-view reconstruction and point cloud auto-encoding reveal that our method significantly improves the visual quality when compared to the baselines. Furthermore, we adapt the atlas-based representation and the related training procedure so that it can model a full sequence of a deforming object in a temporally consistent way. In other words, the goal is to produce a reconstruction where each surface point always represents the same semantic point on the target ground-truth surface. To achieve this behavior, we note that if each surface point deforms close to isometrically, its semantic location likely remains unchanged. Practically, we make use of the Riemannian metric, which is computed analytically on the surfaces, and force it to remain point-wise constant throughout the sequence. Our experimental results reveal that our method yields state-of-the-art results on the task of unsupervised dense shape correspondence estimation, while also improving the visual reconstruction quality. Finally, we look into the particular problem of monocular texture-less deformable shape reconstruction, an instance of the Shape-from-Shading problem.
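The metric-constancy idea described in the abstract can be illustrated with the first fundamental form g = JᵀJ of a parametric surface map f(u, v) → R³: a near-isometric deformation leaves g unchanged point-wise. This sketch uses finite differences where the thesis computes the metric analytically, and the toy surfaces and loss form are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def first_fundamental_form(f, u, v, eps=1e-5):
    """Riemannian metric g = J^T J of a parametric surface f(u, v) -> R^3,
    with the Jacobian approximated by central finite differences."""
    fu = (f(u + eps, v) - f(u - eps, v)) / (2 * eps)
    fv = (f(u, v + eps) - f(u, v - eps)) / (2 * eps)
    return np.array([[fu @ fu, fu @ fv],
                     [fu @ fv, fv @ fv]])

def metric_consistency_loss(f_t0, f_t1, samples):
    """Penalize point-wise changes of the metric between two frames of a sequence."""
    return sum(np.abs(first_fundamental_form(f_t0, u, v)
                      - first_fundamental_form(f_t1, u, v)).sum()
               for u, v in samples) / len(samples)

# Toy check: a rigid motion (an isometry) leaves the metric unchanged ...
plane = lambda u, v: np.array([u, v, 0.0])
moved = lambda u, v: np.array([v, -u, 1.0])       # 90° rotation + translation
# ... while a stretch does not.
stretched = lambda u, v: np.array([2 * u, v, 0.0])
pts = [(0.1, 0.2), (0.5, 0.7)]
iso_loss = metric_consistency_loss(plane, moved, pts)        # ~0
stretch_loss = metric_consistency_loss(plane, stretched, pts)  # > 0
```

Minimizing such a loss over a deforming sequence encourages every surface point to keep its semantic location, which is the intuition behind the metrically-consistent atlases of chapter 6.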
We propose a multi-task learning approach which takes an RGB image of an unknown object as input and jointly produces a normal map, a depth map and a mesh corresponding to the observed part of the surface. We show that forcing the model to produce multiple different 3D representations of the same object results in higher reconstruction quality. To train the network, we acquire a large annotated real-world dataset of texture-less deforming objects and release it for public use. Finally, we show through experiments that our approach outperforms a previous optimization-based method on the single-view reconstruction task.
Contents: 1- Introduction
2- Related work
3- Atlas-based representation for deformable shape reconstruction
4- Shape reconstruction by learning differentiable surface representations
5- Better patch stitching for parametric surface reconstruction
6- Temporally-consistent surface reconstruction using metrically-consistent atlases
7- Learning to reconstruct texture-less deformable surfaces from a single view
8- Conclusion
Record number: 15761
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Foreign thesis
Thesis note: Doctoral thesis: Sciences: Lausanne, EPFL: 2022
DOI: 10.5075/epfl-thesis-7974
Online: https://doi.org/10.5075/epfl-thesis-7974
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100958
Extraction of impervious surface using Sentinel-1A time-series coherence images with the aid of a Sentinel-2A image / Wenfu Wu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 3 (March 2021)
[article]
Title: Extraction of impervious surface using Sentinel-1A time-series coherence images with the aid of a Sentinel-2A image
Document type: Article/Communication
Authors: Wenfu Wu, Author; Jiahua Teng, Author; Qimin Cheng, Author; Songjing Guo, Author
Publication year: 2021
Article pages: pp 161-170
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] chatoiement
[Termes IGN] cohérence (physique)
[Termes IGN] cohérence temporelle
[Termes IGN] extraction automatique
[Termes IGN] image Sentinel-MSI
[Termes IGN] image Sentinel-SAR
[Termes IGN] segmentation d'image
[Termes IGN] segmentation multi-échelle
[Termes IGN] série temporelle
[Termes IGN] surface imperméable
Abstract: (author) The continuous increase of impervious surface (IS) hinders the sustainable development of cities. Using optical images alone to extract IS is usually limited by weather, which obliges us to develop new data sources. The obvious differences between natural and artificial targets in interferometric synthetic aperture radar coherence images have attracted the attention of researchers. A few studies have attempted to use coherence images to extract IS, mostly single-temporal coherence images, which are affected by decorrelation factors; moreover, due to speckle, the results are rather fragmented. In this study, we used time-series coherence images and introduced multi-resolution segmentation as a postprocessing step to extract IS. In our experiments, the results of the proposed method were more complete and achieved considerable accuracy, confirming the potential of time-series coherence images for extracting IS.
Record number: A2021-240
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.3.161
Online publication date: 01/03/2021
Online: https://doi.org/10.14358/PERS.87.3.161
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97264
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 3 (March 2021) . - pp 161-170 [article]
Copies (1)
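The coherence images this record relies on are commonly estimated as a windowed, normalized cross-correlation of two co-registered complex SAR acquisitions. Below is a minimal numpy sketch on synthetic data; the window size and the synthetic scenes are illustrative assumptions, not the paper's Sentinel-1A processing chain.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def boxcar(a, win):
    # Moving-window sum over a (win x win) neighbourhood; output shrinks by win - 1.
    return sliding_window_view(a, (win, win)).sum(axis=(-2, -1))

def coherence(s1, s2, win=5):
    """Sample coherence of two co-registered complex SAR images:
    |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>), averaged over a boxcar window."""
    num = np.abs(boxcar(s1 * np.conj(s2), win))
    den = np.sqrt(boxcar(np.abs(s1) ** 2, win) * boxcar(np.abs(s2) ** 2, win))
    return num / den

rng = np.random.default_rng(0)
shape = (40, 40)
s1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
stable = s1 * np.exp(1j * 0.3)  # same scatterers, constant interferometric phase
decorrelated = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
coh_stable = coherence(s1, stable)        # ~1 everywhere (impervious-like)
coh_noise = coherence(s1, decorrelated)   # low values (vegetation-like)
```

Stable, artificial targets keep coherence near 1 between acquisitions, while decorrelated natural surfaces drop toward the estimator's noise floor; the study stacks such images over time and applies multi-resolution segmentation to reduce speckle-induced fragmentation.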
Barcode: 105-2031031 | Call number: SL | Support: Journal | Location: Centre de documentation | Section: Revues en salle | Availability: Available
Diurnal cycles of C-band temporal coherence and backscattering coefficient over an olive orchard in a semi-arid area: Comparison of in situ and Sentinel-1 radar observations / Adnane Chakir (2021)
Title: Diurnal cycles of C-band temporal coherence and backscattering coefficient over an olive orchard in a semi-arid area: Comparison of in situ and Sentinel-1 radar observations
Document type: Article/Communication
Authors: Adnane Chakir, Author; Pierre-Louis Frison, Author; Saïd Khabba, Author; Jamal Ezzahar, Author; Ludovic Villard, Author; Pascal Fanise, Author; Nadia Ouaadi, Author; V. Ledantec, Author; Lionel Jarlan, Author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2021
Projects: 2-No accessible information - article not open access
Conference: IGARSS 2021, IEEE International Geoscience and Remote Sensing Symposium, 11/07/2021-16/07/2021, Brussels, Belgium, Proceedings IEEE
Extent: pp 3801-3804
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] bande C
[Termes IGN] coefficient de rétrodiffusion
[Termes IGN] cohérence temporelle
[Termes IGN] image Sentinel-SAR
[Termes IGN] Maroc
[Termes IGN] Olea europaea
[Termes IGN] verger
Abstract: (author) C-band radar remote sensing is a suitable tool for monitoring agricultural areas on a large scale, providing access to information on vegetation, such as plant biomass, or on the surface water content of the soil. Recent studies suggest that the water status and physiological functioning of trees influence the radar response, leading to marked daily profiles of both the radar backscattering coefficient and the temporal coherence. The objective of this paper is to make a preliminary comparison between the temporal evolution of Sentinel-1 radar data and in situ radar measurements over a Mediterranean olive orchard located in Morocco. The in situ radar data consist of quad-polarization measurements made from a 20 m high tower, every 15 minutes, over the period extending from May 2019 to October 2020.
Record number: C2021-051
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/IGARSS47720.2021.9553129
Online publication date: 12/10/2021
Online: https://doi.org/10.1109/IGARSS47720.2021.9553129
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99415
Diurnal cycles of C-band temporal coherence and backscattering coefficient over a wheat field in a semi-arid area / Nadia Ouaadi (2021)
Title: Diurnal cycles of C-band temporal coherence and backscattering coefficient over a wheat field in a semi-arid area
Document type: Article/Communication
Authors: Nadia Ouaadi, Author; Ludovic Villard, Author; Jamal Ezzahar, Author; Pierre-Louis Frison, Author; Saïd Khabba, Author; Mohamed Kasbani, Author; Pascal Fanise, Author; Adnane Chakir, Author; et al.
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2021
Projects: 2-No accessible information - article not open access
Conference: IGARSS 2021, IEEE International Geoscience and Remote Sensing Symposium, 11/07/2021-16/07/2021, Brussels, Belgium, Proceedings IEEE
Extent: pp 3817-3820
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] bande C
[Termes IGN] blé (céréale)
[Termes IGN] coefficient de rétrodiffusion
[Termes IGN] cohérence temporelle
[Termes IGN] évapotranspiration
[Termes IGN] humidité du sol
[Termes IGN] image Sentinel-SAR
[Termes IGN] Maroc
[Termes IGN] surface cultivée
[Termes IGN] vent
Abstract: (author) C-band radar observations are well known to have great potential for monitoring the hydric conditions of crops. Recent studies have suggested that the observed difference in backscattering coefficient between ascending and descending passes over tropical forests could be related to the physiological functioning of the trees. Likewise, water movement within annual crops could lead to a daily cycle of both σ⁰ and the temporal coherence. The objective of this paper is to present the preliminary results of an experiment carried out on a winter wheat field in Morocco that was instrumented with six C-band antennas during the 2020 growing season. The preliminary results show strong daily cycles of ρ and σ⁰ that are analyzed in relation to wind speed, surface soil moisture and evapotranspiration. This work opens perspectives for monitoring crop hydric status using C-band radar data acquired by Sentinel-1 and by potential future geostationary radar missions.
Record number: C2021-052
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/IGARSS47720.2021.9553586
Online publication date: 12/10/2021
Online: https://doi.org/10.1109/IGARSS47720.2021.9553586
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99416
A temporal phase coherence estimation algorithm and its application on DInSAR pixel selection / Feng Zhao in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019) Permalink
Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019) Permalink