Descriptor



Extraction of impervious surface using Sentinel-1A time-series coherence images with the aid of a Sentinel-2A image / Wenfu Wu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 3 (March 2021)
[article]
Title: Extraction of impervious surface using Sentinel-1A time-series coherence images with the aid of a Sentinel-2A image
Document type: Article/Communication
Authors: Wenfu Wu, Author; Jiahua Teng, Author; Qimin Cheng, Author; Songjing Guo, Author
Publication year: 2021
Pages: pp 161-170
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN descriptor terms] speckle
[IGN descriptor terms] coherence
[IGN descriptor terms] temporal coherence
[IGN descriptor terms] automatic extraction
[IGN descriptor terms] Sentinel-MSI image
[IGN descriptor terms] Sentinel-SAR image
[IGN descriptor terms] image segmentation
[IGN descriptor terms] multi-scale segmentation
[IGN descriptor terms] time series
[IGN descriptor terms] impervious surface
Abstract: (Author) The continuous increase of impervious surface (IS) hinders the sustainable development of cities. Using optical images alone to extract IS is usually limited by weather, which obliges us to develop new data sources. The obvious differences between natural and artificial targets in interferometric synthetic-aperture radar coherence images have attracted the attention of researchers. A few studies have attempted to use coherence images to extract IS, mostly single-temporal coherence images, which are affected by decorrelation factors; due to speckle, the results are also rather fragmented. In this study, we used time-series coherence images and introduced multi-resolution segmentation as a postprocessing step to extract IS. In our experiments, the results from the proposed method were more complete and achieved considerable accuracy, confirming the potential of time-series coherence images for extracting IS.
Record number: A2021-240
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.3.161
Online publication date: 01/03/2021
Online: https://doi.org/10.14358/PERS.87.3.161
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97264
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 3 (March 2021) . - pp 161-170 [article]
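The abstract above describes thresholding time-series coherence and then regularizing the result with multi-resolution segmentation. A minimal sketch of that general idea, assuming a precomputed coherence stack and a precomputed label image standing in for the paper's multi-resolution segmentation (function name, threshold, and majority-vote rule are illustrative, not the authors' exact pipeline):

```python
import numpy as np

def impervious_mask(coherence_stack, segments, threshold=0.5):
    """Sketch: average a (T, H, W) stack of InSAR coherence images,
    threshold the temporal mean to flag candidate impervious-surface
    pixels (artificial targets stay coherent over time), then clean
    the speckle-fragmented per-pixel map by majority vote inside
    precomputed segmentation objects."""
    mean_coh = coherence_stack.mean(axis=0)   # temporal average damps decorrelation noise
    candidate = mean_coh > threshold          # per-pixel impervious-surface candidates
    result = np.zeros_like(candidate)
    for label in np.unique(segments):
        obj = segments == label
        # assign each segmentation object the majority class of its pixels
        result[obj] = candidate[obj].mean() > 0.5
    return result
```

The per-object majority vote is what removes the fragmented, speckle-driven false pixels that the abstract attributes to single-temporal coherence approaches.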
A temporal phase coherence estimation algorithm and its application on DInSAR pixel selection / Feng Zhao in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
[article]
Title: A temporal phase coherence estimation algorithm and its application on DInSAR pixel selection
Document type: Article/Communication
Authors: Feng Zhao, Author; Jordi J. Mallorquí, Author
Publication year: 2019
Pages: pp 8350 - 8361
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN descriptor terms] amplitude
[IGN descriptor terms] pixel-based approach
[IGN descriptor terms] Barcelona
[IGN descriptor terms] temporal coherence
[IGN descriptor terms] moiré radar image
[IGN descriptor terms] Radarsat image
[IGN descriptor terms] differential interferometry
[IGN descriptor terms] phase measurement
Abstract: (Author) Pixel selection is a crucial step in all advanced Differential Interferometric Synthetic Aperture Radar (DInSAR) techniques, with a direct impact on the quality of the final DInSAR products. In this paper, a full-resolution phase quality estimator, the temporal phase coherence (TPC), is proposed for DInSAR pixel selection. The method is able to work with both distributed scatterers (DSs) and permanent scatterers (PSs). The influence on TPC of different neighboring window sizes and types of interferogram combination, both single-master (SM) and multi-master (MM), has been studied. The relationship between TPC and the phase standard deviation (STD) of the selected pixels has also been derived. Together with the classical coherence and amplitude-dispersion methods, the TPC pixel selection algorithm has been tested on 37 VV-polarization Radarsat-2 images of Barcelona Airport. Results show the feasibility and effectiveness of the TPC pixel selection algorithm. Besides an obvious increase in the number of selected pixels, the new method shows further advantages compared with the two classical ones. The proposed pixel selection algorithm has an affordable computational cost and is easy to implement and incorporate into any advanced DInSAR processing chain for the identification of high-quality pixels.
Record number: A2019-593
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2920536
Online publication date: 16/07/2019
Online: http://doi.org/10.1109/TGRS.2019.2920536
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94585
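The phase-quality idea the abstract builds on can be illustrated with a simplified temporal-coherence estimator: coherently average the unit phasors of a pixel's residual interferometric phases across the stack. This is a generic sketch of the concept, not the paper's exact TPC estimator (which additionally involves spatial neighborhood filtering and the SM/MM interferogram combinations discussed above):

```python
import numpy as np

def temporal_coherence(phase_residuals):
    """Simplified temporal-coherence-style phase quality measure.

    `phase_residuals` holds, for one pixel, the residual interferometric
    phase (radians) in each of N interferograms. Coherently averaging
    the unit phasors gives a value near 1 for stable pixels (PS or DS)
    and near 0 for noisy ones.
    """
    phasors = np.exp(1j * np.asarray(phase_residuals))
    return np.abs(phasors.mean(axis=-1))
```

Pixel selection then amounts to keeping pixels whose estimated coherence exceeds a chosen quality threshold, exactly the role TPC plays in the processing chain described above.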
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 11 (November 2019) . - pp 8350 - 8361 [article]

Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019)
[article]
Title: Learning to segment moving objects
Document type: Article/Communication
Authors: Pavel Tokmakov, Author; Cordelia Schmid, Author; Karteek Alahari, Author
Publication year: 2019
Pages: pp 282 - 301
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] temporal coherence
[IGN descriptor terms] video image
[IGN descriptor terms] motion
[IGN descriptor terms] moving object
[IGN descriptor terms] object recognition
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] image sequence
Abstract: (Author) We study the problem of segmenting moving objects in unconstrained videos. Given a video, the task is to segment all the objects that exhibit independent motion in at least one frame. We formulate this as a learning problem and design our framework with three cues: (1) independent object motion between a pair of frames, which complements object recognition, (2) object appearance, which helps to correct errors in motion estimation, and (3) temporal consistency, which imposes additional constraints on the segmentation. The framework is a two-stream neural network with an explicit memory module. The two streams encode appearance and motion cues in a video sequence respectively, while the memory module captures the evolution of objects over time, exploiting the temporal consistency. The motion stream is a convolutional neural network trained on synthetic videos to segment independently moving objects in the optical flow field. The module to build a "visual memory" in video, i.e., a joint representation of all the video frames, is realized with a convolutional recurrent unit learned from a small number of training video sequences. For every pixel in a frame of a test video, our approach assigns an object or background label based on the learned spatio-temporal features as well as the "visual memory" specific to the video. We evaluate our method extensively on three benchmarks: DAVIS, the Freiburg-Berkeley motion segmentation dataset, and SegTrack. In addition, we provide an extensive ablation study to investigate both the choice of the training data and the influence of each component in the proposed framework.
Record number: A2018-601
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1122-2
Online publication date: 22/09/2018
Online: https://doi.org/10.1007/s11263-018-1122-2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92528
in International journal of computer vision > vol 127 n° 3 (March 2019) . - pp 282 - 301 [article]
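The three cues named in the last abstract (motion, appearance, and temporal consistency enforced by a memory module) can be caricatured in a few lines. The sketch below is a toy stand-in, assuming precomputed per-frame appearance and motion score maps; the exponential smoothing replaces the paper's convolutional recurrent memory purely for illustration:

```python
import numpy as np

def segment_sequence(appearance, motion, alpha=0.6, threshold=0.5):
    """Toy fusion of the three cues from the abstract.

    `appearance` and `motion` are (T, H, W) arrays of per-pixel scores
    in [0, 1] from the two streams. A running "memory" state smooths
    the fused evidence over time (a crude stand-in for the ConvGRU
    visual memory), and thresholding it yields per-frame object /
    background labels.
    """
    memory = np.zeros(appearance.shape[1:])
    labels = []
    for app, mot in zip(appearance, motion):
        evidence = 0.5 * (app + mot)                      # fuse the two streams
        memory = alpha * memory + (1 - alpha) * evidence  # temporal consistency
        labels.append(memory > threshold)                 # object vs background
    return np.stack(labels)
```

The smoothing term is what suppresses one-frame flickers in either stream, which is the role temporal consistency plays as cue (3) in the framework described above.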