Descriptor
Documents available in this category (1635)
Towards a polyalgorithm for land use change detection / Rishu Saxena in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
[article]
Title: Towards a polyalgorithm for land use change detection
Document type: Article/Communication
Authors: Rishu Saxena; Layne T. Watson; Randolph H. Wynne; et al.
Publication year: 2018
Pages: pp 217 - 234
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] comparative analysis
[IGN terms] land cover change
[IGN terms] change detection
[IGN terms] time series
Free keywords: EWMACD (exponentially weighted moving average change detection), LandTrendR
Abstract: (Author) One way of analyzing satellite images for land use and land cover change (LULCC) is time series analysis (TSA). Most of the many TSA-based LULCC algorithms proposed in the remote sensing community perform well on the datasets for which they were designed, but their performance on randomly chosen datasets from across the globe has not been studied. A polyalgorithm combines several basic algorithms, each meant to solve the same problem, producing a strategy that unites the strengths and circumvents the weaknesses of the constituent algorithms. The foundation of the proposed TSA-based ‘polyalgorithm’ for LULCC is three algorithms (BFAST, EWMACD, and LandTrendR), precisely described mathematically and chosen to be fundamentally distinct from each other in design and in the phenomena they capture. Analysis of results representing success, failure, and parameter sensitivity for each algorithm is presented. For a given pixel, the Hausdorff distance is used to measure the distance between the change times (breakpoints) obtained from two different algorithms. TimeSync validation data, a dataset based on human interpretation of Landsat time series in concert with historical aerial photography, is used for validation. The polyalgorithm yields more accurate results than EWMACD and LandTrendR alone but, counterintuitively, not better than BFAST alone. This nascent work will be directly useful in land use and land cover change studies, of interest to terrestrial science research, especially regarding anthropogenic impacts on the environment.
Record number: A2018-401
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.07.002
Online publication date: 27/07/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.07.002
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90832
in ISPRS Journal of photogrammetry and remote sensing > vol 144 (October 2018) . - pp 217 - 234
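The per-pixel breakpoint comparison described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation, and the breakpoint values below are invented:

```python
def hausdorff_1d(a, b):
    """Symmetric Hausdorff distance between two non-empty sets of
    breakpoint times (in the same units, e.g. decimal years)."""
    def directed(u, v):
        # For each time in u, distance to its nearest time in v;
        # keep the worst case over u.
        return max(min(abs(x - y) for y in v) for x in u)
    return max(directed(a, b), directed(b, a))

# Breakpoints (decimal years) from two hypothetical algorithms for one pixel:
bfast_breaks = [2003.5, 2010.0]
ewmacd_breaks = [2003.6, 2009.5, 2015.0]
print(hausdorff_1d(bfast_breaks, ewmacd_breaks))  # → 5.0
```

A small distance means the two algorithms agree on when the pixel changed; the extra 2015.0 breakpoint, unmatched by the first set, dominates the result here.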
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018101 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2018103 | DEP-EXM | Journal | LASTIG | Deposit in unit | Not for loan
081-2018102 | DEP-EAF | Journal | Nancy | Deposit in unit | Not for loan

Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Augmented reality meets computer vision : efficient data generation for urban driving scenes
Document type: Article/Communication
Authors: Hassan Abu Alhaija; Siva Karthik Mustikovela; Lars Mescheder; Andreas Geiger; Carsten Rother
Publication year: 2018
Pages: pp 961 - 972
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] machine learning
[IGN terms] object detection
[IGN terms] augmented reality
[IGN terms] urban scene
[IGN terms] computer vision
Abstract: (Author) The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm that combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360-degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models.
Further, we demonstrate the utility of the proposed approach for training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on fully synthetic data or on limited amounts of annotated real data.
Record number: A2018-417
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1070-x
Online publication date: 07/03/2018
Online: https://doi.org/10.1007/s11263-018-1070-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90900
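The core compositing operation behind this kind of augmentation, blending a rendered object over a real background image with an alpha mask, can be sketched as follows. This is an illustrative toy under assumed inputs, not the paper's rendering pipeline:

```python
import numpy as np

def composite(background, rendered, alpha):
    """Alpha-blend a rendered RGB object over a real RGB background.
    All arrays are floats in [0, 1]; alpha has shape (H, W, 1) and
    broadcasts over the three colour channels."""
    return alpha * rendered + (1.0 - alpha) * background

# Toy 2x2 image: a fully opaque object covers the left column only.
bg = np.zeros((2, 2, 3))            # black real background
obj = np.ones((2, 2, 3))            # white rendered object
alpha = np.array([[[1.0], [0.0]],
                  [[1.0], [0.0]]])  # object mask
out = composite(bg, obj, alpha)     # left column white, right column black
```

In the paper's setting the background is a captured street image and the rendered layer comes with realistic lighting from the 360-degree environment maps; the blend itself stays this simple.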
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 961 - 972

Image-based synthesis for deep 3D human pose estimation / Grégory Rogez in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Image-based synthesis for deep 3D human pose estimation
Document type: Article/Communication
Authors: Grégory Rogez; Cordelia Schmid
Publication year: 2018
Pages: pp 993 - 1008
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] machine learning
[IGN terms] 3D geolocated data
[IGN terms] pose estimation
[IGN terms] convolutional neural network
[IGN terms] image synthesis
Abstract: (Author) This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D motion capture data. Given a candidate 3D pose, our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms most of the published works in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for real-world images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images. Compared to data generated from more classical rendering engines, our synthetic images do not require any domain adaptation or fine-tuning stage.
Record number: A2018-418
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1071-9
Online publication date: 19/03/2018
Online: https://doi.org/10.1007/s11263-018-1071-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90901
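The K-way classification formulation mentioned in the abstract reduces, at its core, to assigning a pose vector to its nearest pose-class centre. A hedged sketch follows; the centroids and pose values are made up, and the paper derives its classes by clustering a large motion-capture set rather than fixing them by hand:

```python
import numpy as np

def pose_to_class(pose, centroids):
    """Return the index of the nearest centroid (Euclidean distance)
    for a flattened 3D pose vector; centroids has shape (K, D)."""
    distances = np.linalg.norm(centroids - pose, axis=1)
    return int(np.argmin(distances))

# Toy example: two "pose classes" in a 6-D space (2 joints x 3 coords).
centroids = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                      [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]])
query = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0])
label = pose_to_class(query, centroids)  # → 1
```

In the full method the CNN predicts this class label directly from the image, which is why a very large, densely clustered training set is needed.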
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 993 - 1008

Integration of ZY3-02 satellite laser altimetry data and stereo images for high-accuracy mapping / Guoyuan Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 9 (September 2018)
[article]
Title: Integration of ZY3-02 satellite laser altimetry data and stereo images for high-accuracy mapping
Document type: Article/Communication
Authors: Guoyuan Li; Xinming Tang; Xiaoming Gao; et al.
Publication year: 2018
Pages: pp 569 - 578
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] China
[IGN terms] altimetry data
[IGN terms] ICESat data
[IGN terms] ZiYuan-3 imagery
[IGN terms] rational function model
[IGN terms] ZiYuan-3
Abstract: (Author) Integration of satellite laser altimetry data and stereo images without ground control points (GCPs) is an attractive method for global mapping. In this paper, we propose a new strategy for integrating ZiYuan3-02 (ZY3-02) satellite stereo images and laser altimetry data for high-accuracy mapping without GCPs, using a rigorous sensor model (RSM) with a laser-ranging constraint for synchronized capture and a rational function model (RFM) with a laser-elevation constraint for non-synchronized capture. Four experimental regions in China are selected to validate the method. The results show that ZY3-02 satellite laser altimetry data can improve the elevation accuracy of stereo images to better than 3.0 m without GCPs. These conclusions are valuable for the development of China's next generation of surveying and mapping satellites.
Record number: A2018-362
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.84.9.569
Online publication date: 01/09/2018
Online: https://doi.org/10.14358/PERS.84.9.569
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90673
in Photogrammetric Engineering & Remote Sensing, PERS > vol 84 n° 9 (September 2018) . - pp 569 - 578
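For readers unfamiliar with the rational function model (RFM) used in this record, an image coordinate is a ratio of polynomials in normalized ground coordinates. The first-order sketch below uses made-up coefficients; operational RFMs use 20-term cubic polynomials in each numerator and denominator:

```python
def rfm_row(P, L, H, num, den):
    """First-order RFM sample: image row = N(P, L, H) / D(P, L, H),
    with N and D linear here purely for illustration. P, L, H are
    normalized latitude, longitude, and height; the paper constrains
    H with laser altimetry heights instead of ground control points."""
    n = num[0] + num[1] * L + num[2] * P + num[3] * H
    d = den[0] + den[1] * L + den[2] * P + den[3] * H
    return n / d

num = [0.01, 1.0, 0.02, -0.1]    # hypothetical numerator coefficients
den = [1.0, 0.001, 0.0, 0.0005]  # hypothetical denominator coefficients
row = rfm_row(P=0.2, L=0.5, H=0.1, num=num, den=den)
```

The laser-elevation constraint amounts to fixing H at selected points to the altimeter-derived height, which removes the elevation bias that otherwise accumulates without GCPs.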
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2018091 | RAB | Journal | Centre de documentation | In reserve L003 | Available

Research on the estimation model of vegetation water content in halophyte leaves based on the newly developed vegetation indices / Zhe Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 9 (September 2018)
[article]
Title: Research on the estimation model of vegetation water content in halophyte leaves based on the newly developed vegetation indices
Document type: Article/Communication
Authors: Zhe Li; Fei Zhang; Lihua Chen; Haiwei Zhang; Hsiang-Te Kung
Publication year: 2018
Pages: pp 538 - 548
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] plant growth
[IGN terms] leaf (vegetation)
[IGN terms] vegetation index
[IGN terms] halophyte
[IGN terms] Populus euphratica
[IGN terms] spectral signature
[IGN terms] Xinjiang (China)
[IGN terms] Tamarix (genus)
[IGN terms] vegetation water content
Abstract: (Author) Quantitative estimation of vegetation water content (VWC) is useful for monitoring plant physiological growth. The relationship between VWC and vegetation water indices was analyzed, and the optimal estimation model was established. The results show that: (1) absorption bands primarily fell within 380 to 400 nm, 680 to 720 nm, 1420 to 1450 nm, 1900 to 1940 nm, and 2450 to 2500 nm; (2) comparing published vegetation water indices with the newly developed indices, DVI(1712, 1382), NDSI(2201, 1870), and RSI(2259, 1870) correlated better with VWC than the published indices; and (3) NDSI(2201, 1870) and RSI(2259, 1870) performed well in estimating VWC, while DVI(1712, 1382) gave only a rough estimate. Moreover, a linear combination of DVI(1712, 1382), NDSI(2201, 1870), and RSI(2259, 1870) improved the estimation of VWC, and this linear combination was found to be the best index for estimating VWC in the arid area of northwestern China.
Record number: A2018-361
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.84.9.537
Online publication date: 01/09/2018
Online: https://doi.org/10.14358/PERS.84.9.537
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90672
in Photogrammetric Engineering & Remote Sensing, PERS > vol 84 n° 9 (September 2018) . - pp 538 - 548
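The three index forms named in the abstract follow the standard two-band definitions. The specific band pairs, e.g. NDSI(2201, 1870), are the article's; the reflectance values below are illustrative only:

```python
def dvi(r1, r2):
    """Difference vegetation index: R(λ1) - R(λ2)."""
    return r1 - r2

def ndsi(r1, r2):
    """Normalized difference spectral index: (R(λ1) - R(λ2)) / (R(λ1) + R(λ2))."""
    return (r1 - r2) / (r1 + r2)

def rsi(r1, r2):
    """Ratio spectral index: R(λ1) / R(λ2)."""
    return r1 / r2

# Illustrative reflectances at 2201 nm and 1870 nm:
value = ndsi(0.30, 0.10)  # ≈ 0.5
```

The estimation model in the article is then a regression of measured leaf water content against one of these indices, or against a linear combination of all three.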
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2018091 | RAB | Journal | Centre de documentation | In reserve L003 | Available

Adaptive correlation filters with long-term and short-term memory for object tracking / Chao Ma in International journal of computer vision, vol 126 n° 8 (August 2018)
ICARE-VEG: A 3D physics-based atmospheric correction method for tree shadows in urban areas / Karine R.M. Adeline in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
Robust detection and affine rectification of planar homogeneous texture for scene understanding / Shahzor Ahmad in International journal of computer vision, vol 126 n° 8 (August 2018)
Three-point-based solution for automated motion parameter estimation of a multi-camera indoor mapping system with planar motion constraint / Fangning He in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
Hierarchical cellular automata for visual saliency / Yao Qin in International journal of computer vision, vol 126 n° 7 (July 2018)
Label propagation with ensemble of pairwise geometric relations : towards robust large-scale retrieval of object instances / Xiaomeng Wu in International journal of computer vision, vol 126 n° 7 (July 2018)
Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s) / Danna Gurari in International journal of computer vision, vol 126 n° 7 (July 2018)
Real-time relative mobile target positioning using GPS-assisted stereo videogrammetry / Bahadir Ergun in Survey review, vol 50 n° 361 (July 2018)
A review of accuracy assessment for object-based image analysis: from per pixel to per-polygon approaches [review article] / Su Ye in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)