Descriptor
Documents available in this category (158)
Software comparison for underwater archaeological photogrammetric applications / Marinos Vlachos (2019)
Titre : Software comparison for underwater archaeological photogrammetric applications Type de document : Article/Communication Auteurs : Marinos Vlachos, Auteur ; Louise Berger, Auteur ; Rose Mathelier, Auteur ; P. Agrafiotis, Auteur ; Dimitrios Skarlatos, Auteur Editeur : International Society for Photogrammetry and Remote Sensing ISPRS Année de publication : 2019 Collection : International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750 num. 42-2/W15 Projets : 3-projet - voir note / Conférence : CIPA 2019, 27th CIPA International Symposium, Documenting the past for a better future 01/09/2019 05/09/2019 Ávila Espagne OA ISPRS Archives Importance : 7 p. Note générale : bibliographie
The contribution of M. Vlachos, P. Agrafiotis and D. Skarlatos is part of the iMARECULTURE project (Advanced VR, iMmersive Serious Games and Augmented REality as Tools to Raise Awareness and Access to European Underwater CULTURal heritagE, Digital Heritage), which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 727153.
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] analyse comparative
[Termes IGN] données localisées 3D
[Termes IGN] logiciel de photogrammétrie
[Termes IGN] scène sous-marine
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
Résumé : (auteur) This paper investigates whether and how the choice of SfM-MVS software affects the 3D reconstruction of submerged archaeological sites. Specifically, the Agisoft Photoscan, VisualSFM, SURE, 3D Zephyr and Reality Capture packages were used and evaluated according to their performance in 3D reconstruction, using specific metrics over the reconstructed underwater scenes. It must be clarified that the scope of this study is not to evaluate the specific algorithms or steps used by the various software packages, but to evaluate the final results, and specifically the generated 3D point clouds. To address these research questions, a dataset from an ancient shipwreck lying 45 meters below sea level is used. The dataset is composed of 19 images with a very small camera-to-object distance (1 meter) and 42 images with a larger camera-to-object distance (3 meters). Using a common bundle adjustment for all 61 images, a reference point cloud derived from the close-range dataset is compared with the point clouds of the far-range dataset generated by the different photogrammetric packages. Following that, the total number of points, cloud-to-cloud distances, surface roughness, surface density and a combined 3D metric were compared to evaluate which package performed best. Numéro de notice : C2019-074 Affiliation des auteurs : ENSG+Ext (2012-2019) Thématique : IMAGERIE Nature : Communication nature-HAL : ComAvecCL&ActesPubliésIntl DOI : 10.5194/isprs-archives-XLII-2-W15-1195-2019 Date de publication en ligne : 26/08/2019 En ligne : https://doi.org/10.5194/isprs-archives-XLII-2-W15-1195-2019 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99700
Towards visual urban scene understanding for autonomous vehicle path tracking using GPS positioning data / Citlalli Gamez Serna (2019)
Titre : Towards visual urban scene understanding for autonomous vehicle path tracking using GPS positioning data Type de document : Thèse/HDR Auteurs : Citlalli Gamez Serna, Auteur ; Yassine Ruichek, Directeur de thèse Editeur : Dijon : Université Bourgogne Franche-Comté UBFC Année de publication : 2019 Importance : 178 p. Format : 21 x 30 cm Note générale : bibliographie
Thèse de Doctorat de l'Université Bourgogne Franche-Comté préparée à l'Université de Technologie de Belfort-Montbéliard, Informatique
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] compréhension de l'image
[Termes IGN] instance
[Termes IGN] milieu urbain
[Termes IGN] navigation autonome
[Termes IGN] récepteur GPS
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
[Termes IGN] signalisation routière
[Termes IGN] système de transport intelligent
[Termes IGN] trajectoire (véhicule non spatial)
[Termes IGN] véhicule sans pilote
[Termes IGN] vision par ordinateur
[Termes IGN] vision stéréoscopique
[Termes IGN] vitesse
Mots-clés libres : suivi d'itinéraire Index. décimale : THESE Thèses et HDR Résumé : (auteur) This PhD thesis focuses on developing a path-tracking approach based on visual perception and localization in urban environments. The proposed approach comprises two systems. The first one concerns environment perception. This task is carried out using deep learning techniques to automatically extract 2D visual features and learn to distinguish the different objects in driving scenarios. Three deep learning techniques are adopted: semantic segmentation to assign each image pixel to a class, instance segmentation to identify separate instances of the same class, and image classification to further recognize the specific labels of the instances. Our system segments 15 object classes and performs traffic sign recognition. The second system refers to path tracking. In order to follow a path, the equipped vehicle first travels and records the route with a stereo vision system and a GPS receiver (learning step). The proposed system analyzes the GPS path off-line and identifies the exact locations of dangerous (sharp) curves and speed limits. Later, once the vehicle is able to localize itself, the vehicle control module, together with our speed negotiation algorithm, takes the extracted information into account and computes the ideal speed to execute. Through experimental results for both systems, we show that the first is capable of precisely detecting and recognizing objects of interest in urban scenarios, while the path-tracking system significantly reduces the lateral errors between the learned and traveled paths. We argue that the fusion of both systems will improve the tracking approach for preventing accidents or implementing autonomous driving. Note de contenu : I- Context and problems
1- Introduction
II- Contribution
2- Proposed datasets
3- Traffic sign classification
4- Visual perception system for urban environments
5- Dynamic speed adaptation system for path tracking based on curvature information and speed limits
III- Conclusions and future works
6- Conclusions and future works
Numéro de notice : 25967 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Thèse française Note de thèse : Thèse de Doctorat : Informatique : UBFC : 2019 Organisme de stage : CIAD Dijon nature-HAL : Thèse DOI : sans En ligne : https://tel.archives-ouvertes.fr/tel-02160966/document Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=96587
Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors / Shibiao Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Titre : Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors Type de document : Article/Communication Auteurs : Shibiao Xu, Auteur ; Xingjia Pan, Auteur ; Er Li, Auteur ; et al., Auteur Année de publication : 2018 Article en page(s) : pp 7369 - 7387 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] champ aléatoire conditionnel
[Termes IGN] détection du bâti
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à haute résolution
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] itération
[Termes IGN] scène urbaine
[Termes IGN] segmentation d'image
[Termes IGN] segmentation hiérarchique
[Termes IGN] toit
[Termes IGN] zone saillante 3D
Résumé : (auteur) Accurate building rooftop extraction from high-resolution aerial images is of crucial importance in a wide range of applications. Owing to the varying appearance and large scale range of scene objects, especially building rooftops at different scales and heights, single-scale or individual prior-based extraction techniques are insufficient for obtaining efficient, generic, and accurate extraction results. Integrating multiscale or multiple-cue techniques appears to be the best way forward, and such integration is the focus of this paper. We first propose a novel salient rooftop detector integrating four correlative RGB-D priors (depth cue, uniqueness prior, shape prior, and transition surface prior) for improved rooftop extraction, addressing the complex issues mentioned above. Then, these correlative cues are computed from image layers created by our multilevel segmentation and fused into a state-of-the-art high-order conditional random field (CRF) framework to locate the rooftop. Finally, an iterative optimization strategy is applied for high-quality solving, which robustly handles the varying appearance of building rooftops. Performance evaluations on the SZTAKI-INRIA benchmark data sets show that our method outperforms the traditional color-based algorithm as well as the original high-order CRF algorithm and its variants. The proposed algorithm is also evaluated and found to produce consistently satisfactory results on various large-scale, real-world data sets. Numéro de notice : A2018-558 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2018.2850972 Date de publication en ligne : 26/07/2018 En ligne : http://dx.doi.org/10.1109/TGRS.2018.2850972 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=91664
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 7369 - 7387 [article]
Remote sensing scene classification using multilayer stacked covariance pooling / Nanjun He in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Titre : Remote sensing scene classification using multilayer stacked covariance pooling Type de document : Article/Communication Auteurs : Nanjun He, Auteur ; Leyuan Fang, Auteur ; Shutao Li, Auteur ; et al., Auteur Année de publication : 2018 Article en page(s) : pp 6899 - 6910 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] matrice de covariance
[Termes IGN] représentation cartographique
[Termes IGN] scène
Résumé : (auteur) This paper proposes a new method, called multilayer stacked covariance pooling (MSCP), for remote sensing scene classification. The innovative contribution of the proposed method is that it is able to naturally combine multilayer feature maps, obtained by pretrained convolutional neural network (CNN) models. Specifically, the proposed MSCP-based classification framework consists of the following three steps. First, a pretrained CNN model is used to extract multilayer feature maps. Then, the feature maps are stacked together, and a covariance matrix is calculated for the stacked features. Each entry of the resulting covariance matrix stands for the covariance of two different feature maps, which provides a natural and innovative way to exploit the complementary information provided by feature maps coming from different layers. Finally, the extracted covariance matrices are used as features for classification by a support vector machine. The experimental results, conducted on three challenging data sets, demonstrate that the proposed MSCP method can not only consistently outperform the corresponding single-layer model but also achieve better classification performance than other pretrained CNN-based scene classification methods. Numéro de notice : A2018-552 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2018.2845668 Date de publication en ligne : 09/07/2018 En ligne : http://dx.doi.org/10.1109/TGRS.2018.2845668 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=91640
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 6899 - 6910 [article]
A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification / Wei Han in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Titre : A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification Type de document : Article/Communication Auteurs : Wei Han, Auteur ; Ruyi Feng, Auteur ; Lizhe Wang, Auteur ; Yafan Cheng, Auteur Année de publication : 2018 Article en page(s) : pp 23 - 43 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse de sensibilité
[Termes IGN] apprentissage profond
[Termes IGN] classification semi-dirigée
[Termes IGN] réseau neuronal convolutif
[Termes IGN] scène
Résumé : (Auteur) High-resolution remote sensing (HRRS) image scene classification plays a crucial role in a wide range of applications and has been receiving significant attention. Recently, remarkable efforts have been made to develop a variety of approaches for HRRS scene classification, wherein deep-learning-based methods have achieved considerable performance in comparison with state-of-the-art methods. However, deep-learning-based methods face a severe limitation: a great number of manually annotated HRRS samples are needed to obtain a reliable model, and there are still not sufficient annotated datasets in the field of remote sensing. In addition, it is a challenge to build a large-scale HRRS image dataset due to the abundant diversities and variations in HRRS images. To address this problem, we propose a semi-supervised generative framework (SSGF), which combines deep learning features, a self-labeling technique, and a discriminative evaluation method to complete the tasks of scene classification and dataset annotation. On this basis, we further develop an extended algorithm (SSGA-E) and evaluate it through exclusive experiments. The experimental results show that SSGA-E outperforms most fully-supervised and semi-supervised methods. It achieved the third-best accuracy on the UCM dataset and the second-best accuracy on the WHU-RS, NWPU-RESISC45, and AID datasets. These impressive results demonstrate that the proposed SSGF and its extension are effective in addressing the lack of annotated HRRS datasets: they can learn valuable information from unlabeled samples to improve classification ability and obtain a reliable annotated dataset for supervised learning.
Numéro de notice : A2018-489 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2017.11.004 Date de publication en ligne : 14/11/2017 En ligne : https://doi.org/10.1016/j.isprsjprs.2017.11.004 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=91225
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) . - pp 23 - 43 [article]
Exemplaires (3)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2018111 | RAB | Revue | Centre de documentation | En réserve L003 | Disponible
081-2018113 | DEP-EXM | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2018112 | DEP-EAF | Revue | Nancy | Dépôt en unité | Exclu du prêt
The design and testing of 3DmoveR: an experimental tool for usability studies of interactive 3D maps / Lukas Herman in Cartographic perspectives, n° 90 (01/10/2018)
Augmented reality meets computer vision: efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning / Rui Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 143 (September 2018)
A deep neural network with spatial pooling (DNNSP) for 3-D point cloud classification / Zhen Wang in IEEE Transactions on geoscience and remote sensing, vol 56 n° 8 (August 2018)
Robust detection and affine rectification of planar homogeneous texture for scene understanding / Shahzor Ahmad in International journal of computer vision, vol 126 n° 8 (August 2018)
Using UAVs for map creation and updating: A case study in Rwanda / Mila Koeva in Survey review, vol 50 n° 361 (July 2018)
A voxel- and graph-based strategy for segmenting man-made infrastructures using perceptual grouping laws: comparison and evaluation / Yusheng Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 6 (juin 2018)
Sensor-topology based simplicial complex reconstruction from mobile laser scanning / Stéphane Guinard in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-2 (June 2018)
Large-scale supervised learning for 3D point cloud labeling: Semantic3d.Net / Timo Hackel in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 5 (mai 2018)
Revue des descripteurs tridimensionnels (3D) pour la catégorisation des nuages de points acquis avec un système LiDAR de télémétrie mobile / Sylvie Daniel in Geomatica, vol 72 n° 1 (March 2018)