Descriptor
Documents available in this category (46)
An improved constrained simultaneous iterative reconstruction technique for ionospheric tomography / Yi Bin Yao in GPS solutions, Vol 24 n° 3 (July 2020)
[article]
Title: An improved constrained simultaneous iterative reconstruction technique for ionospheric tomography
Document type: Article/Communication
Authors: Yi Bin Yao, Author; Changzhi Zhai, Author; Jian Kong, Author; et al.
Publication year: 2020
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Géodésie spatiale
[Termes IGN] données GNSS
[Termes IGN] interpolation
[Termes IGN] modèle ionosphérique
[Termes IGN] reconstruction 3D
[Termes IGN] teneur totale en électrons
[Termes IGN] tomographie
[Termes IGN] voxel
Abstract (author): Global Navigation Satellite System (GNSS) is now widely used for continuous ionospheric observations. Three-dimensional computerized ionospheric tomography (3DCIT) is an important tool for the reconstruction of electron density distributions in the ionosphere through effective use of the GNSS data. More specifically, the 3DCIT technique is able to resolve the three-dimensional electron density distributions over the reconstructed area based on the GNSS slant total electron content (STEC) observations. We present an Improved Constrained Simultaneous Iterative Reconstruction Technique (ICSIRT) algorithm that differs from the traditional ionospheric tomography methods in three ways. First, the ICSIRT computes the electron density corrections based on the product of the intercept and electron density within voxels so that the assignment of corrections at different heights becomes more reasonable. Second, an Inverse Distance Weighted (IDW) interpolation is used to restrict the electron density values in the voxels not traversed by GNSS rays, thereby ensuring the smoothness of the reconstructed region. Third, to improve the reconstruction accuracy around the HmF2 (the peak height of the F2 layer) altitude, a multiresolution grid is adopted in the vertical direction, with a 10-km resolution from 200 to 420 km and a 50-km resolution at other altitudes. The new algorithm has been applied to the GNSS data over the European and North American regions in different case studies that involve different seasonal conditions as well as a major storm. In the European region experiment, reconstruction results show that the new ICSIRT algorithm can effectively improve the reconstruction of the GNSS data. The electron density profiles retrieved from ICSIRT are much closer to the ionosonde observations than those from its predecessor, namely, the Constrained Simultaneous Iteration Reconstruction Technique (CSIRT).
The reconstruction accuracy is significantly improved. In the North American region experiment, the electron density profiles in ICSIRT results show better agreement with incoherent scatter radar observations than CSIRT, even for the topside profiles.
Record number: A2020-227
Author affiliation: non-IGN
Theme: POSITIONNEMENT
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s10291-020-00981-4
Online publication date: 18/04/2020
Online: https://doi.org/10.1007/s10291-020-00981-4
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94958
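The simultaneous-iteration update at the heart of SIRT-family tomography can be sketched in a few lines. This is a minimal, hypothetical illustration rather than the authors' implementation: `icsirt_step` is an invented name, and only the intercept-times-density weighting described in the abstract is reproduced here (the IDW constraint and the multiresolution vertical grid are omitted).

```python
def icsirt_step(A, b, x, relax=1.0):
    """One simultaneous-iteration update over all rays (SIRT family).

    A     : A[i][j] is the intercept length of ray i in voxel j
    b     : measured slant TEC per ray
    x     : current electron-density estimate per voxel
    relax : relaxation factor

    Per the abstract, each voxel's share of a ray's residual is weighted
    by intercept length times current density, so corrections concentrate
    where the contribution to the ray integral is large.
    """
    corr = [0.0] * len(x)
    for a_i, b_i in zip(A, b):
        pred = sum(a * xv for a, xv in zip(a_i, x))   # modelled slant TEC
        w = [a * xv for a, xv in zip(a_i, x)]         # intercept * density
        denom = sum(a * wj for a, wj in zip(a_i, w))  # normalisation term
        if denom == 0.0:
            continue  # ray misses all occupied voxels
        resid = b_i - pred
        for j, wj in enumerate(w):
            corr[j] += relax * resid * wj / denom
    # corrections are accumulated over all rays, then applied at once
    return [xv + c for xv, c in zip(x, corr)]

# Toy example: one ray crossing two voxels with unit intercepts.
x_new = icsirt_step(A=[[1.0, 1.0]], b=[3.0], x=[1.0, 1.0])
```

With a relaxation factor of 1 and a single ray, one step already matches the observation exactly; with many rays, the simultaneous accumulation averages competing corrections instead of applying them ray by ray as in ART.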
in GPS solutions > Vol 24 n° 3 (July 2020)
[article]

Estimation of tropospheric wet refractivity using tomography method and artificial neural networks in Iranian case study / Mir Reza Ghaffari Razin in GPS solutions, Vol 24 n° 3 (July 2020)
[article]
Title: Estimation of tropospheric wet refractivity using tomography method and artificial neural networks in Iranian case study
Document type: Article/Communication
Authors: Mir Reza Ghaffari Razin, Author; Behzad Voosoghi, Author
Publication year: 2020
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications de géodésie spatiale
[Termes IGN] coefficient de corrélation
[Termes IGN] données GPS
[Termes IGN] erreur moyenne quadratique
[Termes IGN] erreur relative
[Termes IGN] Iran
[Termes IGN] réfraction atmosphérique
[Termes IGN] réseau neuronal artificiel
[Termes IGN] retard troposphérique
[Termes IGN] retard troposphérique zénithal
[Termes IGN] tomographie par GPS
[Termes IGN] vapeur d'eau
[Termes IGN] voxel
Abstract (author): Using the observations from local and regional GPS networks, slant wet delays (SWDs) can be estimated for each line of sight between satellite and receiver. The SWD observations are used to model horizontal and vertical variations of the wet refractivity in the atmosphere above the study area by means of the tomography method. In the tomography, the horizontal variations of tropospheric wet refractivity are modeled with a polynomial of degree two in latitude and longitude, while altitude variations are modeled as discrete layers of constant height. The main innovation is to estimate the tropospheric parameters for each line of sight by artificial neural networks (ANNs). The SWD obtained from GPS observations for the different signals at each station is compared with the SWD generated by the ANNs (SWD_GPS – SWD_ANN). The square of the difference between these two values is introduced as the cost function of the ANNs. For evaluation, we used observations from October 27 to 31, 2011; the availability of GPS and radiosonde data is the main reason for choosing this timeframe. The correlation coefficient, root mean square error (RMSE), and relative error are used to evaluate the proposed model. The results were also compared with those of the voxel-based troposphere tomography method. For a more detailed evaluation, four test stations were selected and ANN zenith wet delays (ZWD_ANN) were compared with ZWD_GPS; observations of the test stations are not used in the modeling step. The correlation coefficient in the testing step for TomoANN and Tomovoxel is 0.9006 and 0.8863, respectively. The mean RMSE over the 5 days for TomoANN and Tomovoxel is 0.63 and 0.71 mm/km, respectively. Also, the average relative error at the four test stations is 15.37% for TomoANN and 19.69% for Tomovoxel.
The results demonstrate the better capability of the proposed method for modeling the tropospheric wet refractivity over Iran.
Record number: A2020-238
Author affiliation: non-IGN
Theme: POSITIONNEMENT
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s10291-020-00979-y
Online publication date: 10/04/2020
Online: https://doi.org/10.1007/s10291-020-00979-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94986
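The training cost described in the abstract (the squared difference between GPS-derived and network-generated slant wet delays) and the RMSE evaluation metric are simple to state in code. The function names below are illustrative, not taken from the paper:

```python
def swd_cost(swd_gps, swd_ann):
    """Cost function per the abstract: sum of squared differences between
    GPS-derived and ANN-generated slant wet delays, one term per
    satellite-receiver line of sight."""
    return sum((g - a) ** 2 for g, a in zip(swd_gps, swd_ann))

def rmse(pred, ref):
    """Root mean square error, one of the evaluation metrics used."""
    n = len(ref)
    return (sum((p - r) ** 2 for p, r in zip(pred, ref)) / n) ** 0.5

# Two lines of sight, each mispredicted by 0.5 (arbitrary units).
cost = swd_cost([2.0, 3.0], [1.5, 3.5])
err = rmse([1.5, 3.5], [2.0, 3.0])
```

Minimising `swd_cost` over the network weights is what drives the ANN toward the GPS-derived delays; RMSE, correlation, and relative error are then computed on held-out test stations.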
in GPS solutions > Vol 24 n° 3 (July 2020)
[article]

Tree annotations in LiDAR data using point densities and convolutional neural networks / Ananya Gupta in IEEE Transactions on geoscience and remote sensing, vol 58 n° 2 (February 2020)
[article]
Title: Tree annotations in LiDAR data using point densities and convolutional neural networks
Document type: Article/Communication
Authors: Ananya Gupta, Author; Jonathan Byrne, Author; David Moloney, Author
Publication year: 2020
Pages: pp 971 - 981
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] Dublin (Irlande ; ville)
[Termes IGN] extraction d'arbres
[Termes IGN] image spectrale
[Termes IGN] Montréal (Québec)
[Termes IGN] segmentation
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voxel
[Termes IGN] zone urbaine
Abstract (author): LiDAR provides highly accurate 3-D point clouds. However, the data need to be manually labeled in order to provide subsequent useful information. Manual annotation of such data is time-consuming, tedious, and error prone; hence, in this article, we present three automatic methods for annotating trees in LiDAR data. The first method requires high-density point clouds and uses certain LiDAR data attributes for the purpose of tree identification, achieving almost 90% accuracy. The second method uses a voxel-based 3-D convolutional neural network on low-density LiDAR data sets and is able to identify most large trees accurately but struggles with smaller ones due to the voxelization process. The third method is a scaled version of the PointNet++ method; it works directly on outdoor point clouds and achieves an F-score of 82.1% on the ISPRS benchmark data set, comparable to the state-of-the-art methods but with increased efficiency.
Record number: A2020-095
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2942201
Online publication date: 11/10/2019
Online: https://doi.org/10.1109/TGRS.2019.2942201
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94658
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 2 (February 2020) . - pp 971 - 981
[article]
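The voxelization step that both the density-based method and the voxel-based 3-D CNN depend on amounts to binning points into a sparse grid whose per-voxel counts are the point densities. `voxelize` below is a hypothetical helper, not the authors' code:

```python
from collections import defaultdict

def voxelize(points, size):
    """Bin 3-D points into cubic voxels of edge length `size`.

    Returns a sparse grid mapping integer voxel indices to point counts;
    the count is the per-voxel point density that density-based tree
    identification (and the input tensor of a voxel CNN) starts from.
    """
    grid = defaultdict(int)
    for x, y, z in points:
        # floor division maps each coordinate to its voxel index
        grid[(int(x // size), int(y // size), int(z // size))] += 1
    return dict(grid)

# Three points, 1 m voxels: two fall in the same cell, one in a neighbour.
grid = voxelize([(0.1, 0.2, 0.3), (0.4, 0.5, 0.6), (1.5, 0.0, 0.0)], 1.0)
```

The sparsity of the dictionary representation matters in practice: outdoor LiDAR scenes are mostly empty space, and the abstract's note that small trees are lost in voxelization corresponds to their few points being diluted across coarse cells.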
Title: Deep learning for semantic feature extraction in aerial imagery
Document type: Thesis/HDR
Authors: Ananya Gupta, Author; Hujun Yin, Thesis supervisor; Simon Watson, Thesis supervisor
Publisher: Manchester [United Kingdom]: University of Manchester
Publication year: 2020
Extent: 151 p.
Format: 21 x 30 cm
General note: bibliography
A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Science and Engineering
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] cartographie d'urgence
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'arbres
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Dublin (Irlande ; ville)
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à très haute résolution
[Termes IGN] image aérienne
[Termes IGN] image multitemporelle
[Termes IGN] OpenStreetMap
[Termes IGN] réseau routier
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voxel
Decimal index: THESE Theses and HDR
Abstract (author): Remote sensing provides image and LiDAR data that can be useful for a number of tasks such as disaster mapping and surveying. Deep learning (DL) has been shown to provide good results in extracting knowledge from input data sources by means of learning intermediate representation features. However, popular DL methods require large-scale datasets for training, which are costly and time-consuming to obtain. This thesis investigates semantic knowledge extraction from remote sensing data using DL methods in regimes with limited labelled data. Firstly, semantic segmentation methods are compared and analysed on the task of aerial image segmentation. It is shown that pretraining on ImageNet improves the segmentation results despite the domain shift between ImageNet images and aerial images. A framework for mapping road networks in disaster-struck areas is proposed. It uses pre- and post-disaster imagery and labels from OpenStreetMap (OSM), forgoing the need for costly manually labelled data. Graph-based methods are used to update the pre-existing road maps from OSM. Experiments on a disaster dataset from Palu, Indonesia show the efficacy of the proposed method. A method for semantic feature extraction from aerial imagery is proposed which is shown to work well for multitemporal high-resolution image registration. These features are able to deal with temporal variations caused by seasonal changes. Methods for tree identification in LiDAR data have been proposed to overcome the need for manually labelled data. The first method works on high-density point clouds and uses certain LiDAR data attributes for tree identification, achieving almost 90% accuracy. The second uses a voxel-based 3D convolutional neural network on low-density LiDAR datasets and is able to identify most large trees.
The third method is a scaled version of PointNet++ and achieves an F-score of 82.1% on the ISPRS benchmark, comparable to the state-of-the-art methods but with increased efficiency. Finally, saliency methods used for explainability in image analysis are extended to work on 3D point clouds and voxel-based networks to help aid explainability in this area. It is shown that edge and corner features are deemed important by these networks for classification. These features are also demonstrated to be inherently sparse and easily pruned.
Contents:
1- Introduction
2- Background and Literature Review
3- Aerial Image Segmentation with Open Data
4- Aerial Image Registration
5- Tree Annotations in LiDAR Data
6- 3D Point Cloud Feature Explanations
7- Conclusions and Future Work
Record number: 28302
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Foreign thesis
Thesis note: PhD Thesis: Science and Engineering: University of Manchester: 2020
DOI: none
Online: https://www.research.manchester.ac.uk/portal/files/184627877/FULL_TEXT.PDF
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98051
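The thesis's third tree-identification method scales PointNet++. One generic building block of PointNet++-style networks, farthest-point sampling of centroids from a point cloud, can be sketched as follows (an illustration of the standard algorithm, not the thesis implementation):

```python
def farthest_point_sampling(pts, k):
    """Greedy farthest-point sampling: repeatedly pick the point farthest
    from the already-chosen set, yielding k well-spread centroids around
    which PointNet++-style set abstraction groups local neighbourhoods."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    chosen = [0]                          # start from an arbitrary seed
    dist = [d2(p, pts[0]) for p in pts]   # distance to the chosen set
    while len(chosen) < k:
        nxt = max(range(len(pts)), key=dist.__getitem__)
        chosen.append(nxt)
        # each point's distance to the set only shrinks as points are added
        dist = [min(dist[j], d2(pts[j], pts[nxt])) for j in range(len(pts))]
    return chosen

# Two clusters: the second centroid lands in the far cluster.
idx = farthest_point_sampling([(0, 0, 0), (1, 0, 0), (10, 0, 0), (10, 1, 0)], 2)
```

Sampling spread-out centroids rather than random ones is what lets hierarchical point networks cover sparse outdoor scans with few groups, which is relevant to the efficiency claim made for the scaled variant.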
Title: Learning 3D generation and matching
Document type: Thesis/HDR
Authors: Thibault Groueix, Author; Mathieu Aubry, Thesis supervisor
Publisher: Paris: Ecole Nationale des Ponts et Chaussées ENPC
Publication year: 2020
Extent: 169 p.
Format: 21 x 30 cm
General note: bibliography
A doctoral thesis in the domain of automated signal and image processing submitted to École Doctorale Paris-Est Mathématiques et Sciences et Technologies de l'Information et de la Communication
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement de formes
[Termes IGN] appariement dense
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] déformation de surface
[Termes IGN] isométrie
[Termes IGN] maillage
[Termes IGN] modélisation 3D
[Termes IGN] reconstruction 3D
[Termes IGN] reconstruction d'image
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] voxel
Decimal index: THESE Theses and HDR
Abstract (author): The goal of this thesis is to develop deep learning approaches to model and analyse 3D shapes. Progress in this field could democratize artistic creation of 3D assets, which currently requires time and expert skills with technical software. We focus on the design of deep learning solutions for two particular tasks, key to many 3D modeling applications: single-view reconstruction and shape matching. A single-view reconstruction (SVR) method takes as input a single image and predicts the physical world which produced that image. SVR dates back to the early days of computer vision. In particular, in the 1960s, Lawrence G. Roberts proposed to align simple 3D primitives to the input image under the assumption that the physical world is made of cuboids. Another approach, proposed by Berthold Horn in the 1970s, is to decompose the input image into intrinsic images and use those to predict the depth of every input pixel. Since several configurations of shapes, texture and illumination can explain the same image, both approaches need to form assumptions on the distribution of images and 3D shapes to resolve the ambiguity. In this thesis, we learn these assumptions from large-scale datasets instead of manually designing them. Learning allows us to perform complete object reconstruction, including parts which are not visible in the input image. Shape matching aims at finding correspondences between 3D objects. Solving this task requires both a local and a global understanding of 3D shapes, which is hard to achieve explicitly.
Instead, we train neural networks on large-scale datasets to solve this task and capture this knowledge implicitly through their internal parameters. Shape matching supports many 3D modeling applications such as attribute transfer, automatic rigging for animation, or mesh editing. The first technical contribution of this thesis is a new parametric representation of 3D surfaces modeled by neural networks. The choice of data representation is a critical aspect of any 3D reconstruction algorithm. Until recently, most of the approaches in deep 3D model generation were predicting volumetric voxel grids or point clouds, which are discrete representations. Instead, we present an alternative approach that predicts a parametric surface deformation, i.e., a mapping from a template to a target geometry. To demonstrate the benefits of such a representation, we train a deep encoder-decoder for single-view reconstruction using our new representation. Our approach, dubbed AtlasNet, is the first deep single-view reconstruction approach able to reconstruct meshes from images without relying on independent post-processing, and can do so at arbitrary resolution without memory issues. A more detailed analysis of AtlasNet reveals that it also generalizes better to categories it has not been trained on than other deep 3D generation approaches. Our second main contribution is a novel shape matching approach based purely on reconstruction via deformations. We show that the quality of the shape reconstructions is critical to obtaining good correspondences, and therefore introduce a test-time optimization scheme to refine the learned deformations. For humans and other deformable shape categories deviating by a near-isometry, our approach can leverage a shape template and isometric regularization of the surface deformations.
As categories exhibiting non-isometric variations, such as chairs, do not have a clear template, we learn how to deform any shape into any other and leverage cycle-consistency constraints to learn meaningful correspondences. Our reconstruction-for-matching strategy operates directly on point clouds, is robust to many types of perturbations, and outperforms the state of the art by 15% on dense matching of real human scans.
Contents:
1- Introduction
2 Related Work
3 AtlasNet: A Papier-Mache Approach to Learning 3D Surface Generation
4 3D-CODED : 3D Correspondences by Deep Deformation
5 Unsupervised cycle-consistent deformation for shape matching
6 Conclusion
Record number: 28310
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: French thesis
Thesis note: Doctoral thesis: Automated signal and image processing: Paris-Est: 2020
Host organization: LIGM
DOI: none
Online: https://tel.archives-ouvertes.fr/tel-03127055v2/document
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98201

A new method of equiangular sectorial voxelization of single-scan terrestrial laser scanning data and its applications in forest defoliation estimation / Langning Huo in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
Pairwise coarse registration of point clouds in urban scenes using voxel-based 4-planes congruent sets / Yusheng Xu in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
Voxel-based 3D point cloud semantic segmentation: unsupervised geometric and relationship featuring vs deep learning methods / Florent Poux in ISPRS International journal of geo-information, vol 8 n° 5 (May 2019)
Conditional random field and deep feature learning for hyperspectral image classification / Fahim Irfan Alam in IEEE Transactions on geoscience and remote sensing, vol 57 n° 3 (March 2019)
A time-geographic approach to quantifying wildlife-road interactions / Rebecca W. Loraamm in Transactions in GIS, vol 23 n° 1 (February 2019)
Analyzing the role of pulse density and voxelization parameters on full-waveform LiDAR-derived metrics / Pablo Crespo-Peremarch in ISPRS Journal of photogrammetry and remote sensing, vol 146 (December 2018)
A greyscale voxel model for airborne lidar data applied to building detection / Liying Wang in Photogrammetric record, vol 33 n° 164 (December 2018)
Analyzing the vertical distribution of crown material in mixed stand composed of two temperate tree species / Olivier Martin-Ducup in Forests, vol 9 n° 11 (November 2018)
SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data / Miroslava Slavcheva in International journal of computer vision, vol 126 n° 6 (June 2018)
A voxel- and graph-based strategy for segmenting man-made infrastructures using perceptual grouping laws: comparison and evaluation / Yusheng Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 6 (June 2018)