Descriptor
Termes IGN > géomatique > données localisées
données localisées. Synonym(s): spatial data; données géospatiales; données géographiques; données à référence spatiale
Documents available in this category (3735)
Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering / Shangpeng Sun in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering
Document type: Article/Communication
Authors: Shangpeng Sun; Changying Li; Peng Wah Chee; et al.
Publication year: 2020
Pages: pp 195 - 207
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] cartographie 3D
[Termes IGN] classification basée sur les régions
[Termes IGN] distribution spatiale
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de la végétation
[Termes IGN] gestion de production
[Termes IGN] Gossypium (genre)
[Termes IGN] phénologie
[Termes IGN] rendement agricole
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
[Termes IGN] surveillance de la végétation
Abstract: (author) Three-dimensional high-throughput plant phenotyping techniques provide an opportunity to measure plant organ-level traits which can be highly useful to plant breeders. The number and locations of cotton bolls, which are the fruit of cotton plants and an important component of fiber yield, are arguably among the most important phenotypic traits but are complex to quantify manually. Hence, there is a need for effective and efficient cotton boll phenotyping solutions to support breeding research and monitor the crop yield, leading to better production management systems. We developed a novel methodology for 3D cotton boll mapping within a plot in situ. Point clouds were reconstructed from multi-view images using the structure-from-motion algorithm. The method used a region-based classification algorithm that successfully accounted for noise due to sunlight. The developed density-based clustering method could estimate boll counts even when bolls were in direct contact with other bolls. By applying the method to point clouds from 30 plots of cotton plants, boll counts, boll volume and position data were derived. The average accuracy of boll counting was up to 90%, and the R2 values between fiber yield and boll number, and between fiber yield and boll volume, were 0.87 and 0.66, respectively. The 3D boll spatial distribution could also be analyzed using this method. This low-cost method, which provided improved site-specific data on cotton bolls, can also be applied to other plant/fruit mapping analyses after some modification.
Record number: A2020-048
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.011
Online publication date: 25/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.011
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94561
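The density-based clustering that the abstract above uses to separate touching bolls can be sketched with a brute-force, DBSCAN-style pass over the point cloud. This is a minimal illustration, not the authors' implementation; `eps` and `min_pts` are arbitrary placeholder values.

```python
import math

def cluster_points(points, eps=0.05, min_pts=3):
    """Group 3D points into clusters: two points fall in the same cluster
    if a chain of neighbours within distance `eps` connects them, in the
    spirit of density-based clustering such as DBSCAN."""
    n = len(points)
    labels = [-1] * n  # -1 = unassigned / noise

    def neighbours(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            continue  # too sparse: leave as noise
        labels[i] = cluster
        stack = seeds
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nb = neighbours(j)
                if len(nb) >= min_pts:
                    stack.extend(nb)  # expand the cluster from dense points
        cluster += 1
    return labels, cluster
```

Each resulting cluster would correspond to one candidate boll, so the cluster count estimates the boll count.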
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 195 - 207 [article]
Copies (3)

| Barcode | Call number | Support | Location | Section | Availability |
|---|---|---|---|---|---|
| 081-2020021 | RAB | Journal | Centre de documentation | En réserve L003 | Available |
| 081-2020023 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan |
| 081-2020022 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan |

Tree annotations in LiDAR data using point densities and convolutional neural networks / Ananya Gupta in IEEE Transactions on geoscience and remote sensing, vol 58 n° 2 (February 2020)
[article]
Title: Tree annotations in LiDAR data using point densities and convolutional neural networks
Document type: Article/Communication
Authors: Ananya Gupta; Jonathan Byrne; David Moloney
Publication year: 2020
Pages: pp 971 - 981
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] Dublin (Irlande ; ville)
[Termes IGN] extraction d'arbres
[Termes IGN] image spectrale
[Termes IGN] Montréal (Québec)
[Termes IGN] segmentation
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voxel
[Termes IGN] zone urbaine
Abstract: (author) LiDAR provides highly accurate 3-D point clouds. However, the data need to be manually labeled before they yield useful information. Manual annotation of such data is time-consuming, tedious, and error-prone, so in this article we present three automatic methods for annotating trees in LiDAR data. The first method requires high-density point clouds and uses certain LiDAR data attributes for tree identification, achieving almost 90% accuracy. The second method uses a voxel-based 3-D convolutional neural network on low-density LiDAR data sets and is able to identify most large trees accurately but struggles with smaller ones due to the voxelization process. The third method is a scaled version of the PointNet++ method, works directly on outdoor point clouds, and achieves an F score of 82.1% on the ISPRS benchmark data set, comparable to the state-of-the-art methods but with increased efficiency.
Record number: A2020-095
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2942201
Online publication date: 11/10/2019
Online: https://doi.org/10.1109/TGRS.2019.2942201
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94658
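The voxelization step mentioned in the abstract above, which makes small trees hard to identify, can be sketched as a simple occupancy count per grid cell. This is an illustrative sketch, not the paper's pipeline; the 0.5 m voxel size is an arbitrary assumption.

```python
def voxelize(points, voxel_size=0.5):
    """Map each 3D point to an integer voxel index and count the points
    per occupied voxel; a voxel-based 3D CNN consumes this kind of grid."""
    occupied = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        occupied[key] = occupied.get(key, 0) + 1
    return occupied
```

A small tree may collapse into only one or two occupied voxels after this step, which is consistent with the abstract's observation that the 3D CNN struggles with smaller trees.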
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 2 (February 2020) . - pp 971 - 981 [article]
Spatio-temporal mobility and Twitter: 3D visualisation of mobility flows / Joaquín Osorio Arjona in Journal of maps, vol 16 n° 1 (02/01/2020)
[article]
Title: Spatio-temporal mobility and Twitter: 3D visualisation of mobility flows
Document type: Article/Communication
Authors: Joaquín Osorio Arjona; Juan Carlos García Palomares
Publication year: 2020
Pages: pp 153 - 160
General note: bibliography
Languages: English (eng)
Descriptor: [Termes IGN] analyse spatio-temporelle
[Termes IGN] base de données localisées
[Termes IGN] données localisées des bénévoles
[Termes IGN] espace-temps
[Termes IGN] interface de programmation
[Termes IGN] Madrid (Espagne)
[Termes IGN] migration pendulaire
[Termes IGN] mobilité urbaine
[Termes IGN] réseau social
[Termes IGN] système d'information géographique
[Termes IGN] Time-geography
[Termes IGN] Twitter
[Termes IGN] visualisation 3D
[Vedettes matières IGN] Géovisualisation
Abstract: (author) Recent progress in computation and the spatio-temporal richness of data obtained from new sources have invigorated Time Geography. It is now possible to visualise and represent movements of people in a dual spatial-temporal dimension. In this work, we use geo-located data from the social media platform Twitter to show the value of new data sources for Time Geography. The methodology consists of visualising space-time paths in 2D and 3D in four study zones with different land-use profiles, based on tweets compiled over the course of two years. The results provide a view of behaviours occurring in the areas of study throughout the day, with complementary data to show the population's main activity at different times.
Record number: A2020-645
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1080/17445647.2020.1778549
Online publication date: 18/06/2020
Online: https://doi.org/10.1080/17445647.2020.1778549
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96071
in Journal of maps > vol 16 n° 1 (02/01/2020) . - pp 153 - 160 [article]
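The space-time paths described in the abstract above can be sketched by sorting geo-located messages chronologically and lifting them into a third, temporal dimension. Field names such as `lon`, `lat` and `time` are illustrative, not the Twitter API's.

```python
from datetime import datetime

def space_time_path(tweets):
    """Order geo-located tweets by timestamp and return the 3D polyline
    (x, y, hours since the first tweet) that Time Geography renders as a
    space-time path."""
    pts = sorted(tweets, key=lambda t: t["time"])
    t0 = pts[0]["time"]
    return [(p["lon"], p["lat"], (p["time"] - t0).total_seconds() / 3600.0)
            for p in pts]
```

Plotting these (x, y, z) triples as a polyline, with z as the vertical axis, yields the 3D visualisation the article discusses.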
Title: 3rd International Workshop on Spatial Data Quality (SDQ 2020): Joint Workshop of EuroGeographics - EuroSDR - OGC - ISO TC 211 - ICA
Document type: Conference proceedings
Authors: Jonathan Holmes, scientific editor; Carol Agius, scientific editor; Joep Crompvoets, scientific editor
Publisher: Dublin: European Spatial Data Research (EuroSDR)
Publication year: 2020
Series: EuroSDR Workshop report
Conference: SDQ 2020, 3rd International Workshop on Spatial Data Quality, 28/01/2020 - 29/01/2020, Valletta, Malta, open access proceedings
Extent: 84 p.
Format: 21 x 30 cm
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Géomatique
[Termes IGN] données localisées
[Termes IGN] qualité des données
Abstract: (publisher) This workshop report presents the highlights of the EuroGeographics/EuroSDR/OGC/ISO TC 211/ICA workshop on Spatial Data Quality that took place on 28 and 29 January 2020 in Valletta, Malta.
Record number: 14262
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Proceedings
DOI: none
Online publication date: 01/12/2020
Online: http://www.eurosdr.net/sites/default/files/uploaded_files/eurosdr_spatial_data_q [...]
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96899
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Title: Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving
Document type: Thesis/HDR
Authors: Edouard Capellier; Véronique Berge-Cherfaoui, thesis supervisor; Franck Davoine, thesis supervisor
Publisher: Compiègne: Université de Technologie de Compiègne (UTC)
Publication year: 2020
Extent: 123 p.
Format: 21 x 30 cm
General note: bibliography. Thesis submitted for the degree of Doctor of the UTC, Robotics and Information and Systems Science and Technology
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] apprentissage profond
[Termes IGN] carte routière
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] image RVB
[Termes IGN] intelligence artificielle
[Termes IGN] navigation autonome
[Termes IGN] segmentation sémantique
[Termes IGN] théorie de Dempster-Shafer
[Termes IGN] vision par ordinateur
[Termes IGN] visualisation 3D
Shelf mark: THESE Theses and HDR
Abstract: (author) The perception task is paramount for self-driving vehicles. Being able to extract accurate and significant information from sensor inputs is mandatory to ensure safe operation. Recent progress in machine-learning techniques has revolutionized the way perception modules for autonomous driving are developed and evaluated, vastly surpassing previous state-of-the-art results in practically all perception-related tasks. Efficient and accurate ways to model the knowledge used by a self-driving vehicle are therefore needed; indeed, self-awareness, and appropriate modeling of doubt, are desirable properties for such a system. In this work, we assumed that evidence theory is an efficient way to finely model the information extracted from deep neural networks. Based on these intuitions, we developed three perception modules that rely on machine learning and evidence theory, and tested them on real-life data. First, we proposed an asynchronous evidential occupancy grid mapping algorithm that fuses semantic segmentation results obtained from RGB images and LIDAR scans. Its asynchronous nature makes it particularly efficient at handling sensor failures. The semantic information is used to define decay rates at the cell level and to handle potentially moving objects. Then, we proposed an evidential classifier of LIDAR objects. This system is trained to distinguish between vehicles and vulnerable road users, which are detected via a clustering algorithm. The classifier can be reinterpreted as performing a fusion of simple evidential mass functions. Moreover, a simple statistical filtering scheme can be used to reject classifier outputs that are incoherent with regard to the training set, allowing the classifier to work in an open world and reject other types of objects.
Finally, we investigated road detection in LIDAR scans with deep neural networks. We proposed two architectures inspired by recent state-of-the-art LIDAR processing systems. A training dataset was acquired and labeled semi-automatically from road maps. A set of fused neural networks reaches satisfactory results, which allowed us to use them in an evidential road mapping and object detection algorithm that runs at 10 Hz.
Contents:
1- Introduction
2- Machine learning for perception in autonomous driving
3- The evidence theory, and its applications in autonomous driving
4- A synchronous evidential grid mapping from RGB images and LIDAR scans
5- Evidential LIDAR object classification
6- Road detection in LIDAR scans
7- Application of RoadSeg: evidential road surface mapping
8- Conclusion
Record number: 25895
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: French thesis
Thesis note: Doctoral thesis: Robotics and Information and Systems Science and Technology: UTC: 2020
Host organisation: Laboratoire Heudiasyc
nature-HAL: Thèse
DOI: none
Online: https://hal.science/tel-02897810v1
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96013
Cartographie sémantique hybride de scènes urbaines à partir de données image et Lidar / Mohamed Boussaha (2020)
Constraint based evaluation of generalized images generated by deep learning / Azelle Courtial (2020)
Contribution à la segmentation et à la modélisation 3D du milieu urbain à partir de nuages de points / Tania Landes (2020)
Creation of inspirational Web Apps that demonstrate the functionalities offered by the ArcGIS API for JavaScript / Arthur Genet (2020)
Détection et vectorisation automatique d'objets linéaires dans des nuages de points de voirie / Etienne Barçon (2020)
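The evidential (Dempster-Shafer) fusion on which the Capellier thesis above builds can be sketched with Dempster's rule of combination over a two-class frame such as {vehicle, vulnerable road user}. This is a textbook sketch of the rule, not the thesis code.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two evidential mass functions (frozenset -> mass) with
    Dempster's rule: multiply the masses of intersecting focal elements
    and renormalise by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to incompatible hypotheses
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

After combination, the mass left on the full frame quantifies the remaining doubt, which is the kind of self-awareness the thesis abstract argues for.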