Descriptor
IGN terms > informatique > base de données > base de données orientée objet > base de données d'objets mobiles
Documents available in this category (169)
Moving objects aware sensor mesh fusion for indoor reconstruction from a couple of 2D lidar scans / Teng Wu (2020)
Title: Moving objects aware sensor mesh fusion for indoor reconstruction from a couple of 2D lidar scans. Document type: Article/Communication. Authors: Teng Wu; Bruno Vallet; Cédric Demonceaux; Jingbin Liu. Publisher: International Society for Photogrammetry and Remote Sensing (ISPRS). Year of publication: 2020. Series: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, num. 43-B2. Projects: PLaTINUM / Gouet-Brunet, Valérie. Conference: ISPRS 2020, Commission 2, virtual Congress, Imaging today foreseeing tomorrow, 31/08/2020–02/09/2020, Nice (online), France. Pages: pp 507–514. Format: 21 x 30 cm. General note: bibliography. Languages: English (eng). Descriptor: [IGN subject headings] Lasergrammétrie
[IGN terms] données lidar
[IGN terms] données localisées 2D
[IGN terms] espace intérieur
[IGN terms] fusion de données
[IGN terms] objet mobile
[IGN terms] reconstruction 3D
[IGN terms] semis de points
Abstract: (author) Indoor mapping is attracting growing attention with the development of 2D and 3D cameras and lidar sensors. Lidar systems can provide very high resolution, accurate point clouds. When the goal is to reconstruct the static part of a scene, moving objects must be detected and removed, which can prove challenging. This paper proposes a generic method for merging meshes produced from lidar data that tackles moving-object removal and static-scene reconstruction at once. The method is adapted to a platform collecting point clouds from two lidar sensors with different scan directions, which results in different data quality. First, a mesh is efficiently produced from each sensor by exploiting its natural topology. Second, a visibility analysis is performed to handle occlusions (due to varying viewpoints) and remove moving objects. A Boolean optimization then selects which triangles should be removed from each mesh. Finally, a stitching method connects the selected mesh pieces. Our method is demonstrated on a NavVis M3 (a 2D laser ranging system).
Record number: C2020-008. Authors' affiliation: UGE-LASTIG+Ext (2020- ). Theme: IMAGERIE. Nature: Communication. nature-HAL: ComAvecCL&ActesPubliésIntl. DOI: 10.5194/isprs-archives-XLIII-B2-2020-507-2020. Online publication date: 12/08/2020. Online: https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-507-2020. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95659
A polyhedra-based model for moving regions in databases / Florian Heinz in International journal of geographical information science IJGIS, vol 34 n° 1 (January 2020)
[article]
Title: A polyhedra-based model for moving regions in databases. Document type: Article/Communication. Authors: Florian Heinz; Ralf Hartmut Güting. Year of publication: 2020. Pages: pp 41–73. General note: bibliography. Languages: English (eng). Descriptor: [IGN subject headings] Systèmes d'information géographique
[IGN terms] base de données orientée objet
[IGN terms] CGAL
[IGN terms] implémentation (informatique)
[IGN terms] isomorphisme
[IGN terms] MADS
[IGN terms] modélisation spatio-temporelle
[IGN terms] objet mobile
[IGN terms] polyèdre
Abstract: (author) Moving objects databases store and process objects with a focus on their spatiotemporal behaviour. To achieve this, the data model must be suitable for efficiently storing and processing moving objects. Currently, a unit-based model is widely used, in which each moving object is divided into one or more time intervals during which the object behaves uniformly. This model is also used for a data type called moving regions, which represents moving and shape-changing regions such as forest fires or cloud fields. However, this model struggles to support operations such as the union, difference or intersection of two moving regions: the resulting objects are unnecessarily bloated and unwieldy to handle because the resulting number of units is generally very high. In this paper, an alternative model for moving regions is proposed, based on polyhedra. Furthermore, this work develops an isomorphism between moving regions and polyhedra, including all relevant operations, which has the additional advantage that several implementations are already readily available; this is demonstrated by a reference implementation using the existing and well-tested Computational Geometry Algorithms Library (CGAL).
Record number: A2020-007. Authors' affiliation: non IGN. Theme: GEOMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1080/13658816.2019.1616090. Online publication date: 17/05/2019. Online: https://doi.org/10.1080/13658816.2019.1616090. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94387
in International journal of geographical information science IJGIS > vol 34 n° 1 (January 2020) . - pp 41–73 [article]
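The key idea of the polyhedra model above is that a moving region is a single polyhedron in (x, y, t) space, so the region at any instant is simply the cross-section of that polyhedron at the corresponding t. A minimal sketch of such a time slice, assuming a hypothetical triangular-face representation (this is not the paper's CGAL-based implementation; the function names are illustrative):

```python
def slice_face(face, t0):
    # Intersect one triangular (x, y, t) face with the plane t = t0.
    # Returns the intersection points projected to the (x, y) plane.
    pts = []
    for i in range(3):
        (x1, y1, t1), (x2, y2, t2) = face[i], face[(i + 1) % 3]
        if (t1 - t0) * (t2 - t0) < 0:  # the edge crosses the plane
            a = (t0 - t1) / (t2 - t1)  # linear interpolation parameter
            pts.append((x1 + a * (x2 - x1), y1 + a * (y2 - y1)))
        elif t1 == t0:                 # edge start lies exactly on the plane
            pts.append((x1, y1))
    return pts

def region_at(faces, t0):
    # Cross-section of the (x, y, t) polyhedron at time t0: the set of
    # boundary points contributed by every face (duplicates removed).
    points = set()
    for face in faces:
        points.update(slice_face(face, t0))
    return points
```

Set operations such as union or intersection of two moving regions then reduce to ordinary polyhedral Boolean operations in (x, y, t), which is exactly why an existing library like CGAL can be reused.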
Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning / Clément Dechesne in Remote sensing, Vol 11 n° 24 (December-2 2019)
[article]
Title: Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning. Document type: Article/Communication. Authors: Clément Dechesne; Sébastien Lefèvre; Rodolphe Vadaine; Guillaume Hajduch; Ronan Fablet. Year of publication: 2019. Projects: SESAME / Fablet, Ronan. Pages: n° 2997. General note: bibliography. Languages: English (eng). Descriptor: [IGN subject headings] Traitement d'image radar et applications
[IGN terms] apprentissage profond
[IGN terms] classification par réseau neuronal convolutif
[IGN terms] détection d'objet
[IGN terms] détection de cible
[IGN terms] image Sentinel-SAR
[IGN terms] navire
[IGN terms] objet mobile
Abstract: (author) The monitoring and surveillance of maritime activities are critical issues in both military and civilian fields, including, among others, fisheries monitoring, maritime traffic surveillance, coastal and at-sea safety operations, and tactical situations. In operational contexts, ship detection and identification are traditionally performed by a human observer who identifies all kinds of ships through visual analysis of remotely sensed images. Such a task is very time-consuming and cannot be conducted at very large scale, whereas Sentinel-1 SAR data now provide regular, worldwide coverage. Meanwhile, with the emergence of GPUs, deep learning methods are now established as state-of-the-art solutions for computer vision, replacing human intervention in many contexts. They have been shown to be well suited to ship detection, most often with very high resolution SAR or optical imagery. In this paper, we go one step further and investigate a deep neural network for the joint classification and characterization of ships from Sentinel-1 SAR data. We exploit the synergies between AIS (Automatic Identification System) and Sentinel-1 data to build large training datasets. We design a multi-task neural network architecture composed of one joint convolutional network connected to three task-specific networks, namely for ship detection, classification, and length estimation. The experimental assessment shows that our network provides promising results, with accurate classification and length estimation (classification overall accuracy: 97.25%; mean length error: 4.65 m ± 8.55 m).
Record number: A2019-632. Authors' affiliation: LASTIG MATIS+Ext (2012-2019). Theme: IMAGERIE/INFORMATIQUE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.3390/rs11242997. Online publication date: 13/12/2019. Online: https://doi.org/10.3390/rs11242997. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95325
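The shared-trunk/three-heads layout described in the abstract above (one joint network feeding task-specific branches for detection, classification, and length estimation) can be sketched with plain NumPy. This is a toy dense-layer stand-in, not the authors' convolutional architecture; all layer sizes and the five-class assumption are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# shared trunk: one dense layer standing in for the joint convolutional network
W_shared = rng.standard_normal((16, 8))

# three task-specific heads, as in the abstract
W_detect = rng.standard_normal((8, 1))   # ship / no-ship detection
W_class = rng.standard_normal((8, 5))    # 5 hypothetical ship classes
W_length = rng.standard_normal((8, 1))   # length regression (metres)

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)                   # ReLU shared features
    p_detect = 1.0 / (1.0 + np.exp(-(h @ W_detect)))    # sigmoid detection score
    logits = h @ W_class
    p_class = np.exp(logits - logits.max(axis=1, keepdims=True))
    p_class /= p_class.sum(axis=1, keepdims=True)       # softmax over classes
    length = h @ W_length                               # linear regression head
    return p_detect, p_class, length
```

In training, the three task losses (binary cross-entropy, cross-entropy, and a regression loss) would be summed so that gradients from every task shape the shared features.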
in Remote sensing > Vol 11 n° 24 (December-2 2019) . - n° 2997 [article]
Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
[article]
Title: Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning. Document type: Article/Communication. Authors: Benjamin Kellenberger; Diego Marcos; Sylvain Lobry; Devis Tuia. Year of publication: 2019. Pages: pp 9524–9533. General note: bibliography. Languages: English (eng). Descriptor: [IGN subject headings] Traitement d'image optique
[IGN terms] analyse d'image orientée objet
[IGN terms] apprentissage profond
[IGN terms] classification orientée objet
[IGN terms] classification par réseau neuronal
[IGN terms] détection d'objet
[IGN terms] données localisées
[IGN terms] échantillonnage de données
[IGN terms] faune locale
[IGN terms] image captée par drone
[IGN terms] Namibie
[IGN terms] objet mobile
[IGN terms] réalité de terrain
[IGN terms] recensement
Abstract: (author) We present an Active Learning (AL) strategy for reusing a deep Convolutional Neural Network (CNN)-based object detector on a new dataset. This is of particular interest for wildlife conservation: given a set of images acquired with an Unmanned Aerial Vehicle (UAV) and manually labeled ground truth, our goal is to train an animal detector that can be reused for repeated acquisitions, e.g., in follow-up years. Domain shifts between datasets typically prevent such direct model application. We thus propose to bridge this gap using AL and introduce a new criterion called Transfer Sampling (TS). TS uses Optimal Transport (OT) to find corresponding regions between the source and target datasets in the space of CNN activations. The CNN scores in the source dataset are used to rank the samples according to their likelihood of being animals, and this ranking is transferred to the target dataset. Unlike conventional AL criteria that exploit model uncertainty, TS focuses on very confident samples, allowing quick retrieval of true positives in the target dataset, where positives are typically extremely rare and difficult to find by visual inspection. We extend TS with a new window-cropping strategy that further accelerates sample retrieval. Our experiments show that with both strategies combined, less than half a percent of oracle-provided labels is enough to find almost 80% of the animals in challenging sets of UAV images, beating all baselines by a margin.
Record number: A2019-598. Authors' affiliation: non IGN. Theme: IMAGERIE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1109/TGRS.2019.2927393. Online publication date: 20/08/2019. Online: http://doi.org/10.1109/TGRS.2019.2927393. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94592
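The Transfer Sampling idea described above (rank source samples by detector score, then carry that ranking over to the target set through cross-dataset correspondences) can be sketched as follows. A simple nearest-neighbour match in feature space stands in here for the paper's Optimal Transport correspondence, and the 2-D features are illustrative:

```python
def transfer_sampling(source_feats, source_scores, target_feats, k):
    # Rank source samples by detector score (most confident first), then
    # propose, for each top-ranked source sample, the closest not-yet-chosen
    # target sample in feature space. Assumes k <= len(target_feats).
    ranked = sorted(range(len(source_feats)),
                    key=lambda i: source_scores[i], reverse=True)
    proposals = []
    for i in ranked:
        sx, sy = source_feats[i]
        j = min((j for j in range(len(target_feats)) if j not in proposals),
                key=lambda j: (target_feats[j][0] - sx) ** 2
                              + (target_feats[j][1] - sy) ** 2)
        proposals.append(j)
        if len(proposals) == k:
            break
    return proposals
```

Because the proposals come from the most confident source detections, the annotator sees likely positives first, which is the point of TS when positives are extremely rare in the target set.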
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 12 (December 2019) . - pp 9524–9533 [article]
Scene context-driven vehicle detection in high-resolution aerial images / Chao Tao in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
[article]
Title: Scene context-driven vehicle detection in high-resolution aerial images. Document type: Article/Communication. Authors: Chao Tao; Li Mi; Yansheng Li; et al. Year of publication: 2019. Pages: pp 7339–7351. General note: bibliography. Languages: English (eng). Descriptor: [IGN subject headings] Traitement d'image optique
[IGN terms] apprentissage profond
[IGN terms] classification orientée objet
[IGN terms] détection d'objet
[IGN terms] image à haute résolution
[IGN terms] image aérienne
[IGN terms] objet mobile
[IGN terms] véhicule automobile
Abstract: (author) As the spatial resolution of remote sensing images gradually improves, "scene-object" collaborative image interpretation becomes feasible. Unfortunately, this idea is not fully exploited in vehicle detection from high-resolution aerial images, and most existing methods could be improved by considering the variability of vehicle spatial distribution across image scenes and treating vehicle detection as scene-specific. With this motivation, a scene context-driven vehicle detection method is proposed in this paper. First, we perform scene classification using a deep learning method and then detect vehicles in roads and parking lots separately, with different vehicle detectors. Afterward, we further refine the detection results using different post-processing rules according to scene type. Experimental results show that the proposed approach outperforms state-of-the-art algorithms, with a higher detection accuracy rate and a lower false alarm rate.
Record number: A2019-535. Authors' affiliation: non IGN. Theme: IMAGERIE. Nature: Article. nature-HAL: ArtAvecCL-RevueIntern. DOI: 10.1109/TGRS.2019.2912985. Online publication date: 03/06/2019. Online: http://doi.org/10.1109/TGRS.2019.2912985. Electronic resource format: article URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94131
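The two-stage pipeline in the abstract above (scene classification first, then a scene-specific detector followed by scene-specific post-processing) amounts to a dispatch over scene types. A minimal sketch, in which the classifier, detectors, thresholds, and tile fields are all hypothetical stand-ins for the paper's learned components:

```python
# Hypothetical stand-ins for the scene classifier, the two detectors, and
# the scene-specific post-processing rules described in the abstract.
def classify_scene(tile):
    return "parking" if tile["parked_fraction"] > 0.5 else "road"

def detect_road(tile):      # e.g. a detector tuned for sparse, moving vehicles
    return [v for v in tile["candidates"] if v["moving"]]

def detect_parking(tile):   # e.g. a detector tuned for dense, static vehicles
    return [v for v in tile["candidates"] if not v["moving"]]

DETECTORS = {"road": detect_road, "parking": detect_parking}

def detect_vehicles(tile):
    scene = classify_scene(tile)          # step 1: scene classification
    vehicles = DETECTORS[scene](tile)     # step 2: scene-specific detector
    # step 3: scene-specific post-processing (here: a confidence floor)
    threshold = 0.3 if scene == "parking" else 0.5
    return [v for v in vehicles if v["score"] >= threshold]
```

The dispatch table makes the scene-specific behaviour explicit: adding a new scene type means registering one more detector and one more post-processing rule.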
in IEEE Transactions on geoscience and remote sensing > Vol 57 n° 10 (October 2019) . - pp 7339–7351 [article]
SMSM: a similarity measure for trajectory stops and moves / Andre L. Lehmann in International journal of geographical information science IJGIS, vol 33 n° 9 (September 2019)
Relative space-based GIS data model to analyze the group dynamics of moving objects / Mingxiang Feng in ISPRS Journal of photogrammetry and remote sensing, vol 153 (July 2019)
Patch-based detection of dynamic objects in CrowdCam images / Gagan Kanojia in The Visual Computer, vol 35 n° 4 (April 2019)
A conceptual framework for studying collective reactions to events in location-based social media / Alexander Dunkel in International journal of geographical information science IJGIS, Vol 33 n° 3-4 (March - April 2019)
Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019)
Point clouds for direct pedestrian pathfinding in urban environments / Jesus Balado in ISPRS Journal of photogrammetry and remote sensing, vol 148 (February 2019)
Robust vehicle detection in aerial images using bag-of-words and orientation aware scanning / Hailing Zhou in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
A data model for moving regions of fixed shape in databases / Florian Heinz in International journal of geographical information science IJGIS, vol 32 n° 9-10 (September - October 2018)
A context-based geoprocessing framework for optimizing meetup location of multiple moving objects along road networks / Shaohua Wang in International journal of geographical information science IJGIS, vol 32 n° 7-8 (July - August 2018)
Using interactions and dynamics for mining groups of moving objects from trajectory data / Corrado Loglisci in International journal of geographical information science IJGIS, vol 32 n° 7-8 (July - August 2018)
Range-image: Incorporating sensor topology for lidar point cloud processing / Pierre Biasutti in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 6 (June 2018)
Attribute trajectory analysis: a framework to analyse attribute changes using trajectory analysis techniques / Long Zhang in International journal of geographical information science IJGIS, vol 32 n° 5-6 (May - June 2018)
Occupancy modelling for moving object detection from Lidar point clouds: A comparative study / Wen Xiao in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-2/W4 (September 2017)
The geometry of space-time prisms with uncertain anchors / Bart Kuijpers in International journal of geographical information science IJGIS, vol 31 n° 9-10 (September - October 2017)
Index-supported pattern matching on tuples of time-dependent values / Fabio Valdés in Geoinformatica, vol 21 n° 3 (July - September 2017)
Disocclusion of 3D LiDAR point clouds using range images / Pierre Biasutti in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-1/W1 (May 2017)
Design principles of a stream-based framework for mobility analysis / Loic Salmon in Geoinformatica, vol 21 n° 2 (April - June 2017)
Distributed processing of big mobility data as spatio-temporal data streams / Zdravko Galić in Geoinformatica, vol 21 n° 2 (April - June 2017)
Panda∗: A generic and scalable framework for predictive spatio-temporal queries / Abdeltawab M. Hendawi in Geoinformatica, vol 21 n° 2 (April - June 2017)