Descriptor
Documents available in this category (799)
Analyse, structuration et sémantisation des images aériennes [diaporama] / Valérie Gouet-Brunet (2020)
Title: Analyse, structuration et sémantisation des images aériennes [diaporama]
Document type: Article/Communication
Authors: Valérie Gouet-Brunet, Author
Publisher: Saint-Mandé: Institut national de l'information géographique et forestière - IGN (2012-)
Year of publication: 2020
Projects: Alegoria / Gouet-Brunet, Valérie; HIATUS / Giordano, Sébastien
Conference: Séminaire de recherche IGN 2020, De l’acquisition à la valorisation des big geodata du passé, 24/02/2020, Saint-Mandé, France
Languages: French (fre)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] géoréférencement indirect
[Termes IGN] image aérienne
[Termes IGN] image multitemporelle
[Termes IGN] indexation documentaire
[Termes IGN] indexation sémantique
[Termes IGN] indexation spatiale
[Termes IGN] segmentation sémantique
Abstract: (author) Aerial images, whether vertical or oblique, offer a unique viewpoint on our territory and its evolution. The available collections, born-digital or digitized, range in size from hundreds to millions of items acquired at different periods; they are scattered across various institutions (archives, museums, mapping agencies, blogs, etc.) and are generally catalogued mainly through metadata of variable quality. In this context, we present two ANR research projects led by IGN that share the goal of giving these collections a spatio-temporal structure: the ALEGORIA project focuses on multimodal indexing and visualization of loosely structured institutional iconographic collections, in order to structure, interconnect, and deliver them to researchers, experts, and users in the humanities and social sciences; the HIATUS project concentrates on multi-date aerial survey campaigns, aiming to build photogrammetric tools for their precise georeferencing and for the extraction of semantic information on land-cover states and changes.
Record number: C2020-027 Authors' affiliation: UGE-LASTIG (2020- ) Theme: IMAGERIE Nature: Communication nature-HAL: ComSansActesPubliés-Unpublished DOI: none Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97802
Digital documents
Can be downloaded: Analyse, structuration et sémantisation des images aériennes - author PDF (Adobe Acrobat PDF)

Camera orientation, calibration and inverse perspective with uncertainties: a Bayesian method applied to area estimation from diverse photographs / Grégoire Guillet in ISPRS Journal of photogrammetry and remote sensing, vol 159 (January 2020)
[article]
Title: Camera orientation, calibration and inverse perspective with uncertainties: a Bayesian method applied to area estimation from diverse photographs
Document type: Article/Communication
Authors: Grégoire Guillet, Author; Thomas Guillet, Author; Ludovic Ravanel, Author
Year of publication: 2020
Pages: pp 237-255
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] ajustement de paramètres
[Termes IGN] appariement d'images
[Termes IGN] autocorrélation spatiale
[Termes IGN] distorsion d'image
[Termes IGN] estimation bayesienne
[Termes IGN] étalonnage de chambre métrique
[Termes IGN] figuration de la densité
[Termes IGN] fonction inverse
[Termes IGN] image 2D
[Termes IGN] image aérienne
[Termes IGN] incertitude géométrique
[Termes IGN] longueur focale
[Termes IGN] méthode de Monte-Carlo par chaînes de Markov
[Termes IGN] modèle numérique de surface
[Termes IGN] orientation externe
[Termes IGN] photographie numérique
[Termes IGN] vue 3D
[Termes IGN] vue perspective
Abstract: (Author) Large collections of images have become readily available through modern digital catalogs, from sources as diverse as historical photographs, aerial surveys, or user-contributed pictures. Exploiting the quantitative information present in such wide-ranging collections can greatly benefit studies that follow the evolution of landscape features over decades, such as measuring areas of glaciers to study their shrinking under climate change. However, many available images were taken with low-quality lenses and unknown camera parameters. Useful quantitative data may still be extracted, but it becomes important to both account for imperfect optics and estimate the uncertainty of the derived quantities. In this paper, we present a method to address both these goals, and apply it to the estimation of the area of a landscape feature traced as a polygon on the image of interest. The technique is based on a Bayesian formulation of the camera calibration problem. First, the probability density function (PDF) of the unknown camera parameters is determined for the image, based on matches between 2D (image) and 3D (world) points together with any available prior information. In a second step, the posterior distribution of the feature area of interest is derived from the PDF of camera parameters. In this step, we also model systematic errors arising in the polygon tracing process, as well as uncertainties in the digital elevation model. The resulting area PDF therefore accounts for most sources of uncertainty. We present validation experiments, and show that the model produces accurate and consistent results. We also demonstrate that in some cases, accounting for optical lens distortions is crucial for accurate area determination with consumer-grade lenses. The technique can be applied to many other types of quantitative features to be extracted from photographs when careful error estimation is important.
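The two-step pipeline described in the abstract (a PDF over camera parameters, then a propagated PDF over the feature area) can be illustrated with a plain Monte Carlo draw. This is a minimal sketch, not the paper's implementation: a nadir pinhole view stands in for the full orientation model, and all prior values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical priors on the unknown camera parameters (illustrative
# values, not taken from the paper):
f_mm = rng.normal(50.0, 2.0, 10_000)    # focal length samples (mm)
Z_m = rng.normal(1000.0, 10.0, 10_000)  # camera height samples (m)

poly_area_mm2 = 4.0  # area of the traced polygon on the sensor (mm^2)

# Nadir pinhole model: ground area = sensor area * (Z / f)^2.
# Each parameter sample yields one area sample, so the spread of
# area_m2 is the propagated posterior over the feature area.
area_m2 = poly_area_mm2 * 1e-6 * (Z_m / (f_mm * 1e-3)) ** 2

lo, med, hi = np.percentile(area_m2, [2.5, 50.0, 97.5])
print(f"area ~ {med:.0f} m^2 (95% interval {lo:.0f}-{hi:.0f} m^2)")
```

The same propagation idea extends to the paper's full setting by replacing the closed-form projection with a ray-casting step against the DEM for each parameter sample.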
Record number: A2020-015 Authors' affiliation: non IGN Theme: IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2019.11.013 Online publication date: 02/12/2019 Online: https://doi.org/10.1016/j.isprsjprs.2019.11.013 Electronic resource format: URL Article Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94404
in ISPRS Journal of photogrammetry and remote sensing > vol 159 (January 2020) . - pp 237-255 [article]
Copies (3)
Barcode       Call no.   Medium   Location                  Section           Availability
081-2020011   RAB        Journal  Centre de documentation   En réserve L003   Available
081-2020013   DEP-RECP   Journal  LASTIG                    Dépôt en unité    Not for loan
081-2020012   DEP-RECF   Journal  Nancy                     Dépôt en unité    Not for loan

Cattle detection and counting in UAV images based on convolutional neural networks / Wen Shao in International Journal of Remote Sensing IJRS, vol 41 n° 1 (01 - 08 janvier 2020)
[article]
Title: Cattle detection and counting in UAV images based on convolutional neural networks
Document type: Article/Communication
Authors: Wen Shao, Author; Rei Kawakami, Author; Ryota Yoshihashi, Author; et al.
Year of publication: 2020
Pages: pp 31-52
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] bovin
[Termes IGN] chevauchement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] comptage
[Termes IGN] détection d'objet
[Termes IGN] image captée par drone
[Termes IGN] modélisation 3D
Abstract: (author) For assistance with grazing cattle management, we propose a cattle detection and counting system based on Convolutional Neural Networks (CNNs), using aerial images taken by an Unmanned Aerial Vehicle (UAV). To improve detection performance, we take advantage of the fact that, with UAV images, the approximate size of the objects can be predicted when the UAV's height above the ground can be assumed to be roughly constant. We resize the image fed into the CNN to an optimum resolution determined by the object size and the down-sampling rate of the network, both in training and testing. To avoid counting the same animals repeatedly in images that overlap heavily with adjacent ones, and to obtain an accurate count of cattle over an entire area, we utilize a three-dimensional model reconstructed from the UAV images to merge detections of the same target. Experiments show that detection performance is greatly improved when using the optimum input resolution, with an F-measure of 0.952, and that counting results are close to the ground truth when the movement of the cattle is approximately stationary relative to that of the UAV.
Record number: A2020-209 Authors' affiliation: non IGN Theme: IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1080/01431161.2019.1624858 Online publication date: 11/06/2019 Online: https://doi.org/10.1080/01431161.2019.1624858 Electronic resource format: URL article Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94891
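The "optimum resolution" idea in the abstract — resize the input so a known object size lines up with the network's down-sampling rate — reduces to a one-line scale computation. The function name and all numbers below are hypothetical, a sketch of the idea rather than the paper's procedure:

```python
def optimum_scale(object_px: float, stride: int, target_cells: float) -> float:
    """Resize factor so that an object spanning `object_px` pixels in the
    raw image covers `target_cells` cells on the feature map, whose cell
    size equals the network's down-sampling rate (`stride`) in input px."""
    return target_cells * stride / object_px

# Hypothetical numbers: a cow ~60 px long in the raw UAV image, a network
# down-sampling rate of 16, and a target extent of 2 feature-map cells.
scale = optimum_scale(object_px=60, stride=16, target_cells=2)
print(round(scale, 3))
```

Because the UAV flies at a roughly constant height, `object_px` is nearly constant across the survey, so one scale factor can be applied uniformly in both training and testing.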
in International Journal of Remote Sensing IJRS > vol 41 n° 1 (01 - 08 janvier 2020) . - pp 31-52 [article]
Title: Collaborative visual-inertial state and scene estimation
Document type: Thesis/HDR
Authors: Marco Karrer, Author; Margarita Chli, Thesis supervisor
Publisher: Zurich: Eidgenossische Technische Hochschule ETH - Ecole Polytechnique Fédérale de Zurich EPFZ
Year of publication: 2020
Extent: 151 p.
Format: 21 x 30 cm
General note: Bibliography. A thesis submitted to attain the degree of Doctor of Sciences of ETH Zurich in Mechanical Engineering
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Acquisition d'image(s) et de donnée(s)
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] centrale inertielle
[Termes IGN] compensation par faisceaux
[Termes IGN] estimation de pose
[Termes IGN] image captée par drone
[Termes IGN] reconstruction d'objet
[Termes IGN] robotique
[Termes IGN] système multi-agents
[Termes IGN] vision par ordinateur
Decimal index: THESE Theses and HDR
Abstract: (author) The capability of a robot to create a map of its workspace on the fly, while constantly updating it and continuously estimating its motion in it, constitutes one of the central research problems in mobile robotics and is referred to in the literature as Simultaneous Localization And Mapping (SLAM). Relying solely on the sensor suite onboard the robot, SLAM is a core building block in enabling the navigational autonomy necessary to facilitate the general use of mobile robots, and has been the subject of booming research interest spanning over three decades. With the largest body of related literature addressing the challenge of single-agent SLAM, it is only very recently, with the relative maturity of this field, that approaches tackling collaborative SLAM with multiple agents have started appearing. The potential of collaborative multi-agent SLAM is great: it promises not only to boost the efficiency of robotic missions by splitting the task at hand among more agents, but also to improve overall robustness and accuracy by increasing the amount of data that each agent's estimation process has access to. While SLAM can be performed using a variety of different sensors, this thesis focuses on the fusion of visual and inertial cues, one of the most common combinations of sensing modalities in robotics today. The information richness captured by cameras, the high-frequency metric information provided by Inertial Measurement Units (IMUs), and the low weight and power consumption of a visual-inertial sensor suite render this setup ideal for a wide variety of applications and robotic platforms, in particular resource-constrained platforms such as Unmanned Aerial Vehicles (UAVs). The majority of state-of-the-art visual-inertial estimators are designed as odometry algorithms, providing only estimates consistent within a limited time horizon.
This lack of global consistency of the estimates, however, poses a major hurdle to the effective fusion of data from multiple agents and to the practical definition of a common reference frame, which is imperative before collaborative effort can be coordinated. In this spirit, this thesis investigates the potential of global optimization based on a central access point (server) as a first approach, demonstrating global consistency using only monocular-inertial data. By fusing data from multiple agents, not only can consistency be maintained, but accuracy is also shown to improve at times, revealing the great potential of collaborative SLAM. To improve computational efficiency, a second approach employs a more efficient system architecture, allowing a more suitable distribution of the computational load between the agents and the server. Furthermore, the architecture implements two-way communication, enabling tighter collaboration between the agents: by communicating with the server, they become capable of re-using information captured by other agents, improving their onboard pose tracking online, during the mission. In addition to general collaborative SLAM without specific assumptions on the agents' relative pose configuration, we investigate the potential of a configuration with two agents, each carrying one camera with overlapping fields of view, essentially forming a virtual stereo camera. With the ability of each robotic agent to move independently, the potential to control the stereo baseline according to the scene depth is very promising, for example at high altitudes where all scene points are far away and therefore provide only weak constraints on the metric scale in a standard single-agent system.
To this end, an approach is proposed to estimate the time-varying stereo transformation formed between two agents, by fusing the egomotion estimates of the individual agents with the image measurements extracted from the view overlap in a tightly coupled fashion. Taking this virtual-stereo-camera idea a step further, a novel collaboration framework is presented that utilizes the view overlap along with relative distance measurements between the two agents (e.g. obtained via Ultra-Wide Band (UWB) modules) in order to successfully perform state estimation at high altitudes where state-of-the-art single-agent methods fail. In the interest of low-latency pose estimation, each agent holds its own estimate of the map, while consistency between the agents is achieved using a novel consensus-based sliding-window bundle adjustment. Although the experiments in this work use a two-agent setup, the proposed distributed bundle adjustment scheme holds great potential for scaling up to larger problems with multiple agents, owing to the asynchronicity of the proposed estimation process and the high level of parallelism it permits. The majority of the approaches developed in this thesis rely on sparse feature maps to allow efficient and timely pose estimation; however, this translates to reduced awareness of the spatial structure of a robot's workspace, which can be insufficient for tasks requiring careful scene interaction and manipulation of objects. Equipping a typical visual-inertial sensor suite with an RGB-D camera, an add-on framework is presented that enables the efficient fusion of naturally noisy depth information into an accurate, local, dense map of the scene, providing sufficient information for an agent to plan contact with a surface.
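Why a variable inter-UAV baseline helps at high altitude can be seen from the standard stereo error model, where depth uncertainty grows quadratically with depth but inversely with baseline. The sketch below is illustrative only (all numbers are hypothetical, and this first-order model is far simpler than the thesis's tightly coupled estimator):

```python
def depth_sigma(Z: float, f_px: float, baseline_m: float,
                disparity_sigma_px: float = 0.5) -> float:
    """First-order 1-sigma depth error of stereo triangulation at depth Z:
    sigma_Z = sigma_disparity * Z^2 / (f * b)."""
    return disparity_sigma_px * Z ** 2 / (f_px * baseline_m)

# A fixed 0.3 m onboard stereo rig vs. a 6 m inter-UAV "virtual" baseline,
# both observing a scene 100 m away with an 800 px focal length:
fixed = depth_sigma(Z=100.0, f_px=800.0, baseline_m=0.3)
virtual = depth_sigma(Z=100.0, f_px=800.0, baseline_m=6.0)
print(f"{fixed:.1f} m vs {virtual:.2f} m")
```

Widening the baseline by 20x cuts the depth uncertainty by the same factor, which is exactly the regime where a rigid single-agent rig stops providing useful metric-scale constraints.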
With the focus on collaborative SLAM using visual-inertial data, the approaches and systems presented in this thesis contribute towards achieving collaborative Visual-Inertial SLAM (VI-SLAM) deployable in challenging real-world scenarios, where the participating agents' experiences are fused and processed at a central access point. On the other hand, it is shown that taking advantage of specific configurations can push collaboration among the agents towards greater robustness and accuracy of scene and egomotion estimates in scenarios where state-of-the-art single-agent systems are otherwise unsuccessful, paving the way towards intelligent robot collaboration.
Contents: Introduction
1- Real-time dense surface reconstruction for aerial manipulation
2- Towards globally consistent visual-inertial collaborative SLAM
3- CVI-SLAM – collaborative visual-inertial SLAM
4- Collaborative 6DoF relative pose estimation for two UAVs with overlapping fields of view
5- Distributed variable-baseline stereo SLAM from two UAVs
Record number: 28318 Authors' affiliation: non IGN Theme: IMAGERIE/INFORMATIQUE Nature: Foreign thesis Thesis note: PhD Thesis: Mechanical Engineering: ETH Zurich: 2020 DOI: none Online: https://www.research-collection.ethz.ch/handle/20.500.11850/465334 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98251

Comparison of multi-seasonal Landsat 8, Sentinel-2 and hyperspectral images for mapping forest alliances in Northern California / Matthew L. Clark in ISPRS Journal of photogrammetry and remote sensing, vol 159 (January 2020)
[article]
Title: Comparison of multi-seasonal Landsat 8, Sentinel-2 and hyperspectral images for mapping forest alliances in Northern California
Document type: Article/Communication
Authors: Matthew L. Clark, Author
Year of publication: 2020
Pages: pp 26-40
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse comparative
[Termes IGN] apprentissage automatique
[Termes IGN] Californie (Etats-Unis)
[Termes IGN] carte forestière
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] couvert végétal
[Termes IGN] image AVIRIS
[Termes IGN] image hyperspectrale
[Termes IGN] image Landsat-8
[Termes IGN] image Sentinel-MSI
[Termes IGN] occupation du sol
[Termes IGN] Short Waves InfraRed
Abstract: (Author) The current era of earth observation provides constellations of open-access, multispectral satellite imagery with medium spatial resolution, greatly increasing the frequency of cloud-free data for analysis. The Landsat satellites have a long historical record, while the newer Sentinel-2 (S2) satellites offer higher temporal, spatial and spectral resolution. The goal of this study was to evaluate the relative benefits of single- and multi-seasonal multispectral satellite data for discriminating detailed forest alliances, as defined by the U.S. National Vegetation Classification system, in a Mediterranean-climate landscape (Sonoma County, California). Results were compared to a companion analysis of simulated hyperspectral satellite data (HyspIRI) for the same study site and reference data (Clark et al., 2018). Experiments used real and simulated S2 and Landsat 8 (L8) data. Simulated S2 and L8 data were derived from HyspIRI images, thereby focusing results on differences in spectral resolution rather than other confounding factors. The Support Vector Machine (SVM) classifier was used in a hierarchical classification of land cover (Level 1), followed by alliances (Level 2) in forest pixels, and included summer-only and multi-seasonal sets of predictor variables (bands, indices, and bands plus indices). Both real and simulated multi-seasonal multispectral variables significantly improved overall accuracy (OA), by 0.2–1.6% for Level 1 tree/no-tree classifications and 3.6–25.8% for Level 2 forest alliances. Classifiers with S2 variables tended to be more accurate than those with L8 variables; in particular, S2 had 0.4–2.1% and 5.1–11.8% significantly higher OA than L8 for Level 1 tree/no-tree and Level 2 forest alliances, respectively. Combining multispectral bands and indices, or using just bands, was generally more accurate than relying on indices alone for classification.
Simulated HyspIRI variables from past research had significantly greater accuracy than real L8 and S2 variables, with an average OA increase of 8.2–12.6%. A final alliance-level map, used for a deeper analysis, was based on simulated multi-seasonal S2 bands and indices and had an overall accuracy of 74.3% (Kappa = 0.70). The accuracy of this classification was only 1.6% significantly lower than that of the best HyspIRI-based classification, which used multi-seasonal metrics (Clark et al., 2018), and there were alliances for which the S2-based classifier was more accurate. Within the context of these analyses and study area, S2 spectral-temporal data demonstrated a strong capability for mapping global forest alliances, or similar detailed floristic associations, at medium spatial resolutions (10–30 m).
Record number: A2020-011 Authors' affiliation: non IGN Theme: FORET/GEOMATIQUE/IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2019.11.007 Online publication date: 14/11/2019 Online: https://doi.org/10.1016/j.isprsjprs.2019.11.007 Electronic resource format: URL Article Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94399
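The hierarchical scheme in the abstract — an SVM first separates tree from no-tree pixels (Level 1), then a second SVM assigns forest alliances only within the predicted tree mask (Level 2) — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline; the feature stack, labels, and kernel choice are all placeholder assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))        # stand-in multi-seasonal band/index stack
is_tree = (X[:, 0] > 0).astype(int)   # synthetic Level 1 labels (tree/no-tree)
alliance = (X[:, 1] > 0).astype(int)  # synthetic Level 2 labels (two alliances)

# Level 1: tree vs. no-tree over all pixels.
lvl1 = SVC(kernel="rbf").fit(X, is_tree)
tree_mask = lvl1.predict(X) == 1

# Level 2: alliances, trained and predicted only on tree pixels.
lvl2 = SVC(kernel="rbf").fit(X[tree_mask], alliance[tree_mask])

pred = np.full(len(X), -1)            # -1 marks non-forest pixels
pred[tree_mask] = lvl2.predict(X[tree_mask])
print(np.unique(pred))
```

Restricting the Level 2 classifier to forest pixels is what lets the alliance model specialize on spectrally similar classes instead of also having to separate them from non-forest cover.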
in ISPRS Journal of photogrammetry and remote sensing > vol 159 (January 2020) . - pp 26-40 [article]
Copies (3)
Barcode       Call no.   Medium   Location                  Section           Availability
081-2020011   RAB        Journal  Centre de documentation   En réserve L003   Available
081-2020013   DEP-RECP   Journal  LASTIG                    Dépôt en unité    Not for loan
081-2020012   DEP-RECF   Journal  Nancy                     Dépôt en unité    Not for loan

Further records in this category:
- Génération de cartes tactiles photoréalistes pour personnes déficientes visuelles par apprentissage profond / Gauthier Fillières-Riveau in Revue internationale de géomatique, vol 30 n° 1-2 (janvier - juin 2020)
- De l’image optique "multi-stéréo" à la topographie très haute résolution et la cartographie automatique des failles par apprentissage profond / Lionel Matteo (2020)
- Individual tree detection and classification for mapping pine wilt disease using multispectral and visible color imagery acquired from unmanned aerial vehicle / Takeshi Hoshikawa in Journal of The Remote Sensing Society of Japan, vol 40 n° 1 (2020)
- Inversion de données PolSAR en bande P pour l'estimation de la biomasse forestière / Colette Gelas (2020)
- On the adjustment, calibration and orientation of drone photogrammetry and laser-scanning / Emmanuel Clédat (2020)