Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > reconnaissance de formes > reconnaissance d'objets
reconnaissance d'objets
Documents disponibles dans cette catégorie (61)
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Titre : A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds Type de document : Article/Communication Auteurs : Lina Fang, Auteur ; Zhilong You, Auteur ; Guixi Shen, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 115 - 136 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification orientée objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion d'images
[Termes IGN] image captée par drone
[Termes IGN] reconnaissance d'objets
[Termes IGN] route
[Termes IGN] scène urbaine
[Termes IGN] semis de points
Résumé : (auteur) Urban management and survey departments have begun investigating the feasibility of acquiring data from various laser scanning systems for urban infrastructure measurements and assessments. Roadside objects such as cars, trees, traffic poles, pedestrians, bicycles and e-bicycles capture the static and dynamic urban information available for acquisition. Because of the unstructured nature of 3D point clouds, the abundance of targets in complex road scenes, and the varying scales of roadside objects, finely classifying these objects from heterogeneous point clouds is a challenging task. In this paper, we integrate two representations of roadside objects, point clouds and multi-view images, to propose a point-group-view network named PGVNet for classifying roadside objects into cars, trees, traffic poles, and small objects (pedestrians, bicycles and e-bicycles) from generalized point clouds. To utilize the topological information of the point clouds, we propose a graph attention convolution operation called AtEdgeConv to mine the relationship among the local points and to extract local geometric features. In addition, we employ a hierarchical view-group-object architecture to diminish the redundant information between similar views and to obtain salient view-wise global features. To fuse the local geometric features from the point clouds and the global features from multi-view images, we stack an attention-guided fusion network in PGVNet. In particular, we quantify and leverage the global features as an attention mask to capture the intrinsic correlation and discriminability of the local geometric features, which contributes to recognizing the different roadside objects with similar shapes. To verify the effectiveness and generalization of our methods, we conduct extensive experiments on six test datasets of different urban scenes, which were captured by different laser scanning systems, including mobile laser scanning (MLS) systems, unmanned aerial vehicle (UAV)-based laser scanning (ULS) systems and backpack laser scanning (BLS) systems. Experimental results, and comparisons with state-of-the-art methods, demonstrate that the PGVNet model is able to effectively identify various cars, trees, traffic poles and small objects from generalized point clouds, and achieves promising performance on roadside object classification, with an overall accuracy of 95.76%. Our code is released at https://github.com/flidarcode/PGVNet.
Numéro de notice : A2022-756 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.08.022 Date de publication en ligne : 22/09/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.08.022 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101759
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022) . - pp 115 - 136 [article]
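The abstract above names a graph attention convolution, AtEdgeConv, that mines the relations among local points. As a rough illustration of that idea, here is a minimal PyTorch sketch of an attention-weighted edge convolution over a k-NN point graph; it is not the authors' implementation (released at https://github.com/flidarcode/PGVNet), and the class name AttentiveEdgeConv, the neighbourhood size k and the toy usage are our own placeholders.

```python
# Minimal sketch of an attention-weighted edge convolution on a k-NN point
# graph, loosely inspired by the AtEdgeConv idea described in the abstract.
# NOT the authors' implementation (see https://github.com/flidarcode/PGVNet).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveEdgeConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # Edge MLP: maps [x_i, x_j - x_i] to an edge feature.
        self.edge_mlp = nn.Linear(2 * in_dim, out_dim)
        # Scalar attention score per edge.
        self.att = nn.Linear(2 * in_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) features of one point cloud.
        dist = torch.cdist(x, x)                                    # (N, N) pairwise distances
        idx = dist.topk(self.k + 1, largest=False).indices[:, 1:]  # (N, k) neighbours, self excluded
        nbr = x[idx]                                                # (N, k, C) neighbour features
        ctr = x.unsqueeze(1).expand_as(nbr)                         # (N, k, C) centre repeated
        edge = torch.cat([ctr, nbr - ctr], dim=-1)                  # (N, k, 2C) local edge descriptor
        w = torch.softmax(self.att(edge), dim=1)                    # (N, k, 1) attention over neighbours
        msg = F.relu(self.edge_mlp(edge))                           # (N, k, out_dim) edge messages
        return (w * msg).sum(dim=1)                                 # (N, out_dim) attention-pooled feature

# Toy usage: 1024 points with xyz coordinates as initial features.
pts = torch.randn(1024, 3)
feat = AttentiveEdgeConv(3, 64)(pts)
print(feat.shape)  # torch.Size([1024, 64])
```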
Attributs de texture extraits d'images multispectrales acquises en conditions d'éclairage non contrôlées : application à l'agriculture de précision / Anis Amziane (2022)
Titre : Attributs de texture extraits d'images multispectrales acquises en conditions d'éclairage non contrôlées : application à l'agriculture de précision Type de document : Thèse/HDR Auteurs : Anis Amziane, Auteur ; Ludovic Macaire, Directeur de thèse Editeur : Lille : Université de Lille Année de publication : 2022 Importance : 214 p. Format : 21 x 30 cm Note générale : Bibliographie
Thesis submitted for the degree of Doctor of the Université de Lille, specialty Automatique, Génie Informatique, Traitement du Signal et des Images
Langues : Français (fre) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] agriculture de précision
[Termes IGN] bande spectrale
[Termes IGN] classification dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection automatique
[Termes IGN] éclairage
[Termes IGN] exitance spectrale
[Termes IGN] extraction de la végétation
[Termes IGN] rayonnement proche infrarouge
[Termes IGN] reconnaissance d'objets
[Termes IGN] réflectance végétale
[Termes IGN] signature spectrale
Index. décimale : THESE Thèses et HDR
Résumé : (auteur) The main objective of this work is to develop an automatic recognition system for crop and weed plants in field conditions. In Chapter 2 we describe the formation of multispectral radiance images under the Lambertian surface assumption and the different devices that can be used to acquire such images, then give a detailed description of the multispectral camera used in this study. Because radiance multispectral images are acquired under varying illumination, we propose an original multispectral image formation model that takes the variation of illumination conditions into account. In Chapter 3, we estimate the reflectance as an illumination-invariant spectral signature. First, we present state-of-the-art methods that can be used to estimate the reflectance from multispectral images; we then introduce the reference state-of-the-art method for reflectance estimation and describe our proposed method for reflectance estimation under varying illumination. Chapter 4 focuses on assessing the estimated reflectance. The quality of the reflectance estimated by our method is evaluated against state-of-the-art methods, and its contribution to supervised crop/weed recognition is demonstrated. Chapter 5 addresses dimension reduction. The acquired multispectral images contain a high number of spectral channels, whose analysis is memory- and time-consuming; moreover, the spectral bands associated with these channels may be redundant or carry highly correlated information. We therefore select the best spectral bands for crop/weed classification and use them to specify a camera suited to crop/weed recognition. Chapter 6 deals with spatio-spectral feature extraction from multispectral images: we propose a CNN-based approach that extracts both spatial and spectral information at reduced computation cost, and demonstrate its contribution to crop/weed recognition.
Note de contenu : 1- Introduction
2- Multispectral imaging
3- Reflectance estimation
4- Reflectance estimation assessment
5- Dimension reduction
6- Raw texture features for crop/weed recognition
Conclusion
Numéro de notice : 24102 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse française Organisme de stage : Laboratoire Cristal (Lille) DOI : sans En ligne : https://www.theses.fr/2022ULILB020 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102577
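Chapter 3 of the thesis estimates reflectance as an illumination-invariant signature. The sketch below illustrates only the textbook premise behind that idea, flat-field correction under a Lambertian assumption using a reference panel; it is not the method proposed in the thesis, and the function name, panel reflectance value and synthetic data are our assumptions.

```python
# Generic flat-field sketch of reflectance estimation under the Lambertian
# assumption: reflectance ~ radiance / illumination, with the illumination in
# each spectral channel estimated from a reference panel of known reflectance.
# This illustrates the chapter's premise, not the thesis's proposed method.
import numpy as np

def estimate_reflectance(radiance: np.ndarray,
                         panel_pixels: np.ndarray,
                         panel_reflectance: float = 0.5) -> np.ndarray:
    """radiance: (H, W, K) multispectral radiance image with K channels.
    panel_pixels: (M, K) radiance samples taken on the reference panel.
    Returns an (H, W, K) reflectance estimate clipped to [0, 1]."""
    # Per-channel illumination: panel radiance divided by its known reflectance.
    illumination = panel_pixels.mean(axis=0) / panel_reflectance   # (K,)
    reflectance = radiance / illumination                          # broadcast over H, W
    return np.clip(reflectance, 0.0, 1.0)

# Toy usage with synthetic data: a 64 x 64 image with 10 spectral channels.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(64, 64, 10))
panel = rng.uniform(0.4, 0.6, size=(50, 10))
print(estimate_reflectance(img, panel).shape)  # (64, 64, 10)
```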
Location retrieval using qualitative place signatures of visible landmarks / Lijun Wei (2022)
Titre : Location retrieval using qualitative place signatures of visible landmarks Type de document : Article/Communication Auteurs : Lijun Wei, Auteur ; Valérie Gouet-Brunet, Auteur ; Anthony Cohn, Auteur Editeur : Ithaca [New York - Etats-Unis] : ArXiv - Université Cornell Année de publication : 2022 Projets : 1-Pas de projet / Importance : 52 p. Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] descripteur
[Termes IGN] lieu géométrique
[Termes IGN] point de repère
[Termes IGN] reconnaissance d'objets
[Termes IGN] relation spatiale
Résumé : (auteur) Location retrieval based on visual information consists in retrieving the location of an agent (e.g. a human or robot) or the area they see by comparing the observations with some representation of the environment. Existing methods generally require precise measurement and storage of the observed environment features, which may not be robust to changes of season, viewpoint, occlusion, etc. They are also challenging to scale up and may not be applicable to humans, who lack measuring/imaging devices. Considering that humans often use less precise but easily produced qualitative spatial language and high-level semantic landmarks when describing an environment, this work proposes a qualitative location retrieval method that describes locations/places using qualitative place signatures (QPS), defined as the perceived spatial relations between ordered pairs of co-visible landmarks from the viewer's perspective. After dividing the space into place cells, each with its own signature attached, a coarse-to-fine location retrieval method is proposed to efficiently identify the possible location(s) of viewers from their qualitative observations. The usability and effectiveness of the proposed method were evaluated on openly available landmark datasets, together with simulated observations that account for possible perception error.
Numéro de notice : P2022-009 Affiliation des auteurs : UGE-LASTIG+Ext (2020- ) Thématique : IMAGERIE Nature : Preprint nature-HAL : Préprint DOI : 10.48550/arXiv.2208.00783 Date de publication en ligne : 26/07/2022 En ligne : https://doi.org/10.48550/arXiv.2208.00783 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101879
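The abstract defines a qualitative place signature as the perceived spatial relations between ordered pairs of co-visible landmarks. The toy sketch below illustrates that definition with a single left/right relation and ranks place cells by signature overlap; it is a deliberate simplification of the paper's relation set and coarse-to-fine procedure, and all names and coordinates are invented.

```python
# Toy sketch of qualitative place signatures (QPS): each ordered pair of
# co-visible landmarks is labelled with a coarse left/right relation from the
# viewer's perspective, and candidate place cells are ranked by signature
# overlap. A deliberate simplification of the paper's richer relation set.

def signature(viewpoint, landmarks):
    """landmarks: dict name -> (x, y). Returns a set of qualitative triples."""
    vx, vy = viewpoint
    rels = set()
    names = sorted(landmarks)
    for a in names:
        for b in names:
            if a == b:
                continue
            ax, ay = landmarks[a][0] - vx, landmarks[a][1] - vy
            bx, by = landmarks[b][0] - vx, landmarks[b][1] - vy
            cross = ax * by - ay * bx          # sign gives b's side relative to a
            rels.add((a, b, "left" if cross > 0 else "right"))
    return rels

def rank_cells(observation, cells):
    """cells: dict cell_id -> precomputed signature. The paper's coarse-to-fine
    retrieval is reduced here to ranking cells by overlap with the observation."""
    return sorted(cells, key=lambda c: len(cells[c] & observation), reverse=True)

landmarks = {"church": (0, 10), "tower": (5, 8), "bridge": (-4, 6)}
cells = {cell: signature(vp, landmarks)
         for cell, vp in {"A": (0, 0), "B": (10, 9), "C": (-8, 12)}.items()}
obs = signature((0.3, 0.2), landmarks)   # a viewer standing near cell A
print(rank_cells(obs, cells)[0])          # 'A': largest signature overlap
```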
Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne / Valentin Desbiolles in XYZ, n° 167 (juin 2021)
[article]
Titre : Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne Type de document : Article/Communication Auteurs : Valentin Desbiolles, Auteur Année de publication : 2021 Article en page(s) : pp 33 - 38 Note générale : Bibliographie Langues : Français (fre) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] Autocad Map
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] dessin assisté par ordinateur
[Termes IGN] détection automatique
[Termes IGN] détection d'objet
[Termes IGN] image aérienne
[Termes IGN] jumeau numérique
[Termes IGN] orthoimage
[Termes IGN] reconnaissance d'objets
[Termes IGN] transformation de Hough
[Termes IGN] voie ferrée
Résumé : (auteur) This project presents a study on the automatic insertion of objects needed for railway operation into a CAD plan. These objects are visible on orthophotos acquired by airborne means (drone or helicopter). The solution is split into two main parts: 1- detecting and localizing the objects of interest on an orthophoto; 2- inserting them into a CAD plan. This graduation project (PFE) thus reviews the various techniques for automating the recognition of certain target elements in an image, and concludes with the development of a method for transferring them automatically into a CAD plan.
Numéro de notice : A2021-462 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueNat DOI : sans Date de publication en ligne : 01/06/2021 Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97928
in XYZ > n° 167 (juin 2021) . - pp 33 - 38 [article]

Exemplaires (1)
Code-barres : 112-2021021 | Cote : RAB | Support : Revue | Localisation : Centre de documentation | Section : En réserve L003 | Disponibilité : Disponible
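Among the index terms of the record above, the Hough transform is a classical building block for finding long straight features such as rails on an orthophoto. The OpenCV sketch below shows only that generic step; the input path and every threshold are placeholders, not the settings used in the project.

```python
# Generic OpenCV sketch: detect long straight lines (e.g. rails) on an
# orthophoto with the probabilistic Hough transform. The file name and all
# thresholds are placeholders, not the project's settings.
import cv2
import numpy as np

img = cv2.imread("orthophoto.tif")               # placeholder path
assert img is not None, "image not found"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)  # edge map feeding the transform

lines = cv2.HoughLinesP(edges,
                        rho=1,                    # 1 px distance resolution
                        theta=np.pi / 180,        # 1 degree angular resolution
                        threshold=100,            # min accumulator votes
                        minLineLength=200,        # rails are long and straight
                        maxLineGap=10)            # bridge small edge breaks
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("detected_lines.png", img)
```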
Multiple convolutional features in Siamese networks for object tracking / Zhenxi Li in Machine Vision and Applications, vol 32 n° 3 (May 2021)
[article]
Titre : Multiple convolutional features in Siamese networks for object tracking Type de document : Article/Communication Auteurs : Zhenxi Li, Auteur ; Guillaume-Alexandre Bilodeau, Auteur ; Wassim Bouachir, Auteur Année de publication : 2021 Article en page(s) : n° 59 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] approche hiérarchique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] poursuite de cible
[Termes IGN] reconnaissance d'objets
[Termes IGN] réseau neuronal siamois
Résumé : (auteur) Siamese trackers have demonstrated high performance in object tracking thanks to their balance between accuracy and speed. Unlike classification-based CNNs, deep similarity networks are specifically designed to address the image similarity problem and are thus inherently more appropriate for the tracking task. However, Siamese trackers mainly use the last convolutional layers for similarity analysis and target search, which restricts their performance. In this paper, we argue that using a single convolutional layer as the feature representation is not an optimal choice in a deep similarity framework. We present the Multiple Features-Siamese Tracker (MFST), a novel tracking algorithm exploiting several hierarchical feature maps for robust tracking. Since convolutional layers provide several abstraction levels in characterizing an object, fusing hierarchical features yields a richer and more efficient representation of the target. Moreover, we handle target appearance variations by calibrating the deep features extracted from two different CNN models. Based on this advanced feature representation, our method achieves high tracking accuracy, outperforming the standard Siamese tracker on object tracking benchmarks. The source code and trained models are available at https://github.com/zhenxili96/MFST.
Numéro de notice : A2021-470 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1007/s00138-021-01185-7 Date de publication en ligne : 11/03/2021 En ligne : https://doi.org/10.1007/s00138-021-01185-7 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97903
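MFST's central idea is to fuse response maps computed from convolutional layers at several depths. A minimal sketch of that fusion pattern follows; it uses our own toy backbone and fixed fusion weights, not the released MFST code linked above.

```python
# Minimal sketch of fusing multi-level Siamese responses: template features at
# two depths of a toy CNN are cross-correlated with search-region features and
# the response maps are blended. Illustrates the fusion pattern only; for the
# actual MFST code see https://github.com/zhenxili96/MFST.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)      # shallow features: fine spatial detail
        f2 = self.stage2(f1)     # deeper features: more semantic, coarser
        return f1, f2

def xcorr(template, search):
    # Cross-correlation: the template feature map is used as a conv kernel.
    return F.conv2d(search, template)

net = TinyBackbone().eval()
z = torch.randn(1, 3, 32, 32)     # template patch (the tracked object)
x = torch.randn(1, 3, 96, 96)     # search region around the last position
with torch.no_grad():
    z1, z2 = net(z)
    x1, x2 = net(x)
    r1 = xcorr(z1, x1)            # (1, 1, 65, 65) response from the shallow layer
    r2 = xcorr(z2, x2)            # (1, 1, 33, 33) response from the deeper layer
    r2 = F.interpolate(r2, size=r1.shape[-2:], mode="bilinear", align_corners=False)
    response = 0.5 * r1 + 0.5 * r2   # fixed fusion weights, for the sketch only
peak = (response == response.max()).nonzero()[0][-2:]
print("peak response at", tuple(peak.tolist()))   # predicted target position
```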
in Machine Vision and Applications > vol 32 n° 3 (May 2021) . - n° 59 [article]

Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021) Permalink
Emotional habitat: mapping the global geographic distribution of human emotion with physical environmental factors using a species distribution model / Yizhuo Li in International journal of geographical information science IJGIS, vol 35 n° 2 (February 2021) Permalink
Unsupervised deep representation learning for real-time tracking / Ning Wang in International journal of computer vision, vol 129 n° 2 (February 2021) Permalink
Improving traffic sign recognition results in urban areas by overcoming the impact of scale and rotation / Roholah Yazdan in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021) Permalink
Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution / Vitor Martins in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020) Permalink
Hierarchical instance recognition of individual roadside trees in environmentally complex urban areas from UAV laser scanning point clouds / Yongjun Wang in ISPRS International journal of geo-information, vol 9 n° 10 (October 2020) Permalink
Deep learning for enrichment of vector spatial databases: Application to highway interchange / Guillaume Touya in ACM Transactions on spatial algorithms and systems, TOSAS, vol 6 n° 3 (May 2020) Permalink
Classification and segmentation of mining area objects in large-scale spares Lidar point cloud using a novel rotated density network / Yueguan Yan in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020) Permalink
Développement de la photogrammétrie et d'analyses d'images pour l'étude et le suivi d'habitats marins / Guilhem Marre (2020) Permalink
Interactions between hierarchical learning and visual system modeling : image classification on small datasets / Thalita Firmo Drumond (2020) Permalink