Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > reconnaissance de formes > reconnaissance d'objets
reconnaissance d'objets
Documents available in this category (57)



A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Title: A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds
Document type: Article/Communication
Authors: Lina Fang; Zhilong You; Guixi Shen; et al.
Publication year: 2022
Article (pages): pp 115-136
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification orientée objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion d'images
[Termes IGN] image captée par drone
[Termes IGN] reconnaissance d'objets
[Termes IGN] route
[Termes IGN] scène urbaine
[Termes IGN] semis de points
Abstract: (author) Urban management and survey departments have begun investigating the feasibility of acquiring data from various laser scanning systems for urban infrastructure measurements and assessments. Roadside objects such as cars, trees, traffic poles, pedestrians, bicycles and e-bicycles describe the static and dynamic urban information available for acquisition. Because of the unstructured nature of 3D point clouds, the rich targets in complex road scenes, and the varying scales of roadside objects, finely classifying these roadside objects from various point clouds is a challenging task. In this paper, we integrate two representations of roadside objects, point clouds and multiview images, to propose a point-group-view network named PGVNet for classifying roadside objects into cars, trees, traffic poles, and small objects (pedestrians, bicycles and e-bicycles) from generalized point clouds. To utilize the topological information of the point clouds, we propose a graph attention convolution operation called AtEdgeConv to mine the relationships among local points and to extract local geometric features. In addition, we employ a hierarchical view-group-object architecture to diminish the redundant information between similar views and to obtain salient viewwise global features. To fuse the local geometric features from the point clouds and the global features from the multiview images, we stack an attention-guided fusion network in PGVNet. In particular, we quantify and leverage the global features as an attention mask to capture the intrinsic correlation and discriminability of the local geometric features, which contributes to recognizing different roadside objects with similar shapes.
To verify the effectiveness and generalization of our methods, we conduct extensive experiments on six test datasets of different urban scenes, captured by different laser scanning systems, including mobile laser scanning (MLS), unmanned aerial vehicle (UAV)-based laser scanning (ULS) and backpack laser scanning (BLS) systems. Experimental results, and comparisons with state-of-the-art methods, demonstrate that the PGVNet model is able to effectively identify various cars, trees, traffic poles and small objects from generalized point clouds, and achieves promising performance on roadside object classification, with an overall accuracy of 95.76%. Our code is released at https://github.com/flidarcode/PGVNet.
Record number: A2022-756
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.022
Online publication date: 22/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101759
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022) . - pp 115-136 [article]
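The PGVNet abstract above names AtEdgeConv, a graph attention convolution over local point neighbourhoods. The paper's exact formulation is not reproduced in this record; the following is a minimal numpy sketch of such an operation (k-NN graph, edge features [x_i, x_j − x_i], softmax attention over neighbours), with all function and weight names illustrative rather than taken from the published code.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of each point (excluding itself)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def at_edge_conv(feats, points, k, w_edge, w_attn):
    """One attention-weighted edge convolution over a k-NN graph.

    feats  : (N, C)      per-point features
    points : (N, 3)      coordinates used to build the graph
    w_edge : (2C, C_out) weights applied to edge features [x_i, x_j - x_i]
    w_attn : (C_out,)    weights scoring each transformed edge
    """
    idx = knn_indices(points, k)                      # (N, k)
    x_i = feats[:, None, :].repeat(k, axis=1)         # (N, k, C)
    x_j = feats[idx]                                  # (N, k, C)
    edge = np.concatenate([x_i, x_j - x_i], axis=-1)  # (N, k, 2C)
    h = np.maximum(edge @ w_edge, 0.0)                # ReLU, (N, k, C_out)
    scores = h @ w_attn                               # (N, k)
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                 # softmax over neighbours
    return (a[..., None] * h).sum(axis=1)             # (N, C_out)

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3)).astype(np.float32)
out = at_edge_conv(pts, pts, k=8,
                   w_edge=rng.normal(size=(6, 16)) * 0.1,
                   w_attn=rng.normal(size=16))
print(out.shape)  # (128, 16)
```

The edge feature [x_i, x_j − x_i] is the standard EdgeConv construction; the attention mask over neighbours is the "At" part suggested by the operation's name.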
Title: Location retrieval using qualitative place signatures of visible landmarks
Document type: Article/Communication
Authors: Lijun Wei; Valérie Gouet-Brunet; Anthony Cohn
Publisher: Ithaca [New York, United States]: ArXiv - Cornell University
Publication year: 2022
Projects: 1-Pas de projet
Extent: 52 p.
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] descripteur
[Termes IGN] lieu
[Termes IGN] point de repère
[Termes IGN] reconnaissance d'objets
[Termes IGN] relation spatiale
Abstract: (author) Location retrieval based on visual information is to retrieve the location of an agent (e.g. human, robot) or the area they see by comparing the observations with a certain form of representation of the environment. Existing methods generally require precise measurement and storage of the observed environment features, which may not always be robust due to changes in season, viewpoint, occlusion, etc. They are also challenging to scale up and may not be applicable to humans due to the lack of measuring/imaging devices. Considering that humans often use less precise but easily produced qualitative spatial language and high-level semantic landmarks when describing an environment, a qualitative location retrieval method is proposed in this work by describing locations/places using qualitative place signatures (QPS), defined as the perceived spatial relations between ordered pairs of co-visible landmarks from the viewer's perspective. After dividing the space into place cells, each with its own signature attached, a coarse-to-fine location retrieval method is proposed to efficiently identify the possible location(s) of viewers based on their qualitative observations. The usability and effectiveness of the proposed method were evaluated using openly available landmark datasets, together with simulated observations accounting for possible perception error.
Record number: P2022-009
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY
Nature: Preprint
nature-HAL: Préprint
DOI: 10.48550/arXiv.2208.00783
Online publication date: 26/07/2022
Online: https://doi.org/10.48550/arXiv.2208.00783
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101879
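The QPS idea above — describe a place by qualitative relations between ordered pairs of co-visible landmarks, then match an observed signature against precomputed place-cell signatures — can be sketched in a few lines. This is a deliberate simplification, not the paper's formulation: here the "relation" is just (who is further left in bearing, who is nearer in depth), and the landmark names and coordinates are invented for illustration.

```python
import math
from itertools import combinations

def place_signature(viewpoint, landmarks):
    """Qualitative place signature: for each pair of co-visible landmarks,
    a qualitative relation as seen from the viewpoint."""
    vx, vy = viewpoint
    bearing = {n: math.atan2(y - vy, x - vx) for n, (x, y) in landmarks.items()}
    dist = {n: math.hypot(x - vx, y - vy) for n, (x, y) in landmarks.items()}
    return {(a, b): (bearing[a] < bearing[b], dist[a] < dist[b])
            for a, b in combinations(sorted(landmarks), 2)}

def score(observed, cell_sig):
    """Fraction of pairwise relations on which two signatures agree."""
    shared = observed.keys() & cell_sig.keys()
    return sum(observed[p] == cell_sig[p] for p in shared) / len(shared) if shared else 0.0

landmarks = {"tower": (10.0, 0.0), "church": (0.0, 10.0), "mast": (8.0, 9.0)}
# Coarse step: a few place cells, each with a precomputed signature.
cells = {c: place_signature(c, landmarks) for c in [(1.0, 1.0), (9.0, 1.0), (2.0, 8.0)]}
# A viewer standing near cell (1, 1) produces a slightly displaced observation.
obs = place_signature((1.2, 0.8), landmarks)
best = max(cells, key=lambda c: score(obs, cells[c]))
print(best)  # → (1.0, 1.0)
```

Because the relations are qualitative (orderings, not measurements), the displaced viewer still matches its cell exactly, which is the robustness argument the abstract makes.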
[article]
Title: Reconnaissance automatique d'objets pour le jumeau numérique ferroviaire à partir d'imagerie aérienne
Document type: Article/Communication
Authors: Valentin Desbiolles
Publication year: 2021
Article (pages): pp 33-38
General note: bibliography
Language: French (fre)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] Autocad Map
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] dessin assisté par ordinateur
[Termes IGN] détection automatique
[Termes IGN] détection d'objet
[Termes IGN] image aérienne
[Termes IGN] jumeau numérique
[Termes IGN] orthoimage
[Termes IGN] reconnaissance d'objets
[Termes IGN] transformation de Hough
[Termes IGN] voie ferrée
Abstract: (author) This project presents a study on the automatic insertion, into a CAD drawing, of objects needed for the operation of a railway line. These objects are visible on orthophotos acquired by airborne means (drone or helicopter). The solution is split into two main parts: 1) detecting and locating the objects of interest on an orthophoto; 2) inserting them into a CAD drawing. This graduation project reviews the techniques available for automating the recognition of certain target elements in an image, and concludes with the development of a method for transferring them into a CAD drawing automatically.
Record number: A2021-462
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueNat
DOI: none
Online publication date: 01/06/2021
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97928
in XYZ > n° 167 (juin 2021) . - pp 33-38 [article]
Copies (2)
Barcode | Call number | Type | Location | Section | Availability
112-2021021 | SL | Journal | Documentation centre | Journals, reading room | Available
112-2021022 | SL | Journal | Documentation centre | Journals, reading room | Available
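The rail-recognition record above lists the Hough transform among its index terms, the classic way to find long straight structures such as rails in an orthophoto. A minimal, self-contained sketch of that step follows; the toy edge map and parameters are illustrative, and the article's actual pipeline combines CNN-based detection with CAD export, which is not shown here.

```python
import numpy as np

def hough_lines(edges, n_theta=180, top=2):
    """Classic Hough transform: each edge pixel votes for all (rho, theta)
    lines passing through it; peaks in the accumulator are detected lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for ti, t in enumerate(thetas):
        rho = (xs * np.cos(t) + ys * np.sin(t)).round().astype(int) + diag
        np.add.at(acc[:, ti], rho, 1)   # unbuffered accumulation of votes
    flat = np.argsort(acc, axis=None)[::-1][:top]
    rows, cols = np.unravel_index(flat, acc.shape)
    return [(int(r) - diag, float(thetas[c])) for r, c in zip(rows, cols)]

# Toy "orthophoto" edge map: two parallel vertical rails at x = 20 and x = 27.
img = np.zeros((100, 100), dtype=bool)
img[:, 20] = True
img[:, 27] = True
for rho, theta in sorted(hough_lines(img)):
    print(rho, theta)  # 20 0.0, then 27 0.0
```

Each recovered (rho, theta) pair is a line in normal form, which is exactly the kind of parametric primitive one would then write out as a CAD entity.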
[article]
Title: Multiple convolutional features in Siamese networks for object tracking
Document type: Article/Communication
Authors: Zhenxi Li; Guillaume-Alexandre Bilodeau; Wassim Bouachir
Publication year: 2021
Article (pages): n° 59
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] approche hiérarchique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] poursuite de cible
[Termes IGN] reconnaissance d'objets
[Termes IGN] réseau neuronal siamois
Abstract: (author) Siamese trackers demonstrate high performance in object tracking due to their balance between accuracy and speed. Unlike classification-based CNNs, deep similarity networks are specifically designed to address the image similarity problem and are thus inherently more appropriate for the tracking task. However, Siamese trackers mainly use the last convolutional layers for similarity analysis and target search, which restricts their performance. In this paper, we argue that using a single convolutional layer as the feature representation is not an optimal choice in a deep similarity framework. We present the Multiple Features-Siamese Tracker (MFST), a novel tracking algorithm exploiting several hierarchical feature maps for robust tracking. Since convolutional layers provide several abstraction levels in characterizing an object, fusing hierarchical features makes it possible to obtain a richer and more efficient representation of the target. Moreover, we handle target appearance variations by calibrating the deep features extracted from two different CNN models. Based on this advanced feature representation, our method achieves high tracking accuracy, while outperforming the standard Siamese tracker on object tracking benchmarks. The source code and trained models are available at https://github.com/zhenxili96/MFST.
Record number: A2021-470
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00138-021-01185-7
Online publication date: 11/03/2021
Online: https://doi.org/10.1007/s00138-021-01185-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97903
in Machine Vision and Applications > vol 32 n° 3 (May 2021) . - n° 59 [article]
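The core operation of the Siamese trackers discussed in the MFST record is dense cross-correlation of a template's feature map over a search region's feature map, and the record's contribution is fusing response maps from several feature levels. The sketch below illustrates both steps with random "feature maps" and a planted target; the shapes, weights, and fusion rule are illustrative assumptions, not MFST's actual architecture.

```python
import numpy as np

def xcorr(search, template):
    """Dense cross-correlation of a template feature map over a search
    feature map (the similarity operation at the heart of Siamese trackers)."""
    sh, sw, c = search.shape
    th, tw, _ = template.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[i:i+th, j:j+tw] * template)
    return out

def fused_response(search_maps, template_maps, weights):
    """Weighted sum of response maps from several feature levels,
    as in multi-feature Siamese tracking."""
    return sum(w * xcorr(s, t)
               for s, t, w in zip(search_maps, template_maps, weights))

rng = np.random.default_rng(1)
# Two illustrative "feature levels" (shallow and deep); the target's features
# are planted at row 10, col 14 of each search region.
searches, templates = [], []
for c in (8, 16):
    t = rng.normal(size=(5, 5, c))
    s = rng.normal(size=(24, 24, c)) * 0.1
    s[10:15, 14:19] = t              # plant the target
    searches.append(s)
    templates.append(t)
resp = fused_response(searches, templates, weights=[0.4, 0.6])
peak = np.unravel_index(resp.argmax(), resp.shape)
print(peak)  # peak of the fused response, expected near (10, 14)
```

The argmax of the fused response map gives the predicted target location, and weighting the levels is the simplest stand-in for the learned calibration the abstract describes.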
[article]
Title: Activity recognition in residential spaces with Internet of things devices and thermal imaging
Document type: Article/Communication
Authors: Kshirasagar Naik; Tejas Pandit; Nitin Naik; et al.
Publication year: 2021
Article (pages): n° 988
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] compréhension de l'image
[Termes IGN] contrôle par télédétection
[Termes IGN] détection d'événement
[Termes IGN] espace intérieur
[Termes IGN] image RVB
[Termes IGN] image thermique
[Termes IGN] intelligence artificielle
[Termes IGN] internet des objets
[Termes IGN] itération
[Termes IGN] modèle stéréoscopique
[Termes IGN] objet mobile
[Termes IGN] reconnaissance automatique
[Termes IGN] reconnaissance d'objets
[Termes IGN] scène 3D
Abstract: (author) In this paper, we design algorithms for indoor activity recognition and 3D thermal model generation using thermal images, RGB images captured from external sensors, and an Internet of things setup. Indoor activity recognition deals with two sub-problems: human activity and household activity recognition. Household activity recognition includes the recognition of electrical appliances and their heat radiation with the help of thermal images. A FLIR ONE PRO camera is used to capture RGB-thermal image pairs for a scene. The duration and pattern of activities are also determined using an iterative algorithm, to explore kitchen safety situations. For more accurate monitoring of hazardous events such as stove gas leakage, a 3D reconstruction approach is proposed to determine the temperature of all points in the 3D space of a scene. The 3D thermal model is obtained using the stereo RGB and thermal images for a particular scene. Accurate results are observed for activity detection, and a significant improvement in temperature estimation is recorded for the 3D thermal model compared to the 2D thermal image. Results from this research can find applications in home automation, heat automation in smart homes, and energy management in residential spaces.
Record number: A2021-159
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/s21030988
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21030988
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97075
in Sensors > vol 21 n° 3 (February 2021) . - n° 988 [article]

More documents in this category:
Emotional habitat: mapping the global geographic distribution of human emotion with physical environmental factors using a species distribution model / Yizhuo Li in International journal of geographical information science IJGIS, vol 35 n° 2 (February 2021)
Unsupervised deep representation learning for real-time tracking / Ning Wang in International journal of computer vision, vol 129 n° 2 (February 2021)
Improving traffic sign recognition results in urban areas by overcoming the impact of scale and rotation / Roholah Yazdan in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021)
Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution / Vitor Martins in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
Hierarchical instance recognition of individual roadside trees in environmentally complex urban areas from UAV laser scanning point clouds / Yongjun Wang in ISPRS International journal of geo-information, vol 9 n° 10 (October 2020)
Deep learning for enrichment of vector spatial databases: Application to highway interchange / Guillaume Touya in ACM Transactions on spatial algorithms and systems, TOSAS, vol 6 n° 3 (May 2020)
Classification and segmentation of mining area objects in large-scale spares Lidar point cloud using a novel rotated density network / Yueguan Yan in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020)
Développement de la photogrammétrie et d'analyses d'images pour l'étude et le suivi d'habitats marins / Guilhem Marre (2020)
Interactions between hierarchical learning and visual system modeling: image classification on small datasets / Thalita Firmo Drumond (2020)
Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM) / Wenzhi Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
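The 3D thermal model in the Naik et al. activity-recognition record above rests on standard stereo geometry: a pixel with disparity d is at depth Z = f·B/d, and the back-projected 3D point carries the temperature read from the registered thermal image. A minimal sketch under that assumption follows; the camera parameters and images are invented for illustration, not the paper's calibration.

```python
import numpy as np

def thermal_point_cloud(disparity, thermal, f, baseline, cx, cy):
    """Back-project pixels to 3D with stereo disparity and attach the
    temperature from a registered thermal image:
        Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f.
    Returns an (N, 4) array of [X, Y, Z, temperature]."""
    v, u = np.nonzero(disparity > 0)             # keep valid pixels only
    d = disparity[v, u].astype(float)
    z = f * baseline / d
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z, thermal[v, u]], axis=1)

disp = np.zeros((4, 4)); disp[1, 2] = 8.0        # one valid pixel, d = 8 px
temp = np.full((4, 4), 21.0); temp[1, 2] = 68.5  # a hot spot (e.g. a stove)
pts = thermal_point_cloud(disp, temp, f=400.0, baseline=0.1, cx=2.0, cy=2.0)
print(pts)  # one point at Z = 400 * 0.1 / 8 = 5 m, temperature 68.5
```

Querying this point cloud by region gives the per-point 3D temperatures the abstract uses for hazard monitoring.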