Descriptor
reconnaissance de formes. Synonym(s): reconnaissance des formes
Documents available in this category (215)
Titre : Multi-scale point cloud analysis Titre original : Analyse multi-échelle de nuage de points Type de document : Thèse/HDR Auteurs : Thibault Lejemble, Auteur ; Loïc Barthe, Directeur de thèse Editeur : Toulouse : Université de Toulouse 3 Paul Sabatier Année de publication : 2020 Importance : 142 p. Format : 21 x 30 cm Note générale : bibliographie
Thèse en vue du Doctorat de l'Université de Toulouse en Informatique et Télécommunications. Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] analyse multiéchelle
[Termes IGN] analyse multirésolution
[Termes IGN] anisotropie
[Termes IGN] approche hiérarchique
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction automatique
[Termes IGN] géométrie différentielle
[Termes IGN] graphe
[Termes IGN] reconnaissance de formes
[Termes IGN] segmentation en plan
[Termes IGN] segmentation en régions
[Termes IGN] semis de points
[Termes IGN] visualisation 3D
Index. décimale : THESE Thèses et HDR Résumé : (auteur) 3D acquisition techniques like photogrammetry and laser scanning are commonly used in numerous fields such as reverse engineering, archeology, robotics and urban planning. The main objective is to obtain virtual versions of real objects in order to visualize, analyze and process them easily. Acquisition techniques are becoming increasingly powerful and affordable, which creates a strong need to process efficiently the resulting varied and massive 3D data. Data are usually obtained in the form of an unstructured 3D point cloud sampling the scanned surface. Traditional signal processing methods cannot be directly applied due to the lack of spatial parametrization: points are represented only by their 3D coordinates, without any particular order. This thesis focuses on the notion of scale of analysis, defined by the size of the neighborhood used to locally characterize the point-sampled surface. Analysis at different scales makes it possible to consider various shapes, which increases the pertinence of the analysis and the robustness to imperfections in the acquired data. We first present some theoretical and practical results on curvature estimation adapted to a multi-scale and multi-resolution representation of point clouds. They are used to develop multi-scale algorithms for the recognition of planar and anisotropic shapes such as cylinders and feature curves. Finally, we propose to compute a global 2D parametrization of the underlying surface directly from the 3D unstructured point cloud. Note de contenu : Introduction
1- Multi-scale differential analysis of point clouds
2- Plane detection using persistence analysis of graphs
3- Anisotropic feature detection using curvature lines
4- Point cloud parametrization
Conclusion
Numéro de notice : 28583 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse française Note de thèse : Thèse de Doctorat : Informatique et Télécommunications : Toulouse 3 : 2020 Organisme de stage : Institut de recherche en informatique de Toulouse En ligne : https://tel.archives-ouvertes.fr/tel-03170824/document Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97923
Validating the correct wearing of protection mask by taking a selfie: design of a mobile application "CheckYourMask" to limit the spread of COVID-19 / Karim Hammoudi (2020)
Titre : Validating the correct wearing of protection mask by taking a selfie: design of a mobile application "CheckYourMask" to limit the spread of COVID-19 Type de document : Article/Communication Auteurs : Karim Hammoudi , Auteur ; Adnane Cabani, Auteur ; Halim Benhabiles, Auteur ; Mahmoud Melkemi, Auteur Editeur : Paris : HAL Année de publication : 2020 Importance : 6 p. Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Intelligence artificielle
[Termes IGN] application informatique
[Termes IGN] interface mobile
[Termes IGN] maladie infectieuse
[Termes IGN] prévention
[Termes IGN] reconnaissance de formes
[Termes IGN] santé
[Termes IGN] téléphonie mobile
Résumé : (Auteur) In the context of a virus transmitted by droplet projection, wearing a mask appears necessary to protect the wearer and to limit the propagation of the disease. Currently, we are facing the 2019-20 coronavirus pandemic. Coronavirus disease 2019 (COVID-19) is an infectious disease whose first symptoms are similar to those of the flu. COVID-19 first appeared in China and spread very quickly to the rest of the world. COVID-19 is known to be highly contagious in comparison with the flu. In this paper, we propose the design of a mobile application that lets anyone with a smartphone take a picture to verify that his/her protection mask is correctly positioned on his/her face. Such an application can be particularly useful for people wearing a face protection mask for the first time, notably children and the elderly. The designed method exploits Haar-like feature descriptors to detect key features of the face; a decision-making algorithm is then applied. Experimental results show the potential of this method for validating correct mask wearing. To the best of our knowledge, our work is currently the only one to propose a mobile application design ("CheckYourMask") for validating the correct wearing of a protection mask. Numéro de notice : P2020-008 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Preprint nature-HAL : Préprint DOI : sans Date de publication en ligne : 21/05/2020 En ligne : https://hal.science/hal-02614790 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=95115
Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM) / Wenzhi Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Titre : Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM) Type de document : Article/Communication Auteurs : Wenzhi Zhao, Auteur ; Yanchen Bo, Auteur ; Jiage Chen, Auteur ; et al., Auteur Année de publication : 2019 Article en page(s) : pp 237 - 250 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classe sémantique
[Termes IGN] compréhension de l'image
[Termes IGN] fusion de données
[Termes IGN] image à haute résolution
[Termes IGN] reconnaissance d'objets
[Termes IGN] scène urbaine
Résumé : (Auteur) Urban scenes refer to city blocks, the basic units of megacities; they play an important role in citizens' welfare and city management. Remote sensing imagery, with its large-scale coverage and accurate target descriptions, has been regarded as an ideal solution for monitoring the urban environment. However, due to the heterogeneity of remote sensing images, it is difficult to access their geographical content at the object level, let alone understand urban scenes at the block level. Recently, deep learning-based strategies have been applied to interpret urban scenes with remarkable accuracy. However, deep neural networks require a substantial number of training samples, which are hard to obtain, especially for high-resolution images. Meanwhile, the crowd-sourced OpenStreetMap (OSM) data provide rich annotation information about urban targets but may suffer from insufficient sampling (limited by the places where people can go). As a result, the combination of OSM and remote sensing images for efficient urban scene recognition is urgently needed. In this paper, we present a novel strategy to transfer existing OSM data to high-resolution images for semantic element determination and urban scene understanding. Specifically, an object-based convolutional neural network (OCNN) can be utilized for geographical object detection by feeding it rich semantic elements derived from OSM data. Then, geographical objects are further assigned functional labels by integrating points of interest (POIs), which contain rich semantic terms, such as commercial or educational labels. Lastly, the categories of urban scenes are easily acquired from the semantic objects inside. Experimental results indicate that the proposed method is able to classify complex urban scenes. The classification accuracies on the Beijing dataset are as high as 91% at the object level and 88% at the scene level.
Additionally, we are probably the first to investigate object-level semantic mapping by incorporating high-resolution images and OSM data of urban areas. Consequently, the presented method is effective in delineating urban scenes and could further boost urban environment monitoring and planning with high-resolution images. Numéro de notice : A2019-209 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2019.03.019 Date de publication en ligne : 29/03/2019 En ligne : https://doi.org/10.1016/j.isprsjprs.2019.03.019 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=92675
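As an illustration of the POI-based labelling and scene-voting stage summarized in the abstract above, here is a minimal, hypothetical Python sketch. The function names, the nearest-POI rule, the distance threshold and the label set are all invented for illustration and are not the authors' actual pipeline, which relies on an object-based CNN:

```python
from collections import Counter

def label_objects_with_pois(objects, pois, max_dist=50.0):
    """Attach to each detected object (x, y) the functional label of the
    nearest OSM POI (x, y, tag) within max_dist map units, else None."""
    labelled = []
    for (ox, oy) in objects:
        best, best_d = None, max_dist
        for (px, py, tag) in pois:
            d = ((ox - px) ** 2 + (oy - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = tag, d
        labelled.append(best)
    return labelled

def scene_category(functional_labels):
    """Majority vote over the objects' functional labels gives the
    block-level scene category."""
    votes = Counter(l for l in functional_labels if l is not None)
    return votes.most_common(1)[0][0] if votes else "unknown"
```

For example, a block whose detected objects sit near two "commercial" POIs and one "educational" POI would be voted a commercial scene.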
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 237 - 250 [article] Exemplaires (3)
Code-barres Cote Support Localisation Section Disponibilité
081-2019051 RAB Revue Centre de documentation En réserve L003 Disponible
081-2019053 DEP-RECP Revue LASTIG Dépôt en unité Exclu du prêt
081-2019052 DEP-RECF Revue Nancy Dépôt en unité Exclu du prêt
Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Titre : Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments Type de document : Article/Communication Auteurs : Zhipeng Luo, Auteur ; Jonathan Li, Auteur ; Zhenlong Xiao, Auteur ; et al., Auteur Année de publication : 2019 Article en page(s) : pp 44 - 58 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion de données
[Termes IGN] jointure spatiale
[Termes IGN] objet 3D
[Termes IGN] reconnaissance d'objets
[Termes IGN] représentation multiple
[Termes IGN] réseau neuronal convolutif
[Termes IGN] semis de points
Résumé : (Auteur) Most existing 3D object recognition methods still suffer from low descriptiveness and weak robustness, although remarkable progress has been made in 3D computer vision. The major challenge lies in effectively mining high-level 3D shape features. This paper presents a high-level feature learning framework for 3D object recognition through fusing multiple 2D representations of point clouds. The framework has two key components: (1) three discriminative low-level 3D shape descriptors for obtaining multi-view 2D representations of 3D point clouds. These descriptors preserve both local and global spatial relationships of points from different perspectives and build a bridge between 3D point clouds and 2D Convolutional Neural Networks (CNN). (2) A two-stage fusion network, which consists of a deep feature learning module and two fusion modules, for extracting and fusing high-level features. The proposed method was tested on three datasets, one of which is the Sydney Urban Objects dataset, while the other two were acquired by a mobile laser scanning (MLS) system along urban roads. The results of comprehensive experiments demonstrate that our method is superior to the state-of-the-art methods in descriptiveness, robustness and efficiency. Our method achieves high recognition rates of 94.6%, 93.1% and 74.9% on the above three datasets, respectively. Numéro de notice : A2019-137 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2019.01.024 Date de publication en ligne : 16/02/2019 En ligne : https://doi.org/10.1016/j.isprsjprs.2019.01.024 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=92468
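The core idea of the abstract above, rendering an unstructured point cloud as several 2D views that a standard 2D CNN can consume, can be sketched as follows. This is a minimal, hypothetical example using plain occupancy images over three orthogonal projections; the paper's actual descriptors are richer and are not reproduced here:

```python
def project_to_grid(points, axes, grid=8, bounds=(-1.0, 1.0)):
    """Project 3D points onto the plane spanned by two coordinate axes
    and rasterize them into a grid x grid binary occupancy image."""
    lo, hi = bounds
    img = [[0] * grid for _ in range(grid)]
    for p in points:
        u, v = p[axes[0]], p[axes[1]]
        if lo <= u < hi and lo <= v < hi:
            i = int((u - lo) / (hi - lo) * grid)
            j = int((v - lo) / (hi - lo) * grid)
            img[i][j] = 1
    return img

def multi_view(points, grid=8):
    """Three orthogonal views (xy, xz, yz) of one point cloud, each a
    2D image suitable as input to a 2D CNN."""
    return [project_to_grid(points, a, grid) for a in ((0, 1), (0, 2), (1, 2))]
```

In a multi-view pipeline, each of the three images would be fed to a CNN branch and the resulting features fused downstream.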
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019) . - pp 44 - 58 [article] Exemplaires (3)
Code-barres Cote Support Localisation Section Disponibilité
081-2019041 RAB Revue Centre de documentation En réserve L003 Disponible
081-2019043 DEP-RECP Revue LASTIG Dépôt en unité Exclu du prêt
081-2019042 DEP-RECF Revue Nancy Dépôt en unité Exclu du prêt
Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019)
[article]
Titre : Learning to segment moving objects Type de document : Article/Communication Auteurs : Pavel Tokmakov, Auteur ; Cordelia Schmid, Auteur ; Karteek Alahari, Auteur Année de publication : 2019 Article en page(s) : pp 282 - 301 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] cohérence temporelle
[Termes IGN] image vidéo
[Termes IGN] objet mobile
[Termes IGN] reconnaissance d'objets
[Termes IGN] réseau neuronal convolutif
[Termes IGN] séquence d'images
Résumé : (Auteur) We study the problem of segmenting moving objects in unconstrained videos. Given a video, the task is to segment all the objects that exhibit independent motion in at least one frame. We formulate this as a learning problem and design our framework around three cues: (1) independent object motion between a pair of frames, which complements object recognition, (2) object appearance, which helps to correct errors in motion estimation, and (3) temporal consistency, which imposes additional constraints on the segmentation. The framework is a two-stream neural network with an explicit memory module. The two streams encode appearance and motion cues in a video sequence respectively, while the memory module captures the evolution of objects over time, exploiting the temporal consistency. The motion stream is a convolutional neural network trained on synthetic videos to segment independently moving objects in the optical flow field. The module that builds a "visual memory" in video, i.e., a joint representation of all the video frames, is realized with a convolutional recurrent unit learned from a small number of training video sequences. For every pixel in a frame of a test video, our approach assigns an object or background label based on the learned spatio-temporal features as well as the "visual memory" specific to the video. We evaluate our method extensively on three benchmarks: DAVIS, the Freiburg-Berkeley motion segmentation dataset and SegTrack. In addition, we provide an extensive ablation study to investigate both the choice of the training data and the influence of each component in the proposed framework.
Numéro de notice : A2018-601 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1007/s11263-018-1122-2 Date de publication en ligne : 22/09/2018 En ligne : https://doi.org/10.1007/s11263-018-1122-2 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=92528
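The two ideas this notice's abstract combines, fusing appearance and motion scores per pixel and enforcing temporal consistency across frames, can be sketched in a minimal, hypothetical form. The exponential moving average below is only a toy stand-in for the learned convolutional recurrent memory, and all names and thresholds are invented for illustration:

```python
def fuse_streams(appearance, motion, w=0.5):
    """Per-pixel weighted fusion of appearance and motion score maps
    (each a list of rows of floats in [0, 1])."""
    return [[w * a + (1 - w) * m for a, m in zip(ra, rm)]
            for ra, rm in zip(appearance, motion)]

def temporal_smooth(frames, alpha=0.7):
    """Exponential moving average over per-frame score maps, then a
    0.5 threshold: a crude stand-in for the recurrent memory that
    keeps labels temporally consistent."""
    memory, out = None, []
    for f in frames:
        if memory is None:
            memory = [row[:] for row in f]
        else:
            memory = [[alpha * m + (1 - alpha) * x for m, x in zip(rm, rf)]
                      for rm, rf in zip(memory, f)]
        out.append([[1 if v > 0.5 else 0 for v in row] for row in memory])
    return out
```

With a high alpha, a pixel confidently labelled as object keeps its label even in a frame where the motion cue momentarily drops out, which is exactly the role temporal consistency plays in the paper's framework.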
in International journal of computer vision > vol 127 n° 3 (March 2019) . - pp 282 - 301 [article]
Spatially sensitive statistical shape analysis for pedestrian recognition from LIDAR data / Michalis A. Savelonas in Computer Vision and image understanding, vol 171 (June 2018)
Do semantic parts emerge in convolutional neural networks? / Abel Gonzalez-Garcia in International journal of computer vision, vol 126 n° 5 (May 2018)
Fine-grained object recognition and zero-shot learning in remote sensing imagery / Gencer Sumbul in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
Recognition of building group patterns in topographic maps based on graph partitioning and random forest / Xianjin He in ISPRS Journal of photogrammetry and remote sensing, vol 136 (February 2018)
A typification method for linear pattern in urban building generalisation / Xianyong Gong in Geocarto international, vol 33 n° 2 (February 2018)
Localisation d'objets urbains à partir de sources multiples dont des images aériennes / Lionel Pibre (2018)
Machine learning and pose estimation for autonomous robot grasping with collaborative robots / Victor Talbot (2018)