Author detail
Author: Franck Davoine
Available documents by this author (3)
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Title: Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving
Document type: Thesis/HDR
Authors: Edouard Capellier, Author; Véronique Berge-Cherfaoui, Thesis supervisor; Franck Davoine, Thesis supervisor
Publisher: Compiègne: Université de Technologie de Compiègne UTC
Publication year: 2020
Extent: 123 p.
Format: 21 x 30 cm
General note: bibliography
Thesis presented for the degree of Doctor of the UTC, in Robotique et Sciences et Technologies de l'Information et des Systèmes
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] road map
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] lidar data
[IGN terms] RGB image
[IGN terms] artificial intelligence
[IGN terms] autonomous navigation
[IGN terms] semantic segmentation
[IGN terms] Dempster-Shafer theory
[IGN terms] computer vision
[IGN terms] 3D visualization
Decimal index: THESE Theses and HDR
Abstract: (author) The perception task is paramount for self-driving vehicles. Extracting accurate and meaningful information from sensor inputs is mandatory to ensure safe operation. Recent progress in machine-learning techniques has revolutionized the way perception modules for autonomous driving are developed and evaluated, far surpassing previous state-of-the-art results in practically all perception-related tasks. Efficient and accurate ways to model the knowledge used by a self-driving vehicle are therefore needed: self-awareness, and appropriate modeling of doubt, are desirable properties for such a system. In this work, we assumed that evidence theory is an efficient way to finely model the information extracted from deep neural networks. Based on this intuition, we developed three perception modules that rely on machine learning and evidence theory, and tested them on real-life data. First, we proposed an asynchronous evidential occupancy grid mapping algorithm that fuses semantic segmentation results obtained from RGB images with LIDAR scans. Its asynchronous nature makes it particularly efficient at handling sensor failures. The semantic information is used to define decay rates at the cell level and to handle potentially moving objects. Then, we proposed an evidential classifier of LIDAR objects. This system is trained to distinguish between vehicles and vulnerable road users, which are detected via a clustering algorithm. The classifier can be reinterpreted as performing a fusion of simple evidential mass functions. Moreover, a simple statistical filtering scheme can be used to filter outputs of the classifier that are incoherent with regard to the training set, so as to allow the classifier to work in an open world and reject other types of objects.
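The fusion of simple evidential mass functions mentioned in the abstract is classically done with Dempster's rule of combination from Dempster-Shafer theory. A minimal illustrative sketch follows; the class names and mass values are hypothetical, not taken from the thesis:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of classes -> mass)
    with Dempster's rule: multiply masses of intersecting focal sets,
    then renormalize by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Hypothetical frame of discernment: vehicle vs. vulnerable road user
omega = frozenset({"vehicle", "vru"})
m_a = {frozenset({"vehicle"}): 0.6, omega: 0.4}   # one weak evidential source
m_b = {frozenset({"vehicle"}): 0.5, frozenset({"vru"}): 0.2, omega: 0.3}
fused = dempster_combine(m_a, m_b)
```

Keeping explicit mass on the whole frame (omega) is what lets such a classifier express doubt rather than force a decision.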
Finally, we investigated the possibility of performing road detection in LIDAR scans with deep neural networks. We proposed two architectures inspired by recent state-of-the-art LIDAR processing systems. A training dataset was acquired and labeled in a semi-automatic fashion from road maps. A set of fused neural networks reached satisfactory results, which allowed us to use them in an evidential road mapping and object detection algorithm that runs at 10 Hz.
Contents: 1- Introduction
2- Machine learning for perception in autonomous driving
3- The evidence theory, and its applications in autonomous driving
4- A synchronous evidential grid mapping from RGB images and LIDAR scans
5- Evidential LIDAR object classification
6- Road detection in LIDAR scans
7- Application of RoadSeg: evidential road surface mapping
8- Conclusion
Record number: 25895
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: French thesis
Thesis note: Doctoral thesis: Robotique et Sciences et Technologies de l'Information et des Systèmes: UTC: 2020
Host organization: Laboratoire Heudiasyc
nature-HAL: Thèse
DOI: none
Online: https://hal.science/tel-02897810v1
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96013

Simultaneous Facial Action Tracking and Expression Recognition in the Presence of Head Motion / Fadi Dornaika in International journal of computer vision, vol 76 n°3 (March 2008)
[article]
Title: Simultaneous Facial Action Tracking and Expression Recognition in the Presence of Head Motion
Document type: Article/Communication
Authors: Fadi Dornaika, Author; Franck Davoine, Author
Publication year: 2008
Pages: pp 257-281
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] facial recognition
Abstract: (author) The recognition of facial gestures and expressions in image sequences is an important and challenging problem. Most existing methods adopt the following paradigm: first, facial actions/features are retrieved from the images; then the facial expression is recognized based on the retrieved temporal parameters. In contrast to this mainstream approach, this paper introduces a new approach allowing the simultaneous retrieval of facial actions and expression, using a particle filter that adopts multi-class dynamics conditioned on the expression. For each frame in the video sequence, our approach is split into two consecutive stages. In the first stage, the 3D head pose is retrieved using a deterministic registration technique based on Online Appearance Models. In the second stage, the facial actions as well as the facial expression are simultaneously retrieved using a stochastic framework based on second-order Markov chains. The proposed fast scheme is as robust as, or more robust than, existing ones in a number of respects. We describe extensive experiments and provide performance evaluations to show the feasibility and robustness of the proposed approach.
Record number: A2008-638
Author affiliation: MATIS+Ext (1993-2011)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-007-0059-7
Online: https://doi.org/10.1007/s11263-007-0059-7
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103444
in International journal of computer vision > vol 76 n°3 (March 2008). - pp 257 - 281

[article]
Title: Online appearance-based face and facial feature tracking
Document type: Article/Communication
Authors: Fadi Dornaika, Author; Franck Davoine, Author
Publisher: New-York: IEEE Computer Society
Publication year: 2004
Conference: ICPR 2004, 17th IAPR International Conference on Pattern Recognition, 23/08/2004-26/08/2004, Cambridge, United Kingdom, Proceedings IEEE
General note: bibliography
Languages: French (fre)
Descriptors: [IGN subject headings] Image processing
[IGN terms] facial recognition
Abstract: (author) We propose a simple framework that uses online appearance models for 3D face and facial feature tracking with a deformable model. The geometrical parameters are adapted for each frame by a steepest-ascent method on the observation likelihood, using a local exhaustive and directed search in the parameter space. The observation likelihood is based on the current appearance and the registered images. The framework is straightforward and has the following advantages. First, it does not require any a priori statistical facial texture. Second, it does not require any a priori transition model for the 3D motion. Video sequences featuring large head motions, large facial animations, and external illumination variations are successfully tracked, demonstrating the efficiency of the framework.
Record number: C2004-049
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICPR.2004.1334653
Online publication date: 20/09/2004
Online: https://doi.org/10.1109/ICPR.2004.1334653
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103049
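The "local exhaustive and directed search" used above to climb the observation likelihood can be sketched as follows. The toy quadratic likelihood, step size, and two-dimensional parameter vector are assumptions for illustration, not the paper's actual appearance model:

```python
import numpy as np

def local_search_step(params, log_likelihood, step=0.05):
    """One steepest-ascent step via local exhaustive search:
    try +/- step along each parameter axis and keep the single
    move that most increases the observation log-likelihood."""
    best = params.copy()
    best_ll = log_likelihood(params)
    for i in range(len(params)):
        for delta in (-step, step):
            cand = params.copy()
            cand[i] += delta
            ll_cand = log_likelihood(cand)
            if ll_cand > best_ll:
                best, best_ll = cand, ll_cand
    return best, best_ll

# Toy observation log-likelihood: peaked at a hypothetical "true" pose
true_pose = np.array([0.3, -0.2])
def ll(p):
    return -np.sum((p - true_pose) ** 2)

pose = np.zeros(2)           # initial guess
for _ in range(20):          # iterate until the local search stalls
    pose, _ = local_search_step(pose, ll)
```

The appeal of this scheme, as the abstract notes, is that it needs no prior motion model: each frame's parameters are found by hill-climbing from the previous frame's estimate.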