Publisher details
Université de Technologie de Compiègne UTC
Located in:
Compiègne
Documents available from this publisher (3)



Title: A world model enabling information integrity for autonomous vehicles
Document type: Thesis/HDR
Authors: Corentin Sanchez, Author; Philippe Bonnifait, Thesis supervisor; Philippe Xu, Thesis supervisor
Publisher: Compiègne: Université de Technologie de Compiègne UTC
Publication year: 2022
Extent: 198 p.
Format: 21 x 30 cm
General note: Bibliography
Doctoral thesis of the Université de Technologie de Compiègne, specialty Automatic Control and Robotics
Languages: English (eng)
Descriptors: [IGN subject headings] Artificial intelligence
[IGN terms] attention (machine learning)
[IGN terms] road map
[IGN terms] multi-source data
[IGN terms] semantic information
[IGN terms] data integrity
[IGN terms] urban environment
[IGN terms] autonomous navigation
[IGN terms] reasoning
[IGN terms] road network
[IGN terms] mobile robot
[IGN terms] road safety
[IGN terms] driverless vehicle
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) To drive in complex urban environments, autonomous vehicles need to understand their driving context. This task, also known as situation awareness, relies on an internal virtual representation of the world built by the vehicle, called the world model. This representation is generally built from information provided by multiple sources. High-definition navigation maps supply prior information such as the road network topology, a geometric description of the carriageway, and semantic information including traffic laws. The perception system provides a description of the space and of the road users evolving in the vehicle's surroundings. Together, they provide representations of the environment (static and dynamic) and make it possible to model interactions. In complex situations, a reliable and non-misleading world model is mandatory to avoid inappropriate decision-making and to ensure safety. The goal of this PhD thesis is to propose a novel formalism for the concept of world model that fulfills the situation awareness requirements of an autonomous vehicle. This world model integrates prior knowledge of the road network topology, a lane-level grid representation, its prediction over time and, more importantly, a mechanism to control and monitor the integrity of information. The concept of world model is present in many autonomous vehicle architectures but may take many different forms, sometimes only implicitly. In some works it is part of the perception process, while in others it belongs to the decision-making process. The first contribution of this thesis is a survey of the concept of world model for autonomous driving, covering different levels of abstraction for information representation and reasoning. Then, a novel representation is proposed for the world model at the tactical level, combining dynamic objects and spatial occupancy information. First, a graph-based top-down approach using a high-definition map is proposed to extract the areas of interest with respect to the situation from the vehicle's perspective. It is then used to build a Lane Grid Map (LGM), an intermediate space-state representation from the ego-vehicle point of view. A top-down approach is chosen to assess and characterize the relevant information of the situation. In addition to the classical free and occupied states, the unknown state is further characterized by the notions of neutralized and safe areas, which provide a deeper level of understanding of the situation. Another contribution to the world model is an integrity management mechanism built upon the LGM representation. It consists of managing the spatial sampling of the grid cells in order to take localization and perception errors into account and to avoid misleading information. Regardless of the confidence in the localization and perception information, the LGM is capable of providing reliable information to decision-making so that hazardous decisions are not taken. The last part of the situation awareness strategy is the prediction of the world model based on the LGM representation. The main contribution is to show how a classical object-level prediction fits this representation and that integrity can also be extended to the prediction stage. It is also shown how a neutralized area can be used at the prediction stage to provide a better situation prediction.
The work relies on experimental data to demonstrate a real application of a complex situation awareness representation. The approach is evaluated with real data obtained with several experimental vehicles equipped with LiDAR sensors and an IMU with RTK corrections in the city of Compiègne. A high-definition map has also been used in the framework of the SIVALab joint laboratory between Renault and Heudiasyc CNRS-UTC. The world model module has been implemented (with ROS software) in order to fulfill real-time application requirements and is functional on the experimental vehicles for live demonstrations.
Contents note: General introduction
1- World model for autonomous vehicles
2- An architecture for WM
3- A lane level world model
4- Set-based LGM prediction
General conclusion
Record number: 24089
Author affiliation: non-IGN
Theme: COMPUTER SCIENCE
Nature: French thesis
Thesis note: Doctoral thesis: Automatic Control and Robotics: UTC Compiègne: 2022
Host organization: Laboratoire Heudiasyc
DOI: none
Online: https://www.theses.fr/2022COMP2683
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102509
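The Lane Grid Map described in the abstract above is, at its core, a lane-aligned array of cells whose states go beyond the classical free/occupied dichotomy. As a purely illustrative aid, here is a minimal Python sketch of such a structure, assuming hypothetical names (LaneGridMap, CellState) and a crude integrity margin that dilates occupied cells to absorb a bounded localization error; it is a sketch of the idea, not the implementation from the thesis.

```python
# Illustrative sketch only: a lane-level grid with free/occupied/unknown cells,
# where "unknown" can be refined into "neutralized" or "safe", and occupied
# cells are dilated by an integrity margin to absorb bounded localization error.
from enum import Enum

class CellState(Enum):
    FREE = "free"
    OCCUPIED = "occupied"
    UNKNOWN = "unknown"          # not observed by perception
    NEUTRALIZED = "neutralized"  # unknown, but unreachable by any road user
    SAFE = "safe"                # unknown, but guaranteed empty by map/traffic rules

class LaneGridMap:
    def __init__(self, n_cells, cell_length_m):
        self.cell_length_m = cell_length_m
        self.cells = [CellState.UNKNOWN] * n_cells

    def set_observation(self, index, occupied):
        self.cells[index] = CellState.OCCUPIED if occupied else CellState.FREE

    def refine_unknown(self, index, state):
        # Only an unknown cell may be re-characterized as neutralized or safe.
        if self.cells[index] is CellState.UNKNOWN and state in (CellState.NEUTRALIZED, CellState.SAFE):
            self.cells[index] = state

    def inflate_occupied(self, localization_error_m):
        # Pessimistically mark free neighbours of occupied cells as occupied so
        # that a bounded localization error cannot make the grid misleading.
        margin = int(round(localization_error_m / self.cell_length_m))
        occupied = [i for i, c in enumerate(self.cells) if c is CellState.OCCUPIED]
        for i in occupied:
            for j in range(max(0, i - margin), min(len(self.cells), i + margin + 1)):
                if self.cells[j] is CellState.FREE:
                    self.cells[j] = CellState.OCCUPIED

if __name__ == "__main__":
    lgm = LaneGridMap(n_cells=10, cell_length_m=1.0)
    lgm.set_observation(4, occupied=True)
    lgm.set_observation(5, occupied=False)
    lgm.refine_unknown(0, CellState.SAFE)
    lgm.inflate_occupied(localization_error_m=1.5)
    print([c.value for c in lgm.cells])
```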
Title: Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles
Document type: Thesis/HDR
Authors: Abdelhak Loukkal, Author; Yves Grandvalet, Thesis supervisor
Publisher: Compiègne: Université de Technologie de Compiègne UTC
Publication year: 2021
Extent: 129 p.
Format: 21 x 30 cm
General note: Bibliography
Thesis presented for the degree of Doctor of the UTC, specialty Computer Science
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image understanding
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] multi-source data fusion
[IGN terms] autonomous navigation
[IGN terms] pattern recognition
[IGN terms] deep neural network
[IGN terms] semantic segmentation
[IGN terms] navigation system
[IGN terms] motor vehicle
[IGN terms] monocular vision
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) During this thesis, some perception approaches for self-driving vehicles were developed using deep convolutional neural networks applied to monocular camera images and High-Definition map (HD-map) rasterized images. We focused on camera-only solutions instead of leveraging sensor fusion with range sensors because cameras are the most cost-effective and discreet sensors. The objective was also to show that camera-based approaches can perform on par with LiDAR-based solutions on certain 3D vision tasks. Real-world data was used for training and evaluation of the developed approaches, but simulation was also leveraged when annotated data was lacking or for safety reasons when evaluating driving capabilities. Cameras provide visual information in a projective space where the perspective effect does not preserve the homogeneity of distances. Scene understanding tasks such as semantic segmentation are therefore often performed in the camera-view space and then projected to 3D using a precise depth sensor such as a LiDAR. Having this scene understanding in 3D space is useful because vehicles evolve in the 3D world and navigation algorithms reason in this space. Our focus was thus to leverage the geometric knowledge of the camera parameters and its position in the 3D world to develop an approach that allows scene understanding in 3D space using only a monocular image as input. Neural networks have also proven to be useful for more than just perception and are increasingly used for the navigation and planning tasks that build on the perception outputs. Being able to output 3D scene understanding information from a monocular camera has also allowed us to explore the possibility of an end-to-end holistic neural network that takes a camera image as input, extracts intermediate semantic information in the 3D space and then plans the vehicle's trajectory.
Contents note: 1. Introduction
1.1 General context
1.2 Framework and objectives
1.3 Organization and contributions of the thesis
2. Background and related work
2.1 Introduction
2.2 Autonomous driving perception datasets
2.3 Autonomous driving simulators
2.4 Semantic segmentation with CNNs
2.5 Monocular depth estimation with CNNs
2.6 Driving with imitation learning
2.7 Conclusion
3. Semantic segmentation using cartographic and depth maps
3.1 Introduction
3.2 Synthetic dataset
3.3 Proposed methods
3.4 Experiments
3.5 Conclusion
4. Disparity weighted loss for semantic segmentation
4.1 Introduction
4.2 Disparity weighting for semantic segmentation
4.3 Experiments
4.4 Conclusion
5. FlatMobileNet: Bird-Eye-View semantic masks from a monocular camera
5.1 Introduction
5.2 Theoretical framework
5.3 FlatMobile network: footprint segmentation
5.4 Conclusion
6. Driving among flatmobiles
6.1 Introduction
6.2 Encoder-decoder LSTM for trajectory planning
6.3 Experimental evaluation
6.4 Conclusion
7. Conclusion
7.1 Contributions
7.2 Perspectives
Record number: 26769
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: French thesis
Thesis note: Doctoral thesis: Computer Science: Compiègne: 2021
Host organization: Heuristique et Diagnostic des Systèmes Complexes HeuDiaSyC
nature-HAL: Thesis
DOI: none
Online publication date: 25/10/2021
Online: https://tel.hal.science/tel-03402541/
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99871
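The bird's-eye-view footprint masks mentioned in this record rest on the geometric fact that, given the camera parameters and a flat-ground assumption, a camera-view pixel can be mapped to a point on the road plane. The short Python sketch below illustrates only that inverse perspective mapping step; the intrinsic matrix, camera height and function name are assumed placeholder values, not the thesis's FlatMobile network.

```python
# Illustrative sketch only: map a pixel from a monocular camera to a point on a
# flat ground plane (inverse perspective mapping), the geometric step behind
# projecting camera-view semantic masks to a bird's-eye view.
# The intrinsics, camera height and flat-ground assumption are placeholders.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],   # assumed focal lengths / principal point
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
CAMERA_HEIGHT_M = 1.5                    # assumed height of the camera above the road

def pixel_to_ground(u, v):
    """Return (x_forward, y_left) ground coordinates of pixel (u, v), assuming a
    level camera and a flat road; None if the pixel lies at or above the horizon
    and therefore never meets the ground."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # x right, y down, z forward
    if ray_cam[1] <= 0.0:
        return None
    t = CAMERA_HEIGHT_M / ray_cam[1]      # scale at which the ray reaches the ground
    x_forward = t * ray_cam[2]
    y_left = -t * ray_cam[0]
    return x_forward, y_left

if __name__ == "__main__":
    # A pixel well below the principal point lands a few metres ahead of the car.
    print(pixel_to_ground(640.0, 600.0))
```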
Title: Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving
Document type: Thesis/HDR
Authors: Edouard Capellier, Author; Véronique Berge-Cherfaoui, Thesis supervisor; Franck Davoine, Thesis supervisor
Publisher: Compiègne: Université de Technologie de Compiègne UTC
Publication year: 2020
Extent: 123 p.
Format: 21 x 30 cm
General note: Bibliography
Thesis presented for the degree of Doctor of the UTC, in Robotics and Information and Systems Science and Technology
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] road map
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] lidar data
[IGN terms] RGB image
[IGN terms] artificial intelligence
[IGN terms] autonomous navigation
[IGN terms] semantic segmentation
[IGN terms] Dempster-Shafer theory
[IGN terms] computer vision
[IGN terms] 3D visualization
Decimal index: THESE Theses and HDR
Abstract: (author) The perception task is paramount for self-driving vehicles. Being able to extract accurate and significant information from sensor inputs is mandatory, so as to ensure safe operation. The recent progress of machine-learning techniques has revolutionized the way perception modules for autonomous driving are developed and evaluated, while vastly surpassing previous state-of-the-art results in practically all perception-related tasks. Therefore, efficient and accurate ways to model the knowledge used by a self-driving vehicle are mandatory. Indeed, self-awareness, and appropriate modeling of doubt, are desirable properties for such a system. In this work, we assumed that evidence theory was an efficient way to finely model the information extracted from deep neural networks. Based on these intuitions, we developed three perception modules that rely on machine learning and evidence theory. These modules were tested on real-life data. First, we proposed an asynchronous evidential occupancy grid mapping algorithm that fused semantic segmentation results obtained from RGB images and LIDAR scans. Its asynchronous nature makes it particularly efficient at handling sensor failures. The semantic information is used to define decay rates at the cell level and to handle potentially moving objects. Then, we proposed an evidential classifier of LIDAR objects. This system is trained to distinguish between vehicles and vulnerable road users, which are detected via a clustering algorithm. The classifier can be reinterpreted as performing a fusion of simple evidential mass functions. Moreover, a simple statistical filtering scheme can be used to filter outputs of the classifier that are incoherent with regard to the training set, so as to allow the classifier to work in an open world and reject other types of objects. Finally, we investigated the possibility of performing road detection in LIDAR scans with deep neural networks. We proposed two architectures inspired by recent state-of-the-art LIDAR processing systems. A training dataset was acquired and labeled in a semi-automatic fashion from road maps. A set of fused neural networks reaches satisfactory results, which allowed us to use them in an evidential road mapping and object detection algorithm that manages to run at 10 Hz.
Contents note: 1- Introduction
2- Machine learning for perception in autonomous driving
3- The evidence theory, and its applications in autonomous driving
4- Asynchronous evidential grid mapping from RGB images and LIDAR scans
5- Evidential LIDAR object classification
6- Road detection in LIDAR scans
7- Application of RoadSeg: evidential road surface mapping
8- Conclusion
Record number: 25895
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: French thesis
Thesis note: Doctoral thesis: Robotics and Information and Systems Science and Technology: UTC: 2020
Host organization: Laboratoire Heudiasyc
nature-HAL: Thesis
DOI: none
Online: https://hal.science/tel-02897810v1
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96013
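The evidential perception modules in this record all rely on combining belief mass functions in the sense of Dempster-Shafer theory. As an illustrative sketch only, the Python snippet below applies Dempster's rule of combination on the two-element frame {free, occupied}, the kind of cell-level fusion an evidential occupancy grid performs when merging camera- and LIDAR-derived evidence; the example masses and names are assumptions, not code from the thesis.

```python
# Illustrative sketch only: Dempster's rule of combination on the frame
# {free, occupied}, with mass assigned to "free", "occupied" and to the whole
# frame "unknown" (ignorance). This is the cell-level fusion step an evidential
# occupancy grid would apply to camera- and LIDAR-derived evidence.
FREE, OCC, UNK = "free", "occupied", "unknown"   # "unknown" stands for {free, occupied}

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule (conjunctive combination
    followed by normalization of the conflicting mass)."""
    conflict = m1[FREE] * m2[OCC] + m1[OCC] * m2[FREE]
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 / (1.0 - conflict)
    return {
        FREE: k * (m1[FREE] * m2[FREE] + m1[FREE] * m2[UNK] + m1[UNK] * m2[FREE]),
        OCC:  k * (m1[OCC] * m2[OCC] + m1[OCC] * m2[UNK] + m1[UNK] * m2[OCC]),
        UNK:  k * (m1[UNK] * m2[UNK]),
    }

if __name__ == "__main__":
    camera_evidence = {FREE: 0.6, OCC: 0.1, UNK: 0.3}   # assumed example masses
    lidar_evidence = {FREE: 0.2, OCC: 0.5, UNK: 0.3}
    fused = combine(camera_evidence, lidar_evidence)
    print({k: round(v, 3) for k, v in fused.items()})   # masses still sum to 1
```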