Descriptor
Documents available in this category (42)
Real-time multimodal semantic scene understanding for autonomous UGV navigation / Yifei Zhang (2021)
Title: Real-time multimodal semantic scene understanding for autonomous UGV navigation
Document type: Thesis/HDR
Authors: Yifei Zhang, Author; Fabrice Mériaudeau, Thesis supervisor; Désiré Sidibé, Thesis supervisor
Publisher: Dijon: Université Bourgogne Franche-Comté UBFC
Publication year: 2021
Extent: 114 p.
Format: 21 x 30 cm
General note: Bibliography. Thesis submitted for the doctorate of the Université Bourgogne Franche-Comté, specialty Instrumentation and Image Informatics
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] données polarimétriques
[Termes IGN] fusion d'images
[Termes IGN] image RVB
[Termes IGN] intégration de données
[Termes IGN] navigation autonome
[Termes IGN] segmentation sémantique
[Termes IGN] temps réel
[Termes IGN] véhicule sans pilote
Decimal index: THESE Theses and HDR
Abstract: (Author) Robust semantic scene understanding is challenging due to complex object types, as well as environmental changes caused by varying illumination and weather conditions. This thesis studies the problem of deep semantic segmentation with multimodal image inputs. Multimodal images captured from various sensory modalities provide complementary information for complete scene understanding. We provided effective solutions for fully-supervised multimodal image segmentation and few-shot semantic segmentation of outdoor road scenes. For the former, we proposed a multi-level fusion network to integrate RGB and polarimetric images. A central fusion framework was also introduced to adaptively learn joint representations of modality-specific features and to reduce model uncertainty via statistical post-processing. For semi-supervised semantic scene understanding, we first proposed a novel few-shot segmentation method based on the prototypical network, which employs multiscale feature enhancement and an attention mechanism. We then extended the RGB-centric algorithms to take advantage of supplementary depth cues. Comprehensive empirical evaluations on different benchmark datasets demonstrate that all the proposed algorithms achieve superior accuracy and show the effectiveness of complementary modalities for outdoor scene understanding in autonomous navigation.
Contents: 1. Introduction
1.1 Context and Motivation
1.2 Background and Challenges
1.3 Contributions
1.4 Organization
2. Background on Neural Networks
2.1 Basic Concepts
2.2 Neural Network Layers
2.3 Optimization
2.4 Model Training
2.5 Evaluation Metrics
2.6 Summary
3. Literature Review
3.1 Fully-supervised Semantic Image Segmentation
3.2 Datasets
3.3 Summary
4. Deep Multimodal Fusion for Semantic Image Segmentation
4.1 CMNet: Deep Multimodal Fusion
4.2 A Central Multimodal Fusion Framework
4.3 Summary
5. Few-shot Semantic Image Segmentation
5.1 Introduction on Few-shot Segmentation
5.2 MAPnet: A Multiscale Attention-Based Prototypical Network
5.3 RDNet: Incorporating Depth Information into Few-shot Segmentation
5.4 Summary
6. Conclusion and Future Work
6.1 General Conclusion
6.2 Future Perspectives
Record number: 26527
Author affiliation: non IGN
Theme: IMAGERIE
Nature: French thesis
Thesis note: Doctoral thesis: Instrumentation and Image Informatics: Bourgogne: 2021
nature-HAL: Thèse
Online publication date: 02/03/2021
Online: https://hal.science/tel-03154783v1
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97556

A history of laser scanning, Part 1: space and defense applications / Adam P. Spring in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 7 (July 2020)
[article]
Title: A history of laser scanning, Part 1: space and defense applications
Document type: Article/Communication
Authors: Adam P. Spring, Author
Publication year: 2020
Article pages: pp 419-429
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] balayage laser
[Termes IGN] capteur à balayage
[Termes IGN] défense nationale
[Termes IGN] histoire des sciences et techniques
[Termes IGN] navigation autonome
[Termes IGN] secteur spatial
[Termes IGN] semis de points
[Termes IGN] véhicule sans pilote
Abstract: (Author) This article presents the origins and evolution of midrange terrestrial laser scanning (TLS), spanning primarily from the 1950s to the time of publication. Particular attention is given to developments in hardware and software that document the physical dimensions of a scene as a point cloud. These developments include parameters for accuracy, repeatability, and resolution in the midrange (millimeter and centimeter levels when recording objects at building and landscape scales up to a kilometer away). The article is split into two parts: part one starts with early space and defense applications, and part two examines the survey applications that formed around TLS technologies in the 1990s. The origins of midrange TLS, ironically, begin in space and defense applications, which shaped the development of sensors and information processing via autonomous vehicles. Included are planetary rovers, space shuttles, robots, and land vehicles designed for relative navigation in hostile environments like space and war zones. Key people in the midrange TLS community were consulted throughout the 10-year period over which this article was written. A multilingual and multidisciplinary literature review, comprising media written or produced in Chinese, English, French, German, Japanese, Italian, and Russian, was also an integral part of this research.
Record number: A2020-381
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.86.7.419
Online publication date: 01/07/2020
Online: https://doi.org/10.14358/PERS.86.7.419
Electronic resource format: URL
Article Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95426
in Photogrammetric Engineering & Remote Sensing, PERS > vol 86 n° 7 (July 2020). - pp 419-429 [article]
Copies (1): Barcode 105-2020071 | Shelf mark SL | Medium: Journal | Location: Centre de documentation | Section: Journals room | Status: Available

Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Title: Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving
Document type: Thesis/HDR
Authors: Edouard Capellier, Author; Véronique Berge-Cherfaoui, Thesis supervisor; Franck Davoine, Thesis supervisor
Publisher: Compiègne: Université de Technologie de Compiègne UTC
Publication year: 2020
Extent: 123 p.
Format: 21 x 30 cm
General note: Bibliography. Thesis submitted for the degree of Doctor of the UTC, Robotics and Information and Systems Sciences and Technologies
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] apprentissage profond
[Termes IGN] carte routière
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] image RVB
[Termes IGN] intelligence artificielle
[Termes IGN] navigation autonome
[Termes IGN] segmentation sémantique
[Termes IGN] théorie de Dempster-Shafer
[Termes IGN] vision par ordinateur
[Termes IGN] visualisation 3D
Decimal index: THESE Theses and HDR
Abstract: (author) The perception task is paramount for self-driving vehicles. Being able to extract accurate and meaningful information from sensor inputs is mandatory to ensure safe operation. Recent progress in machine-learning techniques has revolutionized the way perception modules for autonomous driving are developed and evaluated, vastly surpassing previous state-of-the-art results in practically all perception-related tasks. Efficient and accurate ways to model the knowledge used by a self-driving vehicle are therefore mandatory; indeed, self-awareness and appropriate modeling of doubt are desirable properties for such a system. In this work, we assumed that evidence theory is an efficient way to finely model the information extracted from deep neural networks. Based on these intuitions, we developed three perception modules that rely on machine learning and evidence theory, and tested them on real-life data. First, we proposed an asynchronous evidential occupancy grid mapping algorithm that fuses semantic segmentation results obtained from RGB images with LIDAR scans. Its asynchronous nature makes it particularly efficient at handling sensor failures. The semantic information is used to define decay rates at the cell level and to handle potentially moving objects. Then, we proposed an evidential classifier of LIDAR objects, trained to distinguish between vehicles and vulnerable road users detected via a clustering algorithm. The classifier can be reinterpreted as performing a fusion of simple evidential mass functions. Moreover, a simple statistical filtering scheme can reject classifier outputs that are incoherent with respect to the training set, allowing the classifier to work in an open world and reject other types of objects.
Finally, we investigated the possibility of performing road detection in LIDAR scans with deep neural networks. We proposed two architectures inspired by recent state-of-the-art LIDAR processing systems. A training dataset was acquired and labeled in a semi-automatic fashion from road maps. A set of fused neural networks reaches satisfactory results, which allowed us to use them in an evidential road mapping and object detection algorithm that runs at 10 Hz.
Contents: 1- Introduction
2- Machine learning for perception in autonomous driving
3- The evidence theory, and its applications in autonomous driving
4- Asynchronous evidential grid mapping from RGB images and LIDAR scans
5- Evidential LIDAR object classification
6- Road detection in LIDAR scans
7- Application of RoadSeg: evidential road surface mapping
8- Conclusion
Record number: 25895
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: French thesis
Thesis note: Doctoral thesis: Robotics and Information and Systems Sciences and Technologies: UTC: 2020
Internship organization: Laboratoire Heudiasyc
nature-HAL: Thèse
DOI: none
Online: https://hal.science/tel-02897810v1
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96013
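The abstract above describes a classifier reinterpreted as a fusion of simple evidential mass functions under Dempster-Shafer theory. As an illustrative sketch only (this is not the thesis code; the function name and example masses are hypothetical), Dempster's rule of combination over a two-class frame {vehicle, vulnerable road user} can be written as:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of labels -> mass)
    with Dempster's rule: intersect focal sets, multiply masses,
    then renormalize by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Frame of discernment: vehicle (V) or vulnerable road user (U)
V, U = frozenset({"V"}), frozenset({"U"})
VU = V | U  # ignorance: "either class"

m_a = {V: 0.6, VU: 0.4}          # a weak source favouring "vehicle"
m_b = {V: 0.5, U: 0.2, VU: 0.3}  # a second, partly conflicting source
m = dempster_combine(m_a, m_b)
```

Assigning mass to the full frame (here `VU`) is what lets such a model express doubt rather than forcing a class decision, which is the property the abstract highlights for open-world operation.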
Title: Artificial intelligence applications to smart city and smart enterprise
Document type: Monograph
Authors: Donato Impedovo, Scientific editor; Giuseppe Pirlo, Scientific editor
Publisher: Basel [Switzerland]: Multidisciplinary Digital Publishing Institute MDPI
Publication year: 2020
Extent: 374 p.
Format: 16 x 24 cm
ISBN/ISSN/EAN: 978-3-03936-438-1
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Intelligence artificielle
[Termes IGN] algorithme génétique
[Termes IGN] apprentissage automatique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] gestion urbaine
[Termes IGN] Inférence floue
[Termes IGN] métadonnées
[Termes IGN] navigation autonome
[Termes IGN] planification urbaine
[Termes IGN] système de transport intelligent
[Termes IGN] trafic routier
[Termes IGN] ville intelligente
[Termes IGN] vision par ordinateur
Abstract: (publisher) Smart cities operate under more resource-efficient management and economies than ordinary cities. As such, advanced business models have emerged around smart cities, leading to the creation of smart enterprises and organizations that depend on advanced technologies. This book includes 21 selected and peer-reviewed articles spanning the wide spectrum of artificial intelligence applications to smart cities. Chapters cover the following areas of interest: vehicular traffic prediction, social big data analysis, smart city management, driving and routing, localization, safety, health, and quality of life.
Contents: 1- Artificial intelligence applications to smart city and smart enterprise
2- Global spatial-temporal graph convolutional network for urban traffic speed prediction
3- TrafficWave: Generative deep learning architecture for vehicular traffic flow prediction
4- Grassmann manifold based state analysis method of traffic surveillance video
5- Improved spatio-temporal residual networks for bus traffic flow prediction
6- Sehaa: A big data analytics tool for healthcare symptoms and diseases detection using Twitter, Apache Spark, and machine learning
7- Smart cities big data algorithms for sensors location
8- Managing a smart city integrated model through smart program management
9- Conceptual framework of an intelligent decision support system for smart city disaster management
10- Vision-based potential pedestrian risk analysis on unsignalized crosswalk using data mining techniques
11- Development of deep learning based human-centered threat assessment for application to automated driving vehicle
12- Modeling and solution of the routing problem in vehicular Delay-Tolerant networks: A dual, deep learning perspective
13- “Texting & Driving” detection using deep convolutional neural networks
14- Deep learning system for vehicular re-routing and congestion avoidance
15- Identifying foreign tourists’ nationality from mobility traces via LSTM neural network and location embeddings
16- Feature adaptive and cyclic dynamic learning based on infinite term memory extreme learning machine
17- LSTM DSS automatism and dataset optimization for diabetes prediction
18- Convolutional models for the detection of firearms in surveillance videos
19- PARNet: A joint loss function and dynamic weights network for pedestrian semantic attributes recognition of smart surveillance image
20- Supervised machine-learning predictive analytics for national quality of life scoring
21- Bacterial foraging-based algorithm for optimizing the power generation of an isolated microgrid
22- Optimization of EPB shield performance with adaptive neuro-fuzzy inference system and genetic algorithm
Record number: 28448
Author affiliation: non IGN
Theme: GEOMATIQUE/INFORMATIQUE/URBANISME
Nature: Collection / edited volume
DOI: 10.3390/books978-3-03936-438-1
Online: https://doi.org/10.3390/books978-3-03936-438-1
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98929
Title: Intelligent processing on image and optical information
Document type: Monograph
Authors: Seakwon Yeom, Scientific editor
Publisher: Basel [Switzerland]: Multidisciplinary Digital Publishing Institute MDPI
Publication year: 2020
Extent: 324 p.
Format: 16 x 24 cm
ISBN/ISSN/EAN: 978-3-03936-945-4
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement de lignes
[Termes IGN] apprentissage automatique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] détection de changement
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion d'images
[Termes IGN] image hyperspectrale
[Termes IGN] navigation autonome
[Termes IGN] optimisation (mathématiques)
[Termes IGN] réseau antagoniste génératif
[Termes IGN] segmentation d'image
Abstract: (publisher) This book focuses on the intelligent processing of images and optical information acquired by various imaging methods. Intelligent image and optical information processing has paved the way for the recent epoch of the new intelligence and information era. Certainly, information acquired by various imaging techniques is of tremendous value; an intelligent analysis of it is thus necessary to make the best use of it. A broad range of research fields is covered. Many studies focus on object classification and detection; registration, segmentation, and fusion are performed between series of images. Many valuable and up-to-date technologies for solving real problems are provided in the selected papers.
Contents: 1- Special issue on intelligent processing on image and optical information
2- Change detection of water resources via remote sensing: An L-V-NSCT approach
3- A texture classification approach based on the integrated optimization for parameters and features of Gabor filter via hybrid ant lion optimizer
4- Real-time automated segmentation and classification of calcaneal fractures in CT images
5- Automatic zebrafish egg phenotype recognition from bright-field microscopic images using deep convolutional neural network
6- Zebrafish larvae phenotype classification from bright-field microscopic images using a two-tier deep-learning pipeline
7- Unsupervised generation and synthesis of facial images via an auto-encoder-based deep generative adversarial network
8- Detecting green mold pathogens on lemons using hyperspectral images
9- Review on computer aided weld defect detection from radiography images
10- Feature extraction with discrete non-separable shearlet transform and its application to surface inspection of continuous casting slabs
11- A novel extraction method for wildlife monitoring images with wireless multimedia sensor networks (WMSNs)
12- IMU-aided high-frequency Lidar odometry for autonomous driving
13- Determination of the optimal state of dough fermentation in bread production by using optical sensors and deep learning
14- Multi-sensor face registration based on global and local structures
15- Multifocus image fusion using a sparse and low-rank matrix decomposition for aviator’s night vision Goggle
16- Error resilience for block compressed sensing with multiple-channel transmission
17- Image completion with hybrid interpolation in tensor representation
18- A correction method for heat wave distortion in digital image correlation measurements based on background-oriented schlieren
19- An effective optimization method for machine learning based on ADAM
20- Boundary matching and interior connectivity-based cluster validity analysis
Record number: 28438
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Collection / edited volume
DOI: 10.3390/books978-3-03936-945-4
Online: https://doi.org/10.3390/books978-3-03936-945-4
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98875

Other documents in this category:
- Experimental results of multipath behavior for GPS L1-L2 and Galileo E1-E5b in static and kinematic scenarios / Alexandra Avram in Journal of applied geodesy, Vol 13 n° 4 (October 2019)
- Towards visual urban scene understanding for autonomous vehicle path tracking using GPS positioning data / Citlalli Gamez Serna (2019)
- De la navigation connectée à la voiture autonome - partie 2 Et mon tout est un véhicule autonome / Hubert d'Erceville in SIGmag, n° 17 (juin 2018)
- De la navigation connectée à la voiture autonome - partie 1 la navigation au coeur de l'automobile / Hubert d'Erceville in SIGmag, n° 16 (mars 2018)
- Machine learning and pose estimation for autonomous robot grasping with collaborative robots / Victor Talbot (2018)
- Vision stéréoscopique temps-réel pour la navigation autonome d'un robot en environnement dynamique / Maxime Derome (2017)
- Autonomous navigation in complex nonplanar environments based on laser ranging / Philipp Andreas Krüsi (2016)
- Correction de nuages de points lidar embarqué sur véhicule pour la reconstruction d'environnement 3D vaste / Pierre Merriaux (2016)
- Algorithms for vision-based path following along previously taught paths / Deon George Sabatta (2015)
- Generation of an integrated 3D city model with visual landmarks for autonomous navigation in dense urban areas / Bahman Soheilian (June 2013)
- JNRR'03, quatrièmes journées nationales de recherche en robotique, 8 - 10 Octobre 2003, Clermont-Ferrand, France / Philippe Bidaud (2003)
- Contribution à la modélisation topologique par vision 2D et 3D pour la navigation d'un robot mobile sur terrain naturel / Carlos Alberto Parra Rodriguez (1999)
- Ein Modell für die hochgenaue Navigation autonomer flächenbeweglicher Fahrzeuge / J. Kusche (1994)