Author detail
Author: Naina Dhingra
Available documents written by this author (1)
Title: Scene understanding and gesture recognition for human-machine interaction
Document type: Thesis/HDR
Authors: Naina Dhingra, Author
Publisher: Zurich : Eidgenössische Technische Hochschule ETH - Ecole Polytechnique Fédérale de Zurich EPFZ
Publication year: 2022
General note: Bibliography
A dissertation submitted to attain the degree of Doctor of Sciences of ETH Zurich
Language: English (eng)
Descriptors: [IGN subject headings] Artificial intelligence
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] object-oriented classification
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] image understanding
[IGN terms] RGB image
[IGN terms] human-machine interaction
[IGN terms] eye tracking
[IGN terms] automatic recognition
[IGN terms] pattern recognition
[IGN terms] gesture recognition
[IGN terms] recurrent neural network
[IGN terms] scene
[IGN terms] computer vision
Abstract: (author) Scene understanding and gesture recognition are useful for a myriad of applications such as human-robot interaction, assisting blind and visually impaired people, advanced driver assistance systems, and autonomous driving. To work autonomously in real-world environments, automatic systems need to deliver non-verbal information to enhance verbal communication, in particular for blind people. We explore a holistic approach to providing both scene- and gesture-related information. We propose that incorporating attention mechanisms into neural networks, which behave similarly to attention in the human brain, and conducting an integrated study using neural networks in real time can yield significant improvements in scene and gesture understanding, thereby enhancing the user experience. In this thesis, we investigate the understanding of visual scenes and gestures. We explore these two areas by proposing novel architectures, training methods, user studies, and thorough evaluations. We show that, for deep learning approaches, attention or self-attention mechanisms improve and push the boundaries of network performance for the different tasks under consideration. We suggest that the various kinds of gestures can complement and supplement each other's information, leading to a better understanding of non-verbal conversation; hence, integrated gesture comprehension is useful. First, we focus on visual scene understanding using scene graph generation. We propose BGT-Net, a new network that uses an object detection model with 1) bidirectional gated recurrent units for object-object communication and 2) transformer encoders with self-attention to classify the objects and their relationships. We address the problem of bias caused by the long-tailed distribution of the dataset, which enables the network to perform even on objects or relationships unseen in the dataset. Second, we propose to learn hand gesture recognition from RGB and RGB-D videos using attention learning. We present a novel architecture based on residual connections and an attention mechanism. Our approach successfully detects hand gestures when evaluated on three open-source datasets. Third, we explore pointing gesture recognition and localization using open-source software, i.e. OpenPtrack, which uses a deep-learning-based network to track multiple persons in the scene. We use a Kinect sensor as the input device and conduct a user study with 26 users to evaluate the system in two setup types. Fourth, we propose a technique for eye-gaze tracking using OpenFace, which is based on a deep learning model and an RGB webcam. We use support vector machine regression to estimate the position of the eye gaze on the screen. In a study with 28 users, we show that this system can perform similarly to expensive commercial eye trackers. Finally, we focus on 3D head pose estimation using two models: 1) headPosr includes residual connections in the base network followed by a transformer encoder; it outperforms existing models but has the drawback of being computationally expensive; 2) lwPosr uses depthwise separable convolutions and transformer encoders in a fine-grained, two-stream network to estimate the three angles of the head pose. We demonstrate that this method predicts head poses better than state-of-the-art lightweight networks. (Minimal illustrative sketches of an object-context encoder, the gaze regression step, and a depthwise separable convolution block appear after this record.)
Contents note:
1- Introduction
2- Background
3- State of the art
4- Scene graph generation
5- 3D hand gesture recognition
6- Pointing gesture recognition
7- Eye-gaze tracking
8- Head pose estimation
9- Lightweight head pose estimation
10- Summary
Record number: 24039
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Foreign thesis
Thesis note: PhD Thesis : Sciences : ETH Zurich : 2022
DOI: none
Online: https://www.research-collection.ethz.ch/handle/20.500.11850/559347
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101876
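
The following is a minimal sketch, not BGT-Net itself, of the object-context pattern the abstract describes: per-object detector features passed through a bidirectional GRU for object-object communication, then a transformer encoder whose self-attention contextualizes the objects before classification. It is written in PyTorch; the feature dimension, head count, layer count, and class count are illustrative assumptions, not the thesis's actual configuration.

import torch
import torch.nn as nn

class ObjectContextEncoder(nn.Module):
    def __init__(self, feat_dim=256, num_classes=150):
        super().__init__()
        # Bidirectional GRU: each object's feature is updated with context
        # from the other objects detected in the same image.
        self.gru = nn.GRU(feat_dim, feat_dim // 2, batch_first=True, bidirectional=True)
        # Transformer encoder: self-attention over the set of objects.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, obj_feats):         # obj_feats: (batch, num_objects, feat_dim)
        ctx, _ = self.gru(obj_feats)      # object-object communication
        ctx = self.encoder(ctx)           # self-attention contextualization
        return self.cls(ctx)              # per-object class logits

# Ten detected objects per image, 256-D features (synthetic stand-ins).
logits = ObjectContextEncoder()(torch.randn(2, 10, 256))
print(logits.shape)                       # torch.Size([2, 10, 150])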
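
Likewise, a minimal sketch of the eye-gaze regression step mentioned in the abstract, assuming scikit-learn: OpenFace-style gaze and head-pose features (a hypothetical 8-dimensional layout, synthetic here) are mapped to on-screen pixel coordinates with one RBF-kernel support vector regressor per screen axis. The feature layout and calibration protocol are assumptions for illustration only.

import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Stand-ins for a calibration session: per-frame webcam features
# (hypothetical 8-D layout) and the known on-screen targets in pixels.
X = rng.normal(size=(200, 8))
y = rng.uniform(0.0, [1920.0, 1080.0], size=(200, 2))

# SVR is single-output, so wrap one regressor per screen axis.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)))
model.fit(X, y)

print(model.predict(X[:1]))  # estimated (x, y) gaze position in pixels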
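
Finally, a sketch of a depthwise separable convolution block of the kind the abstract attributes to lwPosr: a per-channel depthwise convolution followed by a 1x1 pointwise convolution, which cuts parameters and compute relative to a standard convolution. This is the generic building block only, not the thesis's actual two-stream network.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels cheaply.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 64, 64, 64])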