Descriptor: pose estimation (estimation de pose)

Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)
[article]
Title: Semi-supervised joint learning for hand gesture recognition from a single color image
Document type: Article/Communication
Authors: Chi Xu, Author; Yunkai Jiang, Author; Jun Zhou, Author; et al.
Publication year: 2021
Article on page(s): n° 1007
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] semi-supervised learning
[IGN descriptor terms] object detection
[IGN descriptor terms] pose estimation
[IGN descriptor terms] color image
[IGN descriptor terms] dataset
[IGN descriptor terms] gesture recognition
Abstract: (author) Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning-based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. In the training process, a semi-supervised training scheme is designed to solve the problem of lacking proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that the gesture recognition accuracy of our method is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
Record number: A2021-160
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/s21031007
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21031007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97076
in Sensors > vol 21 n° 3 (February 2021) . - n° 1007 [article]
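The joint scheme described in the abstract (a shared feature feeding both a gesture classifier and a 3D pose regressor, with pose supervision available for only part of the data) can be illustrated as follows. This is a minimal sketch under assumed layer sizes, module names and loss weighting; it is not the authors' architecture.

```python
# Minimal sketch (hypothetical, not the paper's code): a shared encoder feeds a
# gesture-classification head and a 3D hand-pose head; samples without pose
# annotation contribute no pose loss, which is the semi-supervised ingredient.
import torch
import torch.nn as nn

class JointHandNet(nn.Module):
    def __init__(self, num_gestures=10, num_joints=21):
        super().__init__()
        self.encoder = nn.Sequential(                     # shared intermediate feature
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gesture_head = nn.Linear(64, num_gestures)   # gesture logits
        self.pose_head = nn.Linear(64, num_joints * 3)    # 3D joint coordinates

    def forward(self, x):
        f = self.encoder(x)
        return self.gesture_head(f), self.pose_head(f)

def joint_loss(gesture_logits, pose_pred, gesture_gt, pose_gt, has_pose):
    """has_pose is a float {0,1} mask; pose_gt may hold zeros where unlabelled."""
    cls = nn.functional.cross_entropy(gesture_logits, gesture_gt)
    per_sample = ((pose_pred - pose_gt) ** 2).mean(dim=1)
    reg = (per_sample * has_pose).sum() / has_pose.sum().clamp(min=1)
    return cls + reg
```

Masking the pose term lets images that carry only a gesture label still shape the shared feature through the classification loss.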
3D hand mesh reconstruction from a monocular RGB image / Hao Peng in The Visual Computer, vol 36 n° 10 - 12 (October 2020)
[article]
Title: 3D hand mesh reconstruction from a monocular RGB image
Document type: Article/Communication
Authors: Hao Peng, Author; Chuhua Xian, Author; Yunbo Zhang, Author
Publication year: 2020
Article on page(s): pp 2227 - 2239
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] pose estimation
[IGN descriptor terms] synthetic image
[IGN descriptor terms] RGB image
[IGN descriptor terms] mesh
[IGN descriptor terms] 3D modelling
[IGN descriptor terms] augmented reality
[IGN descriptor terms] virtual reality
[IGN descriptor terms] 3D reconstruction
[IGN descriptor terms] object reconstruction
[IGN descriptor terms] monocular vision
Abstract: (author) Most of the existing methods for 3D hand analysis based on RGB images mainly focus on estimating hand keypoints or poses, which cannot capture geometric details of the 3D hand shape. In this work, we propose a novel method to reconstruct a 3D hand mesh from a single monocular RGB image. Different from current parameter-based or pose-based methods, our proposed method directly estimates the 3D hand mesh based on a graph convolutional neural network (GCN). Our network consists of two modules: the hand localization and mask generation module, and the 3D hand mesh reconstruction module. The first module, which is a VGG16-based network, is applied to localize the hand region in the input image and generate the binary mask of the hand. The second module takes the high-order features from the first module and uses a GCN-based network to estimate the coordinates of each vertex of the hand mesh and reconstruct the 3D hand shape. To achieve better accuracy, a novel loss based on the differential properties of the discrete mesh is proposed. We also use professional software to create a large synthetic dataset that contains both ground-truth 3D hand meshes and poses for training. To handle real-world data, we use the CycleGAN network to transform the data domain of real-world images to that of our synthetic dataset. We demonstrate that our method can produce accurate 3D hand meshes and achieves efficient performance for real-time applications.
Record number: A2020-596
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01908-3
Online publication date: 14/07/2020
Online: https://doi.org/10.1007/s00371-020-01908-3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95936
in The Visual Computer > vol 36 n° 10 - 12 (October 2020) . - pp 2227 - 2239 [article]
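The core building block described in the abstract, a graph convolution over mesh vertices, can be sketched as below. The layer, the row-normalised adjacency and the toy 4-vertex mesh are illustrative assumptions, not the paper's network, which stacks many such layers behind a VGG16-based localisation module.

```python
# Minimal sketch (hypothetical): one graph-convolution layer over mesh vertices,
# aggregating neighbour features through a row-normalised adjacency matrix.
import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        a = adjacency + torch.eye(adjacency.shape[0])   # add self-loops
        self.register_buffer("a_norm", a / a.sum(dim=1, keepdim=True))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                               # x: (num_vertices, in_dim)
        return self.linear(self.a_norm @ x)             # aggregate, then transform

# Toy usage: map 8-d vertex features of a 4-vertex mesh to 3D offsets.
adj = torch.tensor([[0., 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]])
layer = MeshGraphConv(8, 3, adj)
offsets = layer(torch.randn(4, 8))                      # shape (4, 3)
```

A real mesh regressor stacks several such layers with nonlinearities, and typically works across several mesh resolutions.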
Guided feature matching for multi-epoch historical image blocks pose estimation / Lulin Zhang in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
[article]
Title: Guided feature matching for multi-epoch historical image blocks pose estimation
Document type: Article/Communication
Authors: Lulin Zhang, Author; Ewelina Rupnik, Author; Marc Pierrot-Deseilligny, Author
Publication year: 2020
Projects: DISRUPT / Klinger, Yann
Conference: ISPRS 2020, Commission 2, virtual Congress, Imaging today foreseeing tomorrow, 31/08/2020 - 02/09/2020, Nice (online), France, Annals Commission 2
Article on page(s): pp 127 - 134
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN descriptor terms] comparative analysis
[IGN descriptor terms] image matching
[IGN descriptor terms] point matching
[IGN descriptor terms] image block
[IGN descriptor terms] pose estimation
[IGN descriptor terms] aerial image
[IGN descriptor terms] multidimensional similarity measure
[IGN descriptor terms] digital surface model
[IGN descriptor terms] Pézenas
[IGN descriptor terms] ground control point
[IGN descriptor terms] tie point (imagery)
[IGN descriptor terms] SIFT (algorithm)
Abstract: (author) Historical aerial imagery plays an important role in providing unique information about the evolution of our landscapes. It possesses many positive qualities such as high spatial resolution, stereoscopic configuration and short time interval. Self-calibration remains a main bottleneck for achieving the intrinsic value of historical imagery, as it involves certain underdeveloped research points such as detecting inter-epoch tie-points. In this research, we present a novel algorithm for detecting inter-epoch tie-points in historical images which does not rely on any auxiliary data. Using SIFT-detected keypoints, we perform matching across epochs by interchangeably estimating and imposing that points follow two mathematical models: at first a 2D spatial similarity, then a 3D spatial similarity. We import GCPs to quantitatively evaluate our results with Digital Elevation Models (DEM) of differences (abbreviated as DoD) in an absolute reference frame, and compare the results of our method with two other methods that use either traditional SIFT or a few virtual GCPs. The experiments show that far more correct inter-epoch tie-points can be extracted with our guided technique. Qualitative and quantitative results are reported.
Record number: A2020-411
Author affiliation: LaSTIG MATIS (2012-2019)
Other associated URL: to HAL
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-V-2-2020-127-2020
Online publication date: 03/08/2020
Online: https://doi.org/10.5194/isprs-annals-V-2-2020-127-2020
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95081
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > V-2 (August 2020) . - pp 127 - 134 [article]
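The 2D step of the guided matching (estimate one global 2D spatial similarity and keep only the inter-epoch SIFT matches consistent with it) can be sketched with OpenCV as below. The thresholds and the function interface are assumptions for illustration; the authors' method additionally imposes a 3D spatial similarity, which is not shown here.

```python
# Minimal sketch (not the authors' implementation): filter putative inter-epoch
# SIFT matches by requiring agreement with a single 2D similarity transform
# (rotation, uniform scale, translation) estimated with RANSAC.
import cv2
import numpy as np

def guided_2d_matches(img_a, img_b, ratio=0.8, ransac_thresh=3.0):
    """img_a, img_b: grayscale images (numpy arrays) from two epochs."""
    sift = cv2.SIFT_create()
    kpa, da = sift.detectAndCompute(img_a, None)
    kpb, db = sift.detectAndCompute(img_b, None)

    # Brute-force matching with Lowe's ratio test.
    knn = cv2.BFMatcher().knnMatch(da, db, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    pts_a = np.float32([kpa[m.queryIdx].pt for m in good])
    pts_b = np.float32([kpb[m.trainIdx].pt for m in good])

    # Inliers of the 2D similarity are the candidate inter-epoch tie-points.
    _, inliers = cv2.estimateAffinePartial2D(
        pts_a, pts_b, method=cv2.RANSAC, ransacReprojThreshold=ransac_thresh)
    if inliers is None:
        return []
    return [m for m, keep in zip(good, inliers.ravel()) if keep]
```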
Towards structureless bundle adjustment with two- and three-view structure approximation / Ewelina Rupnik in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
[article]
Title: Towards structureless bundle adjustment with two- and three-view structure approximation
Document type: Article/Communication
Authors: Ewelina Rupnik, Author; Marc Pierrot-Deseilligny, Author
Publication year: 2020
Projects: 1-No project
Conference: ISPRS 2020, Commission 2, virtual Congress, Imaging today foreseeing tomorrow, 31/08/2020 - 02/09/2020, Nice (online), France, Annals Commission 2
Article on page(s): pp 71 - 78
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] approximation
[IGN descriptor terms] bundle adjustment
[IGN descriptor terms] pose estimation
[IGN descriptor terms] structure-from-motion
Abstract: (author) Global approaches solve SfM problems by independently inferring relative motions, followed by a sequential estimation of global rotations and translations. This is a fast approach but not optimal, because it relies only on pairs and triplets of images and is not a joint optimisation. In this publication we present a methodology that increases the quality of global solutions without the usual computational burden tied to bundle adjustment. We propose an efficient structure approximation approach that relies on relative motions known upfront. Using the approximated structure, we are capable of refining the initial poses at very low computational cost. On different benchmark datasets, and compared with other software solutions, our approach improves processing times while maintaining good accuracy.
Record number: A2020-505
Author affiliation: LaSTIG (2020- )
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-V-2-2020-71-2020
Online publication date: 03/08/2020
Online: https://doi.org/10.5194/isprs-annals-V-2-2020-71-2020
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95646
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > V-2 (August 2020) . - pp 71 - 78 [article]
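The "structure approximation from relative motions known upfront" ingredient can be sketched as a linear triangulation for the two-view case, assuming calibrated cameras and a known relative pose (R, t). The function name and interface below are hypothetical, not the authors' code.

```python
# Minimal sketch (hypothetical): approximate two-view structure by linear
# triangulation from a known relative motion; the resulting 3D points can anchor
# a cheap refinement of the initial poses instead of a full bundle adjustment.
import cv2
import numpy as np

def approximate_two_view_structure(K, R, t, pts1, pts2):
    """K: 3x3 intrinsics; (R, t): pose of camera 2 relative to camera 1;
    pts1, pts2: N x 2 matched pixel coordinates in the two images."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # camera 2 from relative motion
    X_h = cv2.triangulatePoints(P1, P2,
                                np.asarray(pts1, float).T,
                                np.asarray(pts2, float).T)
    return (X_h[:3] / X_h[3]).T                          # N x 3 approximate points
```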
Refractive two-view reconstruction for underwater 3D vision / François Chadebecq in International journal of computer vision, vol 128 n° 5 (May 2020)
[article]
Title: Refractive two-view reconstruction for underwater 3D vision
Document type: Article/Communication
Authors: François Chadebecq, Author; Francisco Vasconcelos, Author; René Lacher, Author; et al.
Publication year: 2020
Article on page(s): pp 1101 - 1117
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image and data acquisition
[IGN descriptor terms] image correction
[IGN descriptor terms] pose estimation
[IGN descriptor terms] instrument calibration
[IGN descriptor terms] underwater image
[IGN descriptor terms] 3D reconstruction
[IGN descriptor terms] water refraction
[IGN descriptor terms] structure-from-motion
[IGN descriptor terms] exposure time
[IGN descriptor terms] stereoscopic vision
Abstract: (author) Recovering 3D geometry from cameras in underwater applications involves the Refractive Structure-from-Motion problem, where the non-linear distortion of light induced by a change of medium density invalidates the single-viewpoint assumption. The pinhole-plus-distortion camera projection model suffers from a systematic geometric bias since refractive distortion depends on object distance. This leads to inaccurate camera pose and 3D shape estimation. To account for refraction, it is possible to use the axial camera model or to explicitly consider one or multiple parallel refractive interfaces whose orientations and positions with respect to the camera can be calibrated. Although it has been demonstrated that the refractive camera model is well suited for underwater imaging, Refractive Structure-from-Motion remains particularly difficult to use in practice when considering the seldom-studied case of a camera with a flat refractive interface. Our method applies to the case of underwater imaging systems whose entrance lens is in direct contact with the external medium. By adopting the refractive camera model, we provide a succinct derivation and expression for the refractive fundamental matrix and use this as the basis for a novel two-view reconstruction method for underwater imaging. For validation we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate its practical application within laboratory settings and for medical applications in fluid-immersed endoscopy. We demonstrate that our approach outperforms the classic two-view Structure-from-Motion method relying on the pinhole-plus-distortion camera model.
Record number: A2020-508
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-019-01218-9
Online publication date: 18/11/2019
Online: https://doi.org/10.1007/s11263-019-01218-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96972
in International journal of computer vision > vol 128 n° 5 (May 2020) . - pp 1101 - 1117 [article]
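For context, the refractive fundamental matrix derived in the paper generalises the standard pinhole epipolar constraint x2^T F x1 = 0. The baseline (non-refractive) version that the authors compare against can be sketched as follows; the thresholds and helper name are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of the classic (non-refractive) two-view constraint: estimate F
# with RANSAC and measure how well matches satisfy x2^T F x1 = 0. The paper's
# refractive fundamental matrix replaces F with a refraction-aware counterpart.
import cv2
import numpy as np

def pinhole_epipolar_residuals(pts1, pts2):
    """pts1, pts2: N x 2 arrays of matched pixel coordinates in the two views."""
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])     # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    residuals = np.abs(np.sum(x2 * (x1 @ F.T), axis=1)) # |x2^T F x1| per match
    return F, residuals, mask.ravel().astype(bool)
```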
Automatic scale estimation of structure from motion based 3D models using laser scalers in underwater scenarios / Klemen Istenič in ISPRS Journal of photogrammetry and remote sensing, vol 159 (January 2020) - Permalink
Robust pose estimation and calibration of catadioptric cameras with spherical mirrors / Sagi Filin in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 1 (January 2020) - Permalink
Simulation and analysis of photogrammetric UAV image blocks: influence of camera calibration error / Yilin Zhou in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2/W5 (May 2019) - Permalink
BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019) - Permalink
BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019) - Permalink
Equivalent constraints for two-view geometry: Pose solution/pure rotation identification and 3D reconstruction / Qi Cai in International journal of computer vision, vol 127 n° 2 (February 2019) - Permalink
The orthographic projection model for pose calibration of long focal images / Laura F. Julià in IPOL Journal, Image Processing On Line, vol 9 (2019) - Permalink
Structure from motion for ordered and unordered image sets based on random k-d forests and global pose estimation / Xin Wang in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019) - Permalink
Vision-based localization with discriminative features from heterogeneous visual data / Nathan Piasco (2019) - Permalink
Depth-based hand pose estimation: Methods, data, and challenges / James Steven Supančič in International journal of computer vision, vol 126 n° 11 (November 2018) - Permalink
3D urban geovisualization: in situ augmented and mixed reality experiments / Alexandre Devaux in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-4 ([19/09/2018]) - Permalink
Image-based synthesis for deep 3D human pose estimation / Grégory Rogez in International journal of computer vision, vol 126 n° 9 (September 2018) - Permalink
Landmark based localization in urban environment / Xiaozhi Qu in ISPRS Journal of photogrammetry and remote sensing, vol 140 (June 2018) - Permalink
SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data / Miroslava Slavcheva in International journal of computer vision, vol 126 n° 6 (June 2018) - Permalink
Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras / David Joseph Tan in International journal of computer vision, vol 126 n° 2-4 (April 2018) - Permalink
A survey on visual-based localization: on the benefit of heterogeneous data / Nathan Piasco in Pattern recognition, vol 74 (February 2018) - Permalink
Adéquation algorithme architecture pour la localisation basée image sur système embarqué / David Vandergucht (2018) - Permalink