Author details
Author: P. Westfeld
Documents available written by this author
Geometrische und stochastische Modelle zur Verarbeitung von 3D-Kameradaten am Beispiel menschlicher Bewegungsanalysen / P. Westfeld (2012)
Title: Geometrische und stochastische Modelle zur Verarbeitung von 3D-Kameradaten am Beispiel menschlicher Bewegungsanalysen
Original title: [Geometric and stochastic models for the processing of 3D camera data using the example of human motion analysis]
Document type: Thesis/HDR
Authors: P. Westfeld, Author
Publisher: Munich: Bayerische Akademie der Wissenschaften
Publication year: 2012
Series: DGK - C
Sub-series: Dissertationen, no. 687
Extent: 283 p.
Format: 21 x 30 cm
ISBN/ISSN/EAN: 978-3-7696-5099-0
General note: Bibliography
Languages: German (ger)
Descriptor: [IGN subject headings] Photogrammetric applications
[IGN descriptor terms] 3D camera
[IGN descriptor terms] bundle adjustment
Abstract: (Author) The three-dimensional documentation of the form and location of any type of object using flexible photogrammetric methods and procedures plays a key role in a wide range of technical-industrial and scientific areas of application. Potential applications include measurement tasks in the automotive, machine building and ship building sectors, the compilation of complex 3D models in the fields of architecture, archaeology and monument preservation, and motion analyses in the fields of flow measurement technology, ballistics and medicine. In the case of close-range photogrammetry, a variety of optical 3D measurement systems are used. Area sensor cameras arranged in single or multi-image configurations are used alongside active triangulation procedures for surface measurement (e.g. using structured light or laser scanner systems).
The use of modulation techniques enables 3D cameras based on photomix detectors or similar principles to simultaneously produce both a grey value image and a range image. Functioning as single image sensors, they deliver spatially resolved surface data at video rate without the need for stereoscopic image matching. In the case of 3D motion analyses in particular, this leads to considerable reductions in complexity and computing time. 3D cameras combine the practicality of a digital camera with the 3D data acquisition potential of conventional surface measurement systems. Despite the relatively low spatial resolution currently achievable, as a mono-sensory real-time depth image acquisition system they represent an interesting alternative in the field of 3D motion analysis.
The use of 3D cameras as measuring instruments requires the modelling of deviations from the ideal projection model, and indeed the processing of the 3D camera data generated requires the targeted adaptation and further development of procedures in the fields of computer graphics and photogrammetry. This Ph. D. thesis therefore focuses on the development of methods of sensor calibration and 3D motion analysis in the context of investigations into inter-human motion behaviour. As a result of its intrinsic design and measurement principle, a 3D camera simultaneously provides amplitude and range data reconstructed from a measurement signal. The simultaneous integration of all data obtained using a 3D camera into an integrated approach is a logical consequence and represents the focus of current procedural development. On the one hand, the complementary characteristics of the observations made support each other due to the creation of a functional context for the measurement channels, which can be expected to lead to increases in accuracy and reliability. On the other, the expansion of the stochastic model to include variance component estimation ensures that the heterogeneous information pool is fully exploited.
The integrated bundle adjustment developed facilitates the definition of precise 3D camera geometry and the estimation of the range-measurement-specific correction parameters required to model the linear, cyclic and latency-related defects of a distance measurement made using a 3D camera.
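The linear and cyclic range-error terms mentioned above can be illustrated with a small sketch. This is not the thesis's calibration model: the modulation wavelength, the harmonic orders and all coefficient values below are illustrative assumptions, chosen only to show the shape of such a correction function (offset and scale plus distance-periodic "wiggling" terms).

```python
import numpy as np

MOD_WAVELENGTH = 7.5  # metres; illustrative value, not taken from the thesis

def corrected_range(d, b):
    """Apply an illustrative linear + cyclic correction to raw distances d.

    b = (b0, b1, b2, b3, b4, b5): offset, scale, and sine/cosine
    coefficients of two distance-periodic error harmonics.
    """
    phase = 2.0 * np.pi * d / MOD_WAVELENGTH
    return (d
            + b[0] + b[1] * d                              # linear part
            + b[2] * np.sin(2 * phase) + b[3] * np.cos(2 * phase)
            + b[4] * np.sin(4 * phase) + b[5] * np.cos(4 * phase))

# hypothetical coefficients, e.g. as estimated by a calibration adjustment
print(corrected_range(np.array([1.0, 2.5, 4.0]),
                      (0.01, -0.002, 0.004, 0.003, 0.002, 0.001)))
```

With all coefficients zero the function returns the raw distance unchanged, which makes the parameterization convenient as an additive correction inside an adjustment.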
The integrated calibration routine jointly adjusts appropriate dimensions across both information channels, and also automatically estimates optimum observation weights. The method is based on the same flexible principle used in self-calibration, does not require spatial object data and therefore foregoes the time-consuming determination of reference distances of superior accuracy. The accuracy analyses carried out confirm the correctness of the proposed functional contexts, but nevertheless exhibit weaknesses in the form of non-parameterized range-measurement-specific errors. This notwithstanding, the future expansion of the mathematical model developed is ensured by its adaptivity and modular implementation. The accuracy of a newly determined 3D point coordinate is approximately 5 mm after calibration. In the case of depth imaging technology - which is influenced by a range of usually simultaneously occurring noise sources - this level of accuracy is very promising, especially in terms of the development of evaluation algorithms based on corrected 3D camera data.
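The automatic estimation of optimum observation weights is, in rigorous form, a variance component estimation. The toy sketch below shows only the core idea under strong simplifying assumptions (two observation groups measuring a single unknown; residual sums divided by simple degrees of freedom rather than rigorous redundancy shares; all names and values invented for illustration): each channel's variance component is re-estimated from its own residuals, and the weights follow from the components.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = 5.0
grp1 = x_true + rng.normal(0.0, 1.0, 50)   # noisy channel (e.g. range)
grp2 = x_true + rng.normal(0.0, 0.1, 50)   # precise channel (e.g. amplitude)

s1 = s2 = 1.0                              # start from equal variance components
for _ in range(20):
    w1, w2 = len(grp1) / s1, len(grp2) / s2              # group weights
    x_hat = (w1 * grp1.mean() + w2 * grp2.mean()) / (w1 + w2)
    # re-estimate each variance component from its own residuals
    s1 = np.sum((grp1 - x_hat) ** 2) / (len(grp1) - 1)
    s2 = np.sum((grp2 - x_hat) ** 2) / (len(grp2) - 1)

print(x_hat, np.sqrt(s1), np.sqrt(s2))
```

The iteration recovers a much larger variance component for the noisy group, so its observations are automatically down-weighted in the combined estimate, which is the behaviour the integrated calibration relies on.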
2.5D Least Squares Tracking (LST) is an integrated spatial and temporal matching method developed within the framework of this Ph. D. thesis for the purpose of evaluating 3D camera image sequences. The algorithm is based on the least squares image matching method already established in photogrammetry, and maps small surface segments of consecutive 3D camera data sets onto one another. The mapping rule has been adapted to the data structure of a 3D camera on the basis of a 2D affine transformation. The closed parameterization combines both grey values and range values in an integrated model. In addition to the affine parameters used to include translation and rotation effects, the scale and inclination parameters model perspective-related deviations caused by distance changes in the line of sight. In a pre-processing phase, the calibration routine developed is used to correct optical and distance-related measurement-specific errors in the input data, and measured slope distances are reduced to horizontal distances. 2.5D LST is an integrated approach, and therefore delivers fully three-dimensional displacement vectors. In addition, the accuracy and reliability data generated by error calculation can be used as decision criteria for integration into an application-specific processing chain. Process validation showed that the integration of complementary data leads to a more accurate, reliable solution to the correspondence problem, especially in the case of difficult contrast ratios within a channel. The accuracy of the scale and inclination parameters directly linked to the distance correction terms improved dramatically. In addition, the expansion of the geometric model led to significant benefits, in particular for the matching of natural, not entirely planar surface segments.
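The core idea of the integrated model can be sketched in a few lines: one 2D affine mapping warps a patch of the grey channel and of the range channel simultaneously, and the residuals of both channels are stacked into a single joint vector. This is an illustrative numpy sketch, not the thesis implementation; the function names, the planar toy frames and the weights `w_g`, `w_d` (stand-ins for the observation weights estimated by variance components) are all assumptions.

```python
import numpy as np

def affine(params, xs, ys):
    """x' = a0 + a1*x + a2*y ; y' = b0 + b1*x + b2*y."""
    a0, a1, a2, b0, b1, b2 = params
    return a0 + a1 * xs + a2 * ys, b0 + b1 * xs + b2 * ys

def bilinear(img, x, y):
    """Bilinearly sample img at float coordinates (x, y)."""
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0]
            + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0]
            + fx * fy * img[y0 + 1, x0 + 1])

def joint_residuals(patch_g, patch_d, grey1, range1, params, xs, ys,
                    w_g=1.0, w_d=1.0):
    """Warp both channels of frame 1 with one affine mapping and stack
    grey and range residuals into a single joint vector."""
    xw, yw = affine(params, xs, ys)
    r_g = bilinear(grey1, xw, yw) - patch_g
    r_d = bilinear(range1, xw, yw) - patch_d
    return np.concatenate([w_g * r_g.ravel(), w_d * r_d.ravel()])

# toy frames: planar grey/range surfaces shifted by one pixel in x
ys, xs = np.mgrid[2:6, 2:6]              # patch coordinates in frame 0
yy, xx = np.mgrid[0:10, 0:10]
grey0, range0 = 3.0 * xx + 1.0 * yy, 0.5 * xx + 0.2 * yy
grey1, range1 = 3.0 * (xx - 1) + 1.0 * yy, 0.5 * (xx - 1) + 0.2 * yy

shift = (1.0, 1.0, 0.0, 0.0, 0.0, 1.0)   # pure x-translation by one pixel
res = joint_residuals(grey0[ys, xs], range0[ys, xs],
                      grey1, range1, shift, xs, ys)
print(np.abs(res).max())
```

In a full tracker, a Gauss-Newton loop would iteratively update the six affine parameters (plus, in the thesis's parameterization, the scale and inclination terms) to drive this joint residual vector to a minimum for both channels at once.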
The area-based object matching and object tracking method developed functions on the basis of 3D camera data gathered without object contact. It is therefore particularly suited to 3D motion analysis tasks in which the extra effort involved in multi-ocular experimental settings and the necessity of object signalling using target marks are to be avoided. The potential of the 3D camera matching approach has been demonstrated in two application scenarios in the field of research into human behaviour. Both in the use of 2.5D LST to mark and then classify hand gestures accompanying verbal communication, and in its implementation in the proposed procedures for determining interpersonal distance and body orientation within the framework of pedagogical research into conflict regulation between pairs of child-age friends, the method facilitates the automatic, effective, objective and high-resolution (from both a temporal and spatial perspective) acquisition and evaluation of behaviourally relevant data.
This Ph. D. thesis proposes the use of a novel 3D range imaging camera to gather data on human behaviour, and presents both a calibration tool developed for data processing purposes and a method for the contact-free determination of dense 3D motion vector fields. It therefore makes a contribution to current efforts in the field of the automated videographic documentation of bodily motion within the framework of dyadic interaction, and shows that photogrammetric methods can also deliver valuable results within the framework of motion evaluation tasks in the as-yet relatively untapped field of behavioural research.
Record number: 14622 Subject area: IMAGERY Nature: Foreign thesis Permalink:
14622_dgk-c-687_westfeld.pdf (Adobe Acrobat PDF)
Photogrammetric techniques for voxel-based flow velocity field measurement / P. Westfeld in Photogrammetric record, vol 26 n° 136 (December 2011 - February 2012)
Title: Photogrammetric techniques for voxel-based flow velocity field measurement
Document type: Article/Communication
Authors: P. Westfeld, Author; H. Maas, Author
Publication year: 2011
Article pages: pp 422 - 438
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetric applications
[IGN descriptor terms] velocity field
[IGN descriptor terms] cube
[IGN descriptor terms] temporal dimension
[IGN descriptor terms] fluid dynamics
[IGN descriptor terms] multi-sensor image
[IGN descriptor terms] least squares method
[IGN descriptor terms] 3D reconstruction
[IGN descriptor terms] tomography
[IGN descriptor terms] affine transformation
[IGN descriptor terms] voxel
Abstract: (Author) The paper presents improved photogrammetric techniques for the determination of 3D flow velocity fields with the goal of optimising the performance, efficiency and flexibility of the method. This includes multi-camera system configuration and calibration, as well as approaches for volumetric reconstruction. The result of the volumetric reconstruction process is a time-resolved, voxel-space representation of tracer particles marking a flow. Using a high-speed camera, a typical data-set consists of 1000 volumetric data-sets per second, each with 1000 x 1000 x 300 voxels. 3D tracking in the time-resolved voxel data is performed by 3D least squares tracking, determining 12 parameters of a 3D affine transformation between cuboids in successive voxel data-sets. Besides the cuboid translation, these parameters also include information on the shear tensor of each cuboid. The tomographic reconstruction methods and the cuboid tracking have been implemented and validated with both simulated and real data in an experimental set-up of a vortex ring in a water tank.
Record number: A2011-495 Subject area: IMAGERY Nature: Article Online: http://dx.doi.org/10.1111/j.1477-9730.2011.00656.x Permalink:
in Photogrammetric record > vol 26 no. 136 (December 2011 - February 2012), pp 422 - 438 [article]
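The 12-parameter 3D affine transformation described in the abstract above has a compact form: X' = A X + t, where the 3x3 matrix A carries 9 parameters (rotation, scale and shear) and the vector t the 3 translation parameters. A minimal sketch, with an invented example cuboid (the specific values are illustrative, not from the paper):

```python
import numpy as np

def affine3d(points, A, t):
    """Apply X' = A @ X + t to an (N, 3) array of voxel coordinates."""
    return points @ A.T + t

cuboid = np.array([[0., 0., 0.],
                   [1., 0., 0.],
                   [0., 1., 0.],
                   [0., 0., 1.]])

# pure translation: A = I, t carries the cuboid displacement vector
t = np.array([2.0, 0.0, -1.0])
moved = affine3d(cuboid, np.eye(3), t)

# the deviation of A from the identity carries the deformation part;
# here a small shear of the x axis along y, A[0, 1] = 0.1
A_shear = np.eye(3)
A_shear[0, 1] = 0.1
sheared = affine3d(cuboid, A_shear, np.zeros(3))
print(moved)
print(sheared)
```

In a least squares tracking framework the nine entries of A estimated between successive voxel data-sets provide the per-cuboid shear information mentioned in the abstract, in addition to the translation t.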