Descriptor
Documents available in this category (739)


3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title: 3D browsing of wide-angle fisheye images under view-dependent perspective correction
Document type: Article/Communication
Authors: Mingyi Huang, Author; Jun Wu, Author; Zhiyong Peng, Author; et al.
Publication year: 2022
Pages: pp 185 - 207
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] correction d'image
[Termes IGN] distorsion d'image
[Termes IGN] étalonnage d'instrument
[Termes IGN] image hémisphérique
[Termes IGN] objectif très grand angulaire
[Termes IGN] panorama sphérique
[Termes IGN] perspective
[Termes IGN] processeur graphique
[Termes IGN] projection orthogonale
[Termes IGN] projection perspective
Abstract: (author) This paper presents a novel technique for 3D browsing of wide-angle fisheye images using view-dependent perspective correction (VDPC). First, the fisheye imaging model with interior orientation parameters (IOPs) is established. Thereafter, a VDPC model for wide-angle fisheye images is proposed that adaptively selects correction planes for different areas of the image format. Finally, the wide-angle fisheye image is re-projected to obtain the visual effect of browsing in hemispherical space, using the VDPC model and the IOPs of the fisheye camera calibrated with the ideal projection ellipse constraint. The proposed technique is tested on several images downloaded from the internet with unknown IOPs. Results show that the proposed VDPC model achieves a more uniform perspective correction of fisheye images across different areas, and preserves detailed information with greater flexibility than the traditional perspective projection conversion (PPC) technique. The proposed algorithm generates a corrected image of 512 × 512 pixels at 58 fps when run on a central processing unit (CPU) alone. With an ordinary graphics processing unit (GPU), a corrected image of 1024 × 1024 pixels can be generated at 60 fps. Therefore, smooth 3D visualisation of a fisheye image can be realised on a computer using the proposed algorithm, which may benefit applications such as panorama surveillance, robot navigation, etc.
Record number: A2022-518
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1111/phor.12410
Online publication date: 10/05/2022
Online: https://doi.org/10.1111/phor.12410
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101068
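The correction described in this abstract ultimately re-projects fisheye pixels onto perspective planes. As a rough illustration of the underlying geometry only (not the authors' VDPC model, whose plane selection is adaptive), the sketch below maps a pixel of a virtual perspective view back into a fisheye image under an ideal equidistant lens model r = f·θ; all function and parameter names are hypothetical.

```python
import math

def perspective_to_fisheye(u, v, f_persp, f_fish, cx, cy):
    """Map a pixel (u, v) of a virtual perspective view to the source
    fisheye image, assuming an ideal equidistant model r = f * theta
    and a shared principal point (cx, cy)."""
    # Ray through the virtual perspective pixel
    x, y, z = u - cx, v - cy, f_persp
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    r = f_fish * theta                        # equidistant projection radius
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

A full viewer would apply such a mapping per output pixel (on the GPU, for frame rates like those quoted in the abstract) and interpolate the fisheye image at the resulting coordinates.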
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 185 - 207
[article]
Calibration radiométrique et géométrique d'une caméra fish-eye pour la mesure de l'hémisphère de luminance incidente / Manchun Lei (2022)
Title: Calibration radiométrique et géométrique d'une caméra fish-eye pour la mesure de l'hémisphère de luminance incidente
Document type: Report
Authors: Manchun Lei; Christophe Meynard, Author; Jean-Michaël Muller, Author; Christian Thom, Author
Publisher: Saint-Mandé : Institut national de l'information géographique et forestière - IGN
Publication year: 2022
Extent: 34 p.
General note: bibliography
Languages: French (fre)
Descriptor: [Vedettes matières IGN] Acquisition d'image(s) et de donnée(s)
[Termes IGN] caméra numérique
[Termes IGN] étalonnage géométrique
[Termes IGN] étalonnage radiométrique
[Termes IGN] image hémisphérique
[Termes IGN] MicMac
Abstract: (author) [introduction] [...] We have developed a hemispherical incident-luminance imager based on a lightweight fisheye camera. Its distinguishing feature is its light weight, and therefore its mobility, which makes it practical for measuring the hemisphere of incident luminance on different surfaces (ground and façades) in the environment. This report documents the radiometric and geometric calibration procedure that was carried out, from the theoretical description through to practical considerations. The validation phase is also presented.
Contents: 1 Introduction
2 Description of the instruments
3 Dark-noise calibration
4 Radiometric calibration
5 Geometric calibration
Record number: 17734
Author affiliation: UGE-LASTIG (2020- )
Theme: IMAGERIE
Nature: Report
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100625
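The dark-noise step listed in this report's contents typically amounts to averaging frames captured in darkness into a master frame and subtracting it from each measurement. The report's actual procedure is not reproduced here; this is a generic, pure-Python sketch with hypothetical names.

```python
def dark_master(frames):
    """Average a stack of dark frames (nested lists, row-major)
    pixel-wise into a dark master frame."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

def subtract_dark(raw, master):
    """Subtract the dark master from a raw frame, clipping at zero
    so sensor noise cannot produce negative signal."""
    return [[max(0.0, raw[r][c] - master[r][c])
             for c in range(len(raw[0]))] for r in range(len(raw))]
```

In practice the dark signal depends on exposure time and temperature, so a master frame would be built per acquisition setting before the radiometric calibration proper.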
Title: Event-driven feature detection and tracking for visual SLAM
Document type: Thesis/HDR
Authors: Ignacio Alzugaray, Author
Publisher: Zurich : Eidgenossische Technische Hochschule ETH - Ecole Polytechnique Fédérale de Zurich EPFZ
Publication year: 2022
General note: bibliography; thesis submitted to attain the degree of Doctor of Sciences of ETH Zurich
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] caméra d'événement
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] détection d'objet
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image floue
[Termes IGN] reconnaissance de formes
[Termes IGN] séquence d'images
[Termes IGN] vision par ordinateur
Decimal index: THESE Theses and HDR
Abstract: (author) Traditional frame-based cameras have become the de facto sensor of choice for a multitude of applications employing Computer Vision due to their compactness, low cost, ubiquity, and ability to provide information-rich exteroceptive measurements. Despite their dominance in the field, these sensors exhibit limitations in common, real-world scenarios where detrimental effects, such as motion blur during high-speed motion or over-/underexposure in scenes with poor illumination, are prevalent. Challenging the dominance of traditional cameras, the recent emergence of bioinspired event cameras has opened up exciting research possibilities for robust perception due to their high-speed sensing, High-Dynamic-Range capabilities, and low power consumption. Despite their promising characteristics, event cameras present numerous challenges due to their unique output: a sparse and asynchronous stream of events, only capturing incremental perceptual changes at individual pixels. This radically different sensing modality renders most of the traditional Computer Vision algorithms incompatible without substantial prior adaptation, as they are initially devised for processing sequences of images captured at a fixed frame rate. Consequently, the bulk of existing event-based algorithms in the literature have opted to discretize the event stream into batches and process them sequentially, effectively reverting to frame-like representations in an attempt to mimic the processing of image sequences from traditional sensors. Such event-batching algorithms have demonstrably outperformed alternative frame-based algorithms in scenarios where the quality of conventional intensity images is severely compromised, unveiling the inherent potential of these new sensors and popularizing them.
To date, however, many newly designed event-based algorithms still rely on a contrived discretization of the event stream for its processing, suggesting that the full potential of event cameras is yet to be harnessed by processing their output more naturally. This dissertation departs from the mere adaptation of traditional frame-based approaches and advocates instead for the development of new algorithms integrally designed for event cameras to fully exploit their advantageous characteristics. In particular, the focus of this thesis lies on describing a series of novel strategies and algorithms that operate in a purely event-driven fashion, i.e. processing each event as soon as it is generated, without any intermediate buffering of events into arbitrary batches and thus without any additional latency in their processing. Such event-driven processes present additional challenges compared to their simpler event-batching counterparts, which, in turn, can largely be attributed to the requirement to produce reliable results at event rate, entailing significant practical implications for their deployment in real-world applications. The body of this thesis addresses the design of event-driven algorithms for efficient and asynchronous feature detection and tracking with event cameras, covering along the way crucial elements of pattern recognition and data association for this emerging sensing modality. In particular, a significant portion of this thesis is devoted to the study of visual corners for event cameras, leading to the design of innovative event-driven approaches for their detection and tracking as corner-events. Moreover, the presented research also investigates the use of generic patch-based features and their event-driven tracking for the efficient retrieval of high-quality feature tracks.
All the algorithms developed in this thesis serve as crucial stepping stones towards a completely event-driven, feature-based Simultaneous Localization And Mapping (SLAM) pipeline. This dissertation extends established concepts from state-of-the-art, event-driven methods and further explores the limits of the event-driven paradigm in realistic monocular setups. While the presented approaches rely solely on event data, the insights gained are seminal for future investigations targeting the combination of event-based vision with other, complementary sensing modalities. The research conducted here paves the way towards a new family of event-driven algorithms that operate efficiently, robustly, and in a scalable manner, envisioning a potential paradigm shift in event-based Computer Vision.
Contents: 1- Introduction
2- Contribution
3- Conclusion and outlook
Record number: 28699
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Foreign thesis
Thesis note: PhD Thesis : Sciences : ETH Zurich : 2022
DOI: none
Online: https://www.research-collection.ethz.ch/handle/20.500.11850/541700
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100470
Semi-automatic reconstruction of object lines using a smartphone’s dual camera / Mohammed Aldelgawy in Photogrammetric record, Vol 36 n° 176 (December 2021)
[article]
Title: Semi-automatic reconstruction of object lines using a smartphone’s dual camera
Document type: Article/Communication
Authors: Mohammed Aldelgawy, Author; Isam Abu-Qasmieh, Author
Publication year: 2021
Pages: pp 381 - 401
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] acquisition d'images
[Termes IGN] appariement d'images
[Termes IGN] chambre non métrique
[Termes IGN] correction d'image
[Termes IGN] étalonnage
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] forme linéaire
[Termes IGN] intersection spatiale
[Termes IGN] objectif grand angulaire
[Termes IGN] reconstruction d'image
[Termes IGN] téléphone intelligent
[Termes IGN] transformation de Hough
Abstract: (author) In this paper, the possibility of reconstructing object lines using a smartphone’s rear dual camera (wide-angle and telephoto) was examined by designing a semi-automatic system. After calibrating both cameras, six scenes for each of three objects were captured and rectified. Object lines were categorised into six groups based on the distance and angle to the dual camera system. Image lines were extracted using the linear Hough transform technique and points of intersection detected. Stereo pairing of conjugate points then allowed the calculation of object coordinates, and the lengths of object lines were compared to their lengths measured by a digital caliper. The best line reconstruction results were achieved with the smallest distance and angle to the dual camera system.
Record number: A2021-915
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1111/phor.12388
Online publication date: 19/10/2021
Online: https://doi.org/10.1111/phor.12388
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99330
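This abstract extracts image lines with the linear Hough transform and then detects their points of intersection. In Hough normal form a line is x·cosθ + y·sinθ = ρ, so intersecting two detected lines is a 2×2 linear solve. A minimal sketch of that intersection step (hypothetical names, not the authors' code):

```python
import math

def line_intersection(rho1, theta1, rho2, theta2):
    """Intersect two image lines given in Hough normal form
    x*cos(theta) + y*sin(theta) = rho.
    Returns (x, y), or None when the lines are near-parallel."""
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines: no unique intersection
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return x, y
```

The (ρ, θ) pairs would come from a Hough accumulator over the rectified images; intersecting pairs of peaks yields the candidate corner points that are then stereo-matched between the two cameras.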
in Photogrammetric record > Vol 36 n° 176 (December 2021) . - pp 381 - 401
[article]
Determining optimal photogrammetric adjustment of images obtained from a fixed-wing UAV / Karolina Pargiela in Photogrammetric record, Vol 36 n° 175 (September 2021)
[article]
Title: Determining optimal photogrammetric adjustment of images obtained from a fixed-wing UAV
Document type: Article/Communication
Authors: Karolina Pargiela, Author; Antoni Rzonca, Author
Publication year: 2021
Pages: pp 285 - 302
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] acquisition d'images
[Termes IGN] chevauchement
[Termes IGN] compensation par faisceaux
[Termes IGN] image captée par drone
[Termes IGN] obturateur
[Termes IGN] point d'appui
[Termes IGN] Pologne
[Termes IGN] positionnement cinématique en temps réel
[Termes IGN] structure-from-motion
[Termes IGN] système de numérisation mobile
Abstract: (author) Photogrammetry with unmanned aerial vehicles (UAVs) has become a source of data with extensive applications. Accuracy is of the utmost significance, yet the intention is also to find the most economical solutions for data acquisition. The objective of the research was the analysis of various variants of the bundle block adjustment. The analysis concerns data which is diversified with respect to the type of shutter (rolling/global), the measurement of exterior orientation elements, the overlap and the number of ground control points (GCPs).
Record number: A2021-691
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1111/phor.12377
Online publication date: 07/08/2021
Online: https://doi.org/10.1111/phor.12377
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98487
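The bundle block adjustment variants compared in this article all minimise reprojection residuals between observed image points and projected 3D points over the whole block. As a toy illustration of a single such residual (a simplified pinhole camera with identity rotation and principal point at the origin; names are hypothetical, and a real adjustment also estimates rotations, IOPs and GCP constraints):

```python
def reprojection_residual(point3d, cam_pos, f, observed):
    """Residual between an observed image point and the projection of a
    3D point through a simplified pinhole camera: u = f*X/Z, v = f*Y/Z
    in camera coordinates. Bundle adjustment minimises the sum of
    squares of such residuals over all images and points."""
    X, Y, Z = (p - c for p, c in zip(point3d, cam_pos))
    u, v = f * X / Z, f * Y / Z
    return u - observed[0], v - observed[1]
```

Factors varied in the article (shutter model, measured exterior orientation, overlap, number of GCPs) change which parameters of this system are observed, constrained, or left free in the least-squares solution.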
in Photogrammetric record > Vol 36 n° 175 (September 2021) . - pp 285 - 302
[article]
Background segmentation in multicolored illumination environments / Nikolas Ladas in The Visual Computer, vol 37 n° 8 (August 2021)
Permalink
Implementation of close range photogrammetry using modern non-metric digital cameras for architectural documentation / Mariem A. Elhalawani in Geodesy and cartography, vol 47 n° 1 (January 2021)
Permalink
Using automated vegetation cover estimation from close-range photogrammetric point clouds to compare vegetation location properties in mountain terrain / R. Niederheiser in GIScience and remote sensing, vol 58 n° 1 (February 2021)
Permalink
Permalink
Model based signal processing techniques for nonconventional optical imaging systems / Daniele Picone (2021)
Permalink
Vers un protocole de calibration de caméras statiques à l'aide d'un drone / Jean-François Villeforceix (2021)
Permalink
Towards online UAS‐based photogrammetric measurements for 3D metrology inspection / Fabio Menna in Photogrammetric record, vol 35 n° 172 (December 2020)
Permalink
Structure from motion for complex image sets / Mario Michelini in ISPRS Journal of photogrammetry and remote sensing, vol 166 (August 2020)
Permalink