Descriptor
Documents available in this category (8)



Deep image deblurring: A survey / Kaihao Zhang in International journal of computer vision, vol 130 n° 9 (September 2022)
[article]
Title: Deep image deblurring: A survey
Document type: Article/Communication
Authors: Kaihao Zhang; Wenqi Ren; Wenhan Luo; et al.
Publication year: 2022
Pagination: pp 2103 - 2130
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image sharpening
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] deconvolution
[IGN terms] kernel estimation
[IGN terms] noise filtering
[IGN terms] blurred image
[IGN terms] image quality
[IGN terms] generative adversarial network
[IGN terms] taxonomy
[IGN terms] computer vision
Abstract: (author) Image deblurring is a classic problem in low-level computer vision with the aim to recover a sharp image from a blurred input image. Advances in deep learning have led to significant progress in solving this problem, and a large number of deblurring networks have been proposed. This paper presents a comprehensive and timely survey of recently published deep-learning based image deblurring approaches, aiming to serve the community as a useful literature review. We start by discussing common causes of image blur, introduce benchmark datasets and performance metrics, and summarize different problem formulations. Next, we present a taxonomy of methods using convolutional neural networks (CNN) based on architecture, loss function, and application, offering a detailed review and comparison. In addition, we discuss some domain-specific deblurring applications including face images, text, and stereo image pairs. We conclude by discussing key challenges and future research directions.
Record number: A2022-638
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-022-01633-5
Online publication date: 25/06/2022
Online: https://doi.org/10.1007/s11263-022-01633-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101444
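The record above indexes classical building blocks of this literature such as "deconvolution" and "kernel estimation". As a point of reference for the pre-deep-learning baseline the survey builds on, here is a minimal sketch of Wiener deconvolution with a known blur kernel. This is a generic textbook formulation, not a method from the survey; the box PSF and the regularization constant `k` are illustrative choices.

```python
import numpy as np

def wiener_deblur(blurred, kernel, k=0.01):
    """Frequency-domain Wiener deconvolution with a known, image-sized PSF.

    k is the noise-to-signal regularization constant (illustrative value).
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k), applied in the frequency domain
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Synthetic check: blur a random image with a 5-pixel horizontal box PSF
# (circular convolution via FFT), then restore it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[0, :5] = 1.0 / 5.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf, k=1e-4)
```

With a known kernel and low noise the restoration error should drop well below the blur error; the deep methods surveyed above address the much harder case where the kernel is unknown and spatially varying.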
in International journal of computer vision > vol 130 n° 9 (September 2022) . - pp 2103 - 2130
[article]

HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion / Kun Li in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
[article]
Title: HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion
Document type: Article/Communication
Authors: Kun Li; Wei Zhang; Dian Yu; Xin Tian
Publication year: 2022
Pagination: pp 30 - 44
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] blurred image
[IGN terms] hyperspectral image
[IGN terms] multiband image
[IGN terms] panchromatic image
[IGN terms] pansharpening (image fusion)
[IGN terms] deep neural network
Abstract: (author) Traditional approaches mainly fuse a hyperspectral image (HSI) with a high-resolution multispectral image (MSI) to improve the spatial resolution of the HSI. However, such improvement in the spatial resolution of HSIs is still limited because the spatial resolution of MSIs remains low. To further improve the spatial resolution of HSIs, we propose HyperNet, a deep network for the fusion of HSI, MSI, and panchromatic image (PAN), which effectively injects the spatial details of an MSI and a PAN into an HSI while preserving the spectral information of the HSI. Thus, we design HyperNet on the basis of a uniform fusion strategy to solve the problem of complex fusion of three types of sources (i.e., HSI, MSI, and PAN). In particular, the spatial details of the MSI and the PAN are extracted by multiple specially designed multiscale-attention-enhance blocks in which multi-scale convolution is used to adaptively extract features from different receptive fields, and two attention mechanisms are adopted to enhance the representation capability of features along the spectral and spatial dimensions, respectively. Through the capability of feature reuse and interaction in a specially designed dense-detail-insertion block, the previously extracted features are subsequently injected into the HSI according to the unidirectional feature propagation among the layers of dense connection. Finally, we construct an efficient loss function by integrating the multi-scale structural similarity index with the norm, which drives HyperNet to generate high-quality results with a good balance between spatial and spectral qualities. Extensive experiments on simulated and real data sets qualitatively and quantitatively demonstrate the superiority of HyperNet over other state-of-the-art methods.
Record number: A2022-272
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.04.001
Online publication date: 07/04/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.04.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100461
in ISPRS Journal of photogrammetry and remote sensing > vol 188 (June 2022) . - pp 30 - 44
[article]
Copies (3)
Barcode | Call number | Support | Location | Section | Availability
081-2022061 | SL | Journal | Centre de documentation | Journals room | Available
081-2022063 | DEP-RECP | Journal | LaSTIG | Unit deposit | Not for loan
081-2022062 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
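HyperNet's core idea, per the abstract above, is to inject MSI/PAN spatial detail into the upsampled HSI while preserving its spectra. The fixed high-pass injection rule below is only a toy stand-in for that learned injection, to make the data flow concrete; the nearest-neighbour upsampling, box low-pass filter, and `ratio` parameter are illustrative assumptions, not the paper's design.

```python
import numpy as np

def inject_details(hsi_lr, pan, ratio=4):
    """Toy detail injection: upsample each HSI band, then add the
    panchromatic high-pass residual to every band. HyperNet *learns*
    this injection; the fixed rule here only illustrates the data flow."""
    H, W = pan.shape
    # nearest-neighbour upsampling of the low-resolution cube
    hsi_up = np.repeat(np.repeat(hsi_lr, ratio, axis=0), ratio, axis=1)
    # box-blur the PAN; the residual (PAN - low-pass) carries spatial detail
    k = ratio
    pad = np.pad(pan, k, mode="edge")
    low = np.zeros_like(pan)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            low += pad[dy:dy + H, dx:dx + W]
    low /= (2 * k + 1) ** 2
    return hsi_up + (pan - low)[..., None]

rng = np.random.default_rng(1)
hsi = rng.random((8, 8, 3))     # low-resolution hyperspectral cube
pan = rng.random((32, 32))      # high-resolution panchromatic image
fused = inject_details(hsi, pan, ratio=4)
```

Because the injected residual is high-pass, the per-band low-frequency (spectral) content of the upsampled HSI is largely preserved, which is the balance the paper's loss function also targets.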
Title: Event-driven feature detection and tracking for visual SLAM
Document type: Thesis/HDR
Authors: Ignacio Alzugaray
Publisher: Zurich: Eidgenössische Technische Hochschule (ETH) - Ecole Polytechnique Fédérale de Zurich (EPFZ)
Publication year: 2022
General note: bibliography; thesis submitted to attain the degree of Doctor of Sciences of ETH Zurich
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] event camera
[IGN terms] simultaneous localization and mapping
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] blurred image
[IGN terms] pattern recognition
[IGN terms] image sequence
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) Traditional frame-based cameras have become the de facto sensor of choice for a multitude of applications employing Computer Vision due to their compactness, low cost, ubiquity, and ability to provide information-rich exteroceptive measurements. Despite their dominance in the field, these sensors exhibit limitations in common, real-world scenarios where detrimental effects, such as motion blur during high-speed motion or over-/underexposure in scenes with poor illumination, are prevalent. Challenging the dominance of traditional cameras, the recent emergence of bioinspired event cameras has opened up exciting research possibilities for robust perception due to their high-speed sensing, High-Dynamic-Range capabilities, and low power consumption. Despite their promising characteristics, event cameras present numerous challenges due to their unique output: a sparse and asynchronous stream of events, only capturing incremental perceptual changes at individual pixels. This radically different sensing modality renders most of the traditional Computer Vision algorithms incompatible without substantial prior adaptation, as they are initially devised for processing sequences of images captured at fixed frame-rate. Consequently, the bulk of existing event-based algorithms in the literature have opted to discretize the event stream into batches and process them sequentially, effectively reverting to frame-like representations in an attempt to mimic the processing of image sequences from traditional sensors. Such event-batching algorithms have demonstrably outperformed other alternative frame-based algorithms in scenarios where the quality of conventional intensity images is severely compromised, unveiling the inherent potential of these new sensors and popularizing them.
To date, however, many newly designed event-based algorithms still rely on a contrived discretization of the event stream for its processing, suggesting that the full potential of event cameras is yet to be harnessed by processing their output more naturally. This dissertation departs from the mere adaptation of traditional frame-based approaches and advocates instead for the development of new algorithms integrally designed for event cameras to fully exploit their advantageous characteristics. In particular, the focus of this thesis lies on describing a series of novel strategies and algorithms that operate in a purely event-driven fashion, i.e., processing each event as soon as it gets generated without any intermediate buffering of events into arbitrary batches, thus avoiding any additional latency in their processing. Such event-driven processes present additional challenges compared to their simpler event-batching counterparts, which can largely be attributed to the requirement to produce reliable results at event-rate, entailing significant practical implications for their deployment in real-world applications. The body of this thesis addresses the design of event-driven algorithms for efficient and asynchronous feature detection and tracking with event cameras, covering along the way crucial elements of pattern recognition and data association for this emerging sensing modality. In particular, a significant portion of this thesis is devoted to the study of visual corners for event cameras, leading to the design of innovative event-driven approaches for their detection and tracking as corner-events. Moreover, the presented research also investigates the use of generic patch-based features and their event-driven tracking for the efficient retrieval of high-quality feature tracks.
All the developed algorithms in this thesis serve as crucial stepping stones towards a completely event-driven, feature-based Simultaneous Localization And Mapping (SLAM) pipeline. This dissertation extends upon established concepts from state-of-the-art, event-driven methods and further explores the limits of the event-driven paradigm in realistic monocular setups. While the presented approaches solely rely on event-data, the gained insights are seminal to future investigations targeting the combination of event-based vision with other, complementary sensing modalities. The research conducted here paves the way towards a new family of event-driven algorithms that operate efficiently, robustly, and in a scalable manner, envisioning a potential paradigm shift in event-based Computer Vision.
Contents:
1- Introduction
2- Contribution
3- Conclusion and outlook
Record number: 28699
Author affiliation: non IGN
Theme: IMAGERY
Nature: Foreign thesis
Thesis note: PhD Thesis: Sciences: ETH Zurich: 2022
DOI: none
Online: https://www.research-collection.ethz.ch/handle/20.500.11850/541700
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100470

Correction du flou de mouvement sur des images prises de nuit depuis un véhicule de numérisation terrestre / Vincent Daval in Revue Française de Photogrammétrie et de Télédétection, n° 215 (May - August 2017)
[article]
Title: Correction du flou de mouvement sur des images prises de nuit depuis un véhicule de numérisation terrestre
Document type: Article/Communication
Authors: Vincent Daval; Lâmân Lelégard; Mathieu Brédif
Publication year: 2017
Pagination: pp 53 - 64
General note: bibliography
Languages: French (fre)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] digital camera
[IGN terms] automatic correction
[IGN terms] image correction
[IGN terms] lidar data
[IGN terms] histogram stretching
[IGN terms] blurred image
[IGN terms] mobile surveying
[IGN terms] terrestrial photography
[IGN terms] image reconstruction
[IGN terms] mobile mapping system
Abstract: (author) This work is a first step toward a method for correcting the motion blur observed in long-exposure images taken by a mobile mapping vehicle. In the proposed approach, we take into account both the inertial data from accelerometers and gyroscopes and the scene-depth variation data provided by lidar measurements or a 3D model. Our algorithm uses all the available data to determine, as accurately as possible, the point spread function at each pixel. We also propose a first attempt at blur correction using the non-uniform, spatially variant blur kernels obtained through a spatial reconstruction approach. Our method is currently validated on noise-free blurred views produced as synthetic images that reproduce the actual motion of the vehicle. Finally, we outline how we plan to correct the full image and further improve this initial work.
Record number: A2017-527
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueNat
DOI: 10.52638/rfpt.2017.354
Online publication date: 16/08/2017
Online: https://doi.org/10.52638/rfpt.2017.354
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=86551
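The abstract describes estimating a spatially variant point spread function per pixel from inertial and depth data. As a rough illustration of why depth matters, the toy forward model below applies a horizontal box PSF whose length scales inversely with scene depth (apparent motion ~ camera translation / depth); the `cam_shift` parameter and the purely horizontal motion are simplifying assumptions, not the authors' model.

```python
import numpy as np

def variant_motion_blur(image, depth, cam_shift=8.0):
    """Toy spatially variant motion blur: each pixel gets a horizontal
    box PSF whose length is inversely proportional to scene depth.
    Purely illustrative; real kernels also depend on rotation and timing."""
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # apparent motion in pixels shrinks as depth grows
            length = max(1, int(round(cam_shift / depth[y, x])))
            xs = np.clip(np.arange(x, x + length), 0, w - 1)
            out[y, x] = image[y, xs].mean()
    return out

rng = np.random.default_rng(2)
img = rng.random((16, 16))
far = variant_motion_blur(img, np.full((16, 16), 8.0))   # 1-pixel PSF: no blur
near = variant_motion_blur(img, np.ones((16, 16)))       # 8-pixel PSF: strong blur
```

Inverting such a blur requires a different kernel at every pixel, which is why the paper feeds depth (lidar / 3D model) and inertial measurements into the per-pixel PSF estimate.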
in Revue Française de Photogrammétrie et de Télédétection > n° 215 (May - August 2017) . - pp 53 - 64
[article]
Copies (2)
Barcode | Call number | Support | Location | Section | Availability
018-2017021 | SL | Journal | Centre de documentation | Journals room | Available
018-2017022 | SL | Journal | Centre de documentation | Journals room | Available

Monitoring 3D vibrations in structures using high-resolution blurred imagery / David M.J. McCarthy in Photogrammetric record, vol 31 n° 155 (September - November 2016)
[article]
Title: Monitoring 3D vibrations in structures using high-resolution blurred imagery
Document type: Article/Communication
Authors: David M.J. McCarthy; Jim H. Chandler; Alessandro Palmeri
Publication year: 2016
Pagination: pp 304 - 324
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Terrestrial photogrammetry
[IGN terms] building deformation
[IGN terms] 3D spatial data
[IGN terms] blurred image
[IGN terms] strength of materials
[IGN terms] structure monitoring
[IGN terms] statistical test
[IGN terms] vibration
Abstract: (author) Photogrammetry has been used in the past to monitor the laboratory testing of civil engineering structures using multiple image-based sensors. This has been successful, but detecting vibrations during dynamic structural tests has proved more challenging because they usually depend on high-speed cameras, which often results in lower image resolutions and reduced accuracy. To overcome this limitation, a novel approach has been devised to take measurements from blurred images in long-exposure photographs. The motion of the structure is captured in individual motion-blurred images without dependence on imaging speed. A bespoke algorithm then determines each measurement point's motion. Using photogrammetric techniques, a model structure's motion with respect to different excitation frequencies is captured and its vibration envelope recreated in 3D. The approach is tested and used to identify changes in the model's vibration response. 
Record number: A2016-723
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12156
Online publication date: 18/09/2016
Online: https://doi.org/10.1111/phor.12156
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=82251
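The key idea in the abstract above is that a long exposure records the whole vibration as a blur streak, so the streak extent encodes the amplitude without needing a high-speed camera. A 1-D toy simulation of that principle, assuming sinusoidal motion of a single point target (the sample count and pixel grid are illustrative, not the paper's algorithm):

```python
import numpy as np

def exposure_profile(amplitude, center=32.0, n_pixels=64, samples=2000):
    """Long-exposure 1-D image of a point oscillating sinusoidally:
    time-average its position over one period, binned into pixels."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    pos = center + amplitude * np.sin(t)
    profile, _ = np.histogram(pos, bins=n_pixels, range=(0, n_pixels))
    return profile / samples

def estimate_amplitude(profile):
    """Half the extent of the lit blur streak approximates the amplitude."""
    lit = np.nonzero(profile > 0)[0]
    return (lit[-1] - lit[0]) / 2.0

profile = exposure_profile(amplitude=10.0)
estimated = estimate_amplitude(profile)
```

The streak is brightest near its ends (the point dwells at the turning points of sinusoidal motion), which is one cue a streak-analysis algorithm can exploit; the estimate here is accurate to roughly one pixel of quantization.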
in Photogrammetric record > vol 31 n° 155 (September - November 2016) . - pp 304 - 324
[article]
Copies (1)
Barcode | Call number | Support | Location | Section | Availability
106-2016031 | RAB | Journal | Centre de documentation | En réserve 3L | Available

Detecting and correcting motion blur from images shot with channel-dependent exposure time / Lâmân Lelégard in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, I-3 (2012)
Permalink

Correction du flou de mouvement sur les images prises de nuit par le STEREOPOLIS / Vincent Daval (2012)
Permalink

Réalisation d'un filtre adaptatif d'images couleur avec critère psychovisuel de qualité / Fabrice Clara (1980)
Permalink