Descriptor
IGN terms > natural sciences > physics > image processing > image restoration
image restoration. Synonym(s): image pre-processing.
Documents available in this category (624)


A robust edge detection algorithm based on feature-based image registration (FBIR) using improved canny with fuzzy logic (ICWFL) / Anchal Kumawat in The Visual Computer, vol 38 n° 11 (November 2022)
[article]
Title: A robust edge detection algorithm based on feature-based image registration (FBIR) using improved canny with fuzzy logic (ICWFL)
Document type: Article/Communication
Authors: Anchal Kumawat, Author; Sucheta Panda, Author
Publication year: 2022
Pages: pp 3681 - 3702
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image enhancement
[IGN terms] image database
[IGN terms] edge detection
[IGN terms] feature extraction
[IGN terms] Wiener filter
[IGN terms] fuzzy inference
[IGN terms] fuzzy logic
[IGN terms] robust method
[IGN terms] image restoration
[IGN terms] thresholding
[IGN terms] image superimposition
Abstract: (author) The problem of edge detection plays a crucial role in almost all research areas of image processing. If edges are detected accurately, the location of objects can be determined and parameters such as shape and area can be measured more precisely. To overcome this problem, a feature-based image registration (FBIR) method combined with an improved version of Canny with fuzzy logic is proposed for accurate detection of edges. The major contributions of the present work are summarized in three steps. In the first step, a restoration-based enhancement algorithm is proposed to obtain a clean image from a distorted, noisy image. In the second step, two versions of the input image are registered using a modified FBIR approach. In the third step, to overcome the drawbacks of the Canny edge detection algorithm, each step of the algorithm is modified. The output is then fed to a fuzzy inference system. The fuzzy rule-based technique is very efficient when applied to edge detection because the thickness of the edges can be controlled simply by changing the rules and output parameters. The images under consideration come from well-known image databases such as the Berkeley and USC-SIPI databases, and the proposed method is also suitable for other indoor and outdoor images. The robustness of the proposed method is analysed, compared and evaluated with seven image assessment quality (IAQ) parameters, and its performance is compared with several state-of-the-art edge detection methods in terms of these parameters.
Record number: A2022-839
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-021-02196-1
Online publication date: 14/07/2021
Online: https://doi.org/10.1007/s00371-021-02196-1
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102041
in The Visual Computer > vol 38 n° 11 (November 2022) . - pp 3681 - 3702
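Editor's note: the abstract's final stage feeds gradient responses to a fuzzy rule base whose rules and output parameters control edge thickness. The following minimal Python sketch illustrates that general idea only; it is not the authors' ICWFL implementation, and the membership functions, thresholds and the use of simple central-difference gradients are assumptions made for illustration.

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude from central differences (a stand-in for the Canny gradient stage)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def ramp(x, a, b):
    """Right-shoulder membership: 0 below a, 1 above b, linear in between."""
    return np.clip((x - a) / (b - a + 1e-9), 0.0, 1.0)

def fuzzy_edge_map(img, strong=(0.2, 0.6), weak=(0.1, 0.4)):
    """Two fuzzy rules on the normalised gradient: 'strong edge' -> 1, 'weak/no edge' -> 0,
    combined by centroid defuzzification, then thresholded to a crisp edge map."""
    g = gradient_magnitude(img)
    g = g / (g.max() + 1e-9)
    mu_strong = ramp(g, *strong)          # membership of rule "strong edge"
    mu_weak = 1.0 - ramp(g, *weak)        # membership of rule "weak or no edge"
    out = mu_strong / (mu_strong + mu_weak + 1e-9)
    return (out > 0.5).astype(np.uint8)

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[:, 32:] = 255.0    # synthetic step edge
    print(int(fuzzy_edge_map(img).sum()), "edge pixels")
```

Changing the ramp parameters (the rule antecedents) widens or narrows the band of gradient values accepted as edges, which is the sense in which the rules control edge thickness.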
Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope / V.S. Martins in Remote sensing of environment, vol 280 (October 2022)
[article]
Title: Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope
Document type: Article/Communication
Authors: V.S. Martins, Author; D.P. Roy, Author; H. Huang, Author; et al.
Publication year: 2022
Pages: n° 113203
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] Africa (political geography)
[IGN terms] deep learning
[IGN terms] thematic map
[IGN terms] automated mapping
[IGN terms] radiometric correction
[IGN terms] training data (machine learning)
[IGN terms] tropical forest
[IGN terms] Landsat-OLI image
[IGN terms] PlanetScope image
[IGN terms] fire
[IGN terms] classification accuracy
[IGN terms] regression
[IGN terms] savanna
Abstract: (author) High spatial resolution commercial satellite data provide new opportunities for terrestrial monitoring. The recent availability of near-daily 3 m observations provided by the PlanetScope constellation enables mapping of small and spatially fragmented burns that are not detected at coarser spatial resolution. This study demonstrates, for the first time, the potential for automated PlanetScope 3 m burned area mapping. The PlanetScope sensors have no onboard calibration or short-wave infrared bands and have variable overpass times, making them challenging to use for large-area, automated burned area mapping. To help overcome these issues, a U-Net deep learning algorithm was developed to classify burned areas from two-date PlanetScope 3 m image pairs acquired at the same location. The deep learning approach, unlike conventional burned area mapping algorithms, is applied to image spatial subsets rather than single pixels and so incorporates spatial as well as spectral information. Deep learning requires large amounts of training data. Consequently, transfer learning was undertaken using pre-existing Landsat-8 derived burned area reference data to train the U-Net, which was then refined with a smaller set of PlanetScope training data. Results across Africa considering 659 PlanetScope radiometrically normalized image pairs sensed one day apart in 2019 are presented. The U-Net was first trained with different numbers of randomly selected 256 × 256 30 m pixel patches extracted from 92 pre-existing Landsat-8 burned area reference data sets defined for 2014 and 2015. The U-Net trained with 300,000 Landsat patches provided about 13% 30 m burn omission and commission errors with respect to 65,000 independent 30 m evaluation patches. The U-Net was then refined by training on 5,000 256 × 256 3 m patches extracted from independently interpreted PlanetScope burned area reference data. Qualitatively, the refined U-Net was able to more precisely delineate 3 m burn boundaries, including the interiors of unburned areas, and better classify "faint" burned areas indicative of low combustion completeness and/or sparse burns. The refined U-Net 3 m classification accuracy was assessed with respect to 20 independently interpreted PlanetScope burned area reference data sets, composed of 339.4 million 3 m pixels, with low 12.29% commission and 12.09% omission errors. The dependency of the U-Net classification accuracy on the burned area proportion within 3 m pixel 256 × 256 patches was also examined, and patches […]
Record number: A2022-774
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.rse.2022.113203
Online publication date: 08/08/2022
Online: https://doi.org/10.1016/j.rse.2022.113203
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101802
in Remote sensing of environment > vol 280 (October 2022) . - n° 113203
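Editor's note: the abstract describes classifying two-date image pairs with a U-Net applied to 256 × 256 patches rather than single pixels. The short Python sketch below only illustrates how such a pair can be stacked and tiled into fixed-size training chips; it is not the authors' pipeline, and the array shapes, band count and patch size are assumptions.

```python
import numpy as np

def extract_pair_patches(pre, post, patch=256, stride=256):
    """Stack a pre-fire and post-fire image along the band axis and tile them
    into (patch x patch) chips suitable for a patch-based CNN. Inputs are (H, W, B) arrays."""
    assert pre.shape == post.shape, "two-date images must be co-registered"
    stacked = np.concatenate([pre, post], axis=-1)     # spectral + temporal information
    h, w, _ = stacked.shape
    chips = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            chips.append(stacked[i:i + patch, j:j + patch, :])
    if not chips:
        return np.empty((0, patch, patch, stacked.shape[-1]))
    return np.stack(chips)

if __name__ == "__main__":
    pre = np.random.rand(512, 512, 4)    # hypothetical 4-band scene, first date
    post = np.random.rand(512, 512, 4)   # same footprint, second date
    print(extract_pair_patches(pre, post).shape)   # (4, 256, 256, 8)
```

In a transfer-learning setting of the kind the abstract describes, the same tiling would be run once on the coarser reference imagery for pre-training and once on the finer imagery for refinement.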
PKS: A photogrammetric key-frame selection method for visual-inertial systems built on ORB-SLAM3 / Arash Azimi in ISPRS Journal of photogrammetry and remote sensing, vol 191 (September 2022)
[article]
Title: PKS: A photogrammetric key-frame selection method for visual-inertial systems built on ORB-SLAM3
Document type: Article/Communication
Authors: Arash Azimi, Author; Ali Hosseininaveh Ahmadabadian, Author; Fabio Remondino, Author
Publication year: 2022
Pages: pp 18 - 32
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Photogrammetry
[IGN terms] alignment
[IGN terms] simultaneous localization and mapping
[IGN terms] inertial measurement unit
[IGN terms] centre of gravity
[IGN terms] image deformation
[IGN terms] direct georeferencing
[IGN terms] heuristic method
[IGN terms] point cloud
[IGN terms] image thresholding
[IGN terms] structure-from-motion
[IGN terms] computer vision
Abstract: (author) Key-frame selection methods were developed in the past years to reduce the complexity of frame processing in visual odometry (VO) and visual simultaneous localization and mapping (VSLAM) algorithms. Key-frames improve an algorithm's performance by sparsifying frames while maintaining its accuracy and robustness. Unlike current selection methods that rely on many heuristic thresholds to decide which key-frame should be selected, this paper proposes a photogrammetric key-frame selection method built upon ORB-SLAM3. The proposed algorithm, named Photogrammetric Key-frame Selection (PKS), replaces static heuristic thresholds with photogrammetric principles, ensuring the algorithm's robustness and better point cloud quality. A key-frame is chosen based on adaptive thresholds and the Equilibrium Of Center Of Gravity (ECOG) criterion as well as Inertial Measurement Unit (IMU) observations. To evaluate the proposed PKS method, the European Robotics Challenge (EuRoC) dataset and an in-house dataset are used. Quantitative and qualitative evaluations are made by comparing trajectories, point cloud quality and completeness, and Absolute Trajectory Error (ATE) in mono-inertial and stereo-inertial modes. Moreover, for the generated dense point clouds, extensive evaluations, including plane-fitting error, model deformation, model alignment error, and model density and quality, are performed. The results show that the proposed algorithm improves ORB-SLAM3 positioning accuracy by 18% in stereo-inertial mode and 20% in mono-inertial mode without the use of heuristic thresholds, as well as producing a point cloud that is up to 50% more complete and accurate. The open-source code of the presented method is available at https://github.com/arashazimi0032/PKS.
Record number: A2022-664
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.07.003
Online publication date: 12/07/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.07.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101525
in ISPRS Journal of photogrammetry and remote sensing > vol 191 (September 2022) . - pp 18 - 32
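Editor's note: the abstract names an "Equilibrium Of Center Of Gravity" criterion among the key-frame tests. The toy Python sketch below illustrates one plausible reading of such a test, checking whether the centre of gravity of tracked features stays near the image centre; it is not the PKS algorithm (the paper's exact criterion, adaptive thresholds and IMU handling are not reproduced), and the tolerance and minimum feature count are assumptions.

```python
import numpy as np

def feature_equilibrium(points, image_size, tol=0.15, min_points=50):
    """points: (N, 2) pixel coordinates of tracked features; image_size: (width, height).
    Returns True when the feature centre of gravity lies within `tol` (a fraction of the
    image size) of the image centre and enough features remain."""
    if len(points) < min_points:
        return False
    cog = points.mean(axis=0)                              # centre of gravity of features
    centre = np.array(image_size, dtype=float) / 2.0
    offset = np.abs(cog - centre) / np.array(image_size, dtype=float)
    return bool(np.all(offset < tol))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    balanced = rng.uniform([0, 0], [640, 480], size=(200, 2))   # features spread over the frame
    skewed = rng.uniform([0, 0], [200, 480], size=(200, 2))     # features bunched to the left
    print(feature_equilibrium(balanced, (640, 480)))            # True: well balanced
    print(feature_equilibrium(skewed, (640, 480)))              # False: unbalanced distribution
```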
Fusion of GNSS and InSAR time series using the improved STRE model: applications to the San Francisco bay area and Southern California / Huineng Yan in Journal of geodesy, vol 96 n° 7 (July 2022)
[article]
Title: Fusion of GNSS and InSAR time series using the improved STRE model: applications to the San Francisco bay area and Southern California
Document type: Article/Communication
Authors: Huineng Yan, Author; Wujiao Dai, Author; Lei Xie, Author; et al.
Publication year: 2022
Pages: n° 47
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] California (United States)
[IGN terms] crustal deformation
[IGN terms] GNSS data
[IGN terms] geological fault
[IGN terms] spatiotemporal filtering
[IGN terms] data fusion
[IGN terms] radar fringe image
[IGN terms] synthetic aperture radar interferometry
[IGN terms] spatial modelling
[IGN terms] resampling
[IGN terms] time series
Abstract: (author) The spatio-temporal random effects (STRE) model is a classic dynamic filtering model that can be used to fuse GNSS and InSAR deformation data. The STRE model uses a certain time span of high spatial resolution Interferometric Synthetic Aperture Radar (InSAR) time series data to establish a spatial model and then obtains a deformation result with high spatio-temporal resolution through the state transition equation, recursively in the time domain. Combined with the Kalman filter, the STRE model is continuously updated and modified in the time domain to obtain a higher-accuracy result. However, it relies heavily on prior information and personal experience to establish an accurate spatial model. To the authors' knowledge, there are no publications which use the STRE model with multiple sets of different deformation monitoring data to verify its applicability and reliability. Here, we propose an improved STRE model that automatically establishes an accurate spatial model, then apply it to the fusion of GNSS and InSAR deformation data in the San Francisco Bay Area covering approximately 6000 km2 and in Southern California covering approximately 100,000 km2. Our experimental results show that the improved STRE model can achieve good fusion effects in both study areas. For internal inspection, the average error RMS values in the two regions are 0.13 mm and 0.06 mm for InSAR and 2.4 and 2.8 mm for GNSS, respectively; for Jackknife cross-validation, the average error RMS values are 6.0 and 1.3 mm for InSAR and 4.3 and 4.8 mm for GNSS in the two regions, respectively. We find that the deformation rate calculated from the fusion results is highly consistent with existing studies, the significant difference in the deformation rate on both sides of the major faults in the two regions can be clearly seen, and the area with an abnormal deformation rate corresponds well to the actual situation. These results indicate that the improved STRE model can reduce the reliance on prior information and personal experience, realize the effective fusion of GNSS and InSAR deformation data in different regions, and also has the advantages of high accuracy and strong applicability.
Record number: A2022-553
Authors' affiliation: non IGN
Theme: IMAGERY/POSITIONING
Nature: Article
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1007/s00190-022-01636-7
Online publication date: 05/07/2022
Online: https://doi.org/10.1007/s00190-022-01636-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101165
in Journal of geodesy > vol 96 n° 7 (July 2022) . - n° 47
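Editor's note: the abstract combines the STRE model with a Kalman filter to update deformation states from observations of different precision. The Python sketch below shows only a generic one-dimensional Kalman measurement update applied to an InSAR-like and a GNSS-like displacement; it is not the improved STRE model, and every numeric value (prior, observations, variances) is an illustrative assumption.

```python
import numpy as np  # kept for consistency with the other sketches; plain floats suffice

def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: state x with variance p, observation z with
    variance r. Returns the updated state and its reduced variance."""
    k = p / (p + r)                      # Kalman gain: weight of the new observation
    return x + k * (z - x), (1.0 - k) * p

if __name__ == "__main__":
    x, p = 0.0, 25.0                     # prior displacement [mm] and variance [mm^2]
    insar_obs, insar_var = 6.2, 4.0      # hypothetical InSAR-derived displacement
    gnss_obs, gnss_var = 5.1, 9.0        # hypothetical GNSS-derived displacement
    x, p = kalman_update(x, p, insar_obs, insar_var)
    x, p = kalman_update(x, p, gnss_obs, gnss_var)
    print(f"fused displacement: {x:.2f} mm, variance: {p:.2f} mm^2")
```

The sequential updates show the basic mechanism: the more precise observation receives the larger gain, and the posterior variance shrinks after each measurement.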
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title: 3D browsing of wide-angle fisheye images under view-dependent perspective correction
Document type: Article/Communication
Authors: Mingyi Huang, Author; Jun Wu, Author; Zhiyong Peng, Author; et al.
Publication year: 2022
Pages: pp 185 - 207
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] image correction
[IGN terms] image distortion
[IGN terms] instrument calibration
[IGN terms] hemispherical image
[IGN terms] very wide-angle lens
[IGN terms] spherical panorama
[IGN terms] perspective
[IGN terms] graphics processor
[IGN terms] orthogonal projection
[IGN terms] perspective projection
Abstract: (author) This paper presents a novel technique for 3D browsing of wide-angle fisheye images using view-dependent perspective correction (VDPC). First, the fisheye imaging model with interior orientation parameters (IOPs) is established. Thereafter, a VDPC model for wide-angle fisheye images is proposed that adaptively selects correction planes for different areas of the image format. Finally, the wide-angle fisheye image is re-projected to obtain the visual effect of browsing in hemispherical space, using the VDPC model and the IOPs of the fisheye camera calibrated using the ideal projection ellipse constraint. The proposed technique is tested on several downloaded internet images with unknown IOPs. Results show that the proposed VDPC model achieves a more uniform perspective correction of fisheye images in different areas and preserves detailed information with greater flexibility compared with the traditional perspective projection conversion (PPC) technique. The proposed algorithm generates a corrected image of 512 × 512 pixels resolution at a speed of 58 fps when run on a pure central processing unit (CPU) processor. With an ordinary graphics processing unit (GPU) processor, a corrected image of 1024 × 1024 pixels resolution can be generated at 60 fps. Therefore, smooth 3D visualisation of a fisheye image can be realised on a computer using the proposed algorithm, which may benefit applications such as panorama surveillance, robot navigation, etc.
Record number: A2022-518
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12410
Online publication date: 10/05/2022
Online: https://doi.org/10.1111/phor.12410
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101068
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 185 - 207
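Editor's note: the core operation the abstract describes is re-projecting a fisheye image onto a perspective (correction) plane for a chosen viewing direction. The Python sketch below shows that basic operation under an ideal equidistant fisheye model (r = f·θ) with nearest-neighbour sampling; it is not the paper's VDPC model with calibrated IOPs or adaptive plane selection, and the focal lengths, field of view and image sizes are assumptions.

```python
import numpy as np

def fisheye_to_perspective(fish, f_fish, out_size=256, fov_deg=60.0, yaw_deg=0.0):
    """fish: (H, W) equidistant fisheye image centred on the optical axis.
    Returns an (out_size, out_size) perspective view rotated by yaw_deg about the vertical axis."""
    h, w = fish.shape
    cx, cy = w / 2.0, h / 2.0
    f_persp = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # pinhole focal length
    yaw = np.radians(yaw_deg)
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2.0,
                       np.arange(out_size) - out_size / 2.0)
    # Ray direction of each output pixel on the correction plane, rotated to the view direction.
    x, y, z = u, v, np.full_like(u, f_persp)
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    theta = np.arccos(zr / np.sqrt(xr**2 + y**2 + zr**2))            # angle to the optical axis
    phi = np.arctan2(y, xr)                                          # azimuth around the axis
    r = f_fish * theta                                               # equidistant fisheye mapping
    src_x = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return fish[src_y, src_x]                                        # nearest-neighbour sampling

if __name__ == "__main__":
    fake_fisheye = np.random.rand(1024, 1024)
    view = fisheye_to_perspective(fake_fisheye, f_fish=320.0, yaw_deg=30.0)
    print(view.shape)   # (256, 256)
```

Varying yaw_deg (and similarly a pitch angle) corresponds to browsing different viewing directions, each rendered on its own correction plane.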
Direct photogrammetry with multispectral imagery for UAV-based snow depth estimation / Kathrin Maier in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy / Norbert Haala in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
Estimation of uneven-aged forest stand parameters, crown closure and land use/cover using the Landsat 8 OLI satellite image / Sinan Kaptan in Geocarto international, vol 37 n° 5 ([01/03/2022])
Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
Improving local adaptive filtering method employed in radiometric correction of analogue airborne campaigns / Lâmân Lelégard (2022)
Preparation of the VENµS satellite data over Israel for the input into the GRASP data treatment algorithm / Maeve Blarel (2022)
Studying informativeness of satellite image texture for sea ice state retrieval using deep learning methods / Clément Fougerouse (2022)
Semi-automatic reconstruction of object lines using a smartphone's dual camera / Mohammed Aldelgawy in Photogrammetric record, Vol 36 n° 176 (December 2021)
Accuracy assessment of RTK-GNSS equipped UAV conducted as-built surveys for construction site modelling / Sander Varbla in Survey review, Vol 53 n° 381 (November 2021)
Multi-objective CNN-based algorithm for SAR despeckling / Sergio Vitale in IEEE Transactions on geoscience and remote sensing, vol 59 n° 11 (November 2021)