Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > restauration d'image > correction d'image
correction d'image. Synonym(s): rectification d'image
Documents available in this category (378)



Consistency assessment of multi-date PlanetScope imagery for seagrass percent cover mapping in different seagrass meadows / Pramaditya Wicaksono in Geocarto international, vol 37 n° 27 ([20/12/2022])
[article]
Title: Consistency assessment of multi-date PlanetScope imagery for seagrass percent cover mapping in different seagrass meadows
Document type: Article/Communication
Authors: Pramaditya Wicaksono; Amanda Maishella; Wahyu Lazuardi; et al.
Year of publication: 2022
Pages: pp 15161 - 15186
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] carte thématique
[Termes IGN] classification par arbre de décision
[Termes IGN] classification pixellaire
[Termes IGN] correction d'image
[Termes IGN] filtrage du bruit
[Termes IGN] herbier marin
[Termes IGN] image PlanetScope
[Termes IGN] Indonésie
Abstract: (author) Seagrass percent cover is a crucial and influential component of the biophysical characteristics of seagrass beds and is a key parameter for monitoring seagrass conditions. Therefore, the availability of seagrass percent cover maps greatly assists in sustainable coastal ecosystem management. This research aimed to assess the consistency of PlanetScope imagery for seagrass percent cover mapping using two study areas, namely Parang Island and Labuan Bajo, Indonesia. Assessing the consistency of the PlanetScope imagery performance in seagrass percent cover mapping helps understand the effects of variations in the image quality on its performance in monitoring changes in seagrass cover. Percent cover maps were derived using object-based image analysis (image segmentation and random forest) and pixel-based random forest algorithm. Accuracy assessment and consistency analysis were conducted on the basis of the following three approaches: overall accuracy consistency, agreement percentage and consistent pixel locations. Results show that PlanetScope images can fairly consistently map seagrass percent cover for a specific area across different dates. However, these images produced different levels of accuracy when used for mapping in seagrass meadows with various characteristics and benthic cover complexities. The mapping accuracy (OA–overall accuracy) and consistency (AP–agreement percentage) in patchy seagrass meadows (Parang Island, mean OA 18.4%–38.6%, AP 44.1%–70.3%) are different from those in continuous seagrass meadows (Labuan Bajo, OA 43.0%–56.2%, and AP 41.8%–55.8%). Moreover, PlanetScope images are consistent when used for mapping seagrasses with low and high percent covers but strive to obtain good consistency for medium percent cover due to the combination of seagrass and non-seagrass in a pixel. Furthermore, images with relatively similar image acquisition conditions (i.e., winds, aerosol optical depth, signal-to-noise ratio, and sunglint intensity) produce better consistency. The OA is related to the image acquisition conditions, whilst the AP is related to variation in these conditions. Nevertheless, PlanetScope is still the best high spatial resolution image that provides daily acquisition and is highly beneficial for various applications in tropical areas with persistent cloud coverage.
Record number: A2022-932
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2022.2096122
Online publication date: 06/07/2022
Online: https://doi.org/10.1080/10106049.2022.2096122
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102668
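The consistency analysis summarised above compares percent-cover class maps derived from images acquired on different dates. As a minimal sketch of one of the reported metrics, the snippet below computes an agreement percentage between two co-registered class maps; the function name, the NumPy representation and the nodata handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def agreement_percentage(map_a: np.ndarray, map_b: np.ndarray, nodata: int = -1) -> float:
    """Percentage of valid pixels assigned the same class in two co-registered class maps."""
    if map_a.shape != map_b.shape:
        raise ValueError("class maps must be co-registered and share the same shape")
    valid = (map_a != nodata) & (map_b != nodata)
    if not valid.any():
        return float("nan")
    return 100.0 * np.count_nonzero(map_a[valid] == map_b[valid]) / np.count_nonzero(valid)

# Example: two dates of a 4-class percent-cover map (classes 0-3), -1 = masked land/cloud
date_1 = np.array([[0, 1, 2], [3, 3, -1]])
date_2 = np.array([[0, 1, 1], [3, 2, -1]])
print(agreement_percentage(date_1, date_2))  # 60.0
```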
in Geocarto international > vol 37 n° 27 [20/12/2022] . - pp 15161 - 15186 [article]

Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope / V.S. Martins in Remote sensing of environment, vol 280 (October 2022)
[article]
Title: Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope
Document type: Article/Communication
Authors: V.S. Martins; D.P. Roy; H. Huang; et al.
Year of publication: 2022
Pages: n° 113203
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] Afrique (géographie politique)
[Termes IGN] apprentissage profond
[Termes IGN] carte thématique
[Termes IGN] cartographie automatique
[Termes IGN] correction radiométrique
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] forêt tropicale
[Termes IGN] image Landsat-OLI
[Termes IGN] image PlanetScope
[Termes IGN] incendie
[Termes IGN] précision de la classification
[Termes IGN] régression
[Termes IGN] savane
Abstract: (author) High spatial resolution commercial satellite data provide new opportunities for terrestrial monitoring. The recent availability of near-daily 3 m observations provided by the PlanetScope constellation enables mapping of small and spatially fragmented burns that are not detected at coarser spatial resolution. This study demonstrates, for the first time, the potential for automated PlanetScope 3 m burned area mapping. The PlanetScope sensors have no onboard calibration or short-wave infrared bands, and have variable overpass times, making them challenging to use for large area, automated, burned area mapping. To help overcome these issues, a U-Net deep learning algorithm was developed to classify burned areas from two-date Planetscope 3 m image pairs acquired at the same location. The deep learning approach, unlike conventional burned area mapping algorithms, is applied to image spatial subsets and not to single pixels and so incorporates spatial as well as spectral information. Deep learning requires large amounts of training data. Consequently, transfer learning was undertaken using pre-existing Landsat-8 derived burned area reference data to train the U-Net that was then refined with a smaller set of PlanetScope training data. Results across Africa considering 659 PlanetScope radiometrically normalized image pairs sensed one day apart in 2019 are presented. The U-Net was first trained with different numbers of randomly selected 256 × 256 30 m pixel patches extracted from 92 pre-existing Landsat-8 burned area reference data sets defined for 2014 and 2015. The U-Net trained with 300,000 Landsat patches provided about 13% 30 m burn omission and commission errors with respect to 65,000 independent 30 m evaluation patches. The U-Net was then refined by training on 5,000 256 × 256 3 m patches extracted from independently interpreted PlanetScope burned area reference data. Qualitatively, the refined U-Net was able to more precisely delineate 3 m burn boundaries, including the interiors of unburned areas, and better classify “faint” burned areas indicative of low combustion completeness and/or sparse burns. The refined U-Net 3 m classification accuracy was assessed with respect to 20 independently interpreted PlanetScope burned area reference data sets, composed of 339.4 million 3 m pixels, with low 12.29% commission and 12.09% omission errors. The dependency of the U-Net classification accuracy on the burned area proportion within 3 m pixel 256 × 256 patches was also examined, and patches […]
Record number: A2022-774
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1016/j.rse.2022.113203
Online publication date: 08/08/2022
Online: https://doi.org/10.1016/j.rse.2022.113203
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101802
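The commission and omission errors quoted for the burned class follow from a pixel-level comparison of the classified map with reference data. The sketch below illustrates those two error rates with NumPy boolean masks; the function and the toy masks are hypothetical and do not reproduce the authors' evaluation code.

```python
import numpy as np

def burn_errors(pred_burn: np.ndarray, ref_burn: np.ndarray) -> tuple[float, float]:
    """Commission and omission error (%) for the burned class, from boolean masks."""
    pred_burn = pred_burn.astype(bool)
    ref_burn = ref_burn.astype(bool)
    tp = np.count_nonzero(pred_burn & ref_burn)    # burned in both map and reference
    fp = np.count_nonzero(pred_burn & ~ref_burn)   # mapped burned, reference unburned
    fn = np.count_nonzero(~pred_burn & ref_burn)   # missed burns
    commission = 100.0 * fp / (tp + fp) if (tp + fp) else float("nan")
    omission = 100.0 * fn / (tp + fn) if (tp + fn) else float("nan")
    return commission, omission

# Example with a tiny 3 x 3 scene
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
ref  = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0]])
print(burn_errors(pred, ref))  # (33.3..., 33.3...)
```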
in Remote sensing of environment > vol 280 (October 2022) . - n° 113203 [article]

3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title: 3D browsing of wide-angle fisheye images under view-dependent perspective correction
Document type: Article/Communication
Authors: Mingyi Huang; Jun Wu; Zhiyong Peng; et al.
Year of publication: 2022
Pages: pp 185 - 207
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] correction d'image
[Termes IGN] distorsion d'image
[Termes IGN] étalonnage d'instrument
[Termes IGN] image hémisphérique
[Termes IGN] objectif très grand angulaire
[Termes IGN] panorama sphérique
[Termes IGN] perspective
[Termes IGN] processeur graphique
[Termes IGN] projection orthogonale
[Termes IGN] projection perspective
Abstract: (author) This paper presents a novel technique for 3D browsing of wide-angle fisheye images using view-dependent perspective correction (VDPC). First, the fisheye imaging model with interior orientation parameters (IOPs) is established. Thereafter, a VDPC model for wide-angle fisheye images is proposed that adaptively selects correction planes for different areas of the image format. Finally, the wide-angle fisheye image is re-projected to obtain the visual effect of browsing in hemispherical space, using the VDPC model and IOPs of the fisheye camera calibrated using the ideal projection ellipse constraint. The proposed technique is tested on several downloaded internet images with unknown IOPs. Results show that the proposed VDPC model achieves a more uniform perspective correction of fisheye images in different areas, and preserves the detailed information with greater flexibility compared with the traditional perspective projection conversion (PPC) technique. The proposed algorithm generates a corrected image of 512 × 512 pixels resolution at a speed of 58 fps when run on a pure central processing unit (CPU) processor. With an ordinary graphics processing unit (GPU) processor, a corrected image of 1024 × 1024 pixels resolution can be generated at 60 fps. Therefore, smooth 3D visualisation of a fisheye image can be realised on a computer using the proposed algorithm, which may benefit applications such as panorama surveillance, robot navigation, etc.
Record number: A2022-518
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1111/phor.12410
Online publication date: 10/05/2022
Online: https://doi.org/10.1111/phor.12410
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101068
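The correction described in this record re-projects fisheye pixels onto a view-dependent perspective plane using the camera's interior orientation parameters. The paper's VDPC model and ellipse-based calibration are not reproduced here; the sketch below only illustrates the underlying idea under an assumed equidistant fisheye model (r = f·θ), mapping a sample of a virtual perspective plane back to fisheye image coordinates.

```python
import numpy as np

def perspective_to_fisheye(u: float, v: float, f_persp: float,
                           f_fish: float, cx: float, cy: float) -> tuple[float, float]:
    """Map a pixel (u, v) of a virtual perspective plane (principal point at 0,0,
    focal length f_persp) to coordinates in an equidistant fisheye image whose
    optical axis points along the plane normal."""
    ray = np.array([u, v, f_persp])                   # viewing ray through the plane sample
    theta = np.arccos(ray[2] / np.linalg.norm(ray))   # angle from the optical axis
    phi = np.arctan2(ray[1], ray[0])                  # azimuth in the image plane
    r = f_fish * theta                                # equidistant model: r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# Example: sample the fisheye image for the centre and a corner of the virtual view
print(perspective_to_fisheye(0.0, 0.0, 400.0, 300.0, 512.0, 512.0))      # image centre
print(perspective_to_fisheye(256.0, 256.0, 400.0, 300.0, 512.0, 512.0))  # off-axis sample
```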
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 185 - 207 [article]

Estimation of uneven-aged forest stand parameters, crown closure and land use/cover using the Landsat 8 OLI satellite image / Sinan Kaptan in Geocarto international, vol 37 n° 5 ([01/03/2022])
[article]
Title: Estimation of uneven-aged forest stand parameters, crown closure and land use/cover using the Landsat 8 OLI satellite image
Document type: Article/Communication
Authors: Sinan Kaptan; Hasan Aksoy
Year of publication: 2022
Pages: pp 1408 - 1425
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification dirigée
[Termes IGN] correction géométrique
[Termes IGN] forêt inéquienne
[Termes IGN] houppier
[Termes IGN] image Landsat-OLI
[Termes IGN] occupation du sol
[Termes IGN] peuplement forestier
[Termes IGN] Turquie
[Termes IGN] utilisation du sol
Abstract: (author) This study used the Landsat 8 OLI satellite image and the supervised classification method to estimate uneven-aged forest stand parameters and land use/cover. The spatial success of classification was also investigated. The overall success rates and Kappa values of the classification were, respectively, 74.7% and 0.75 for actual structural type, 84.6% and 0.80 for crown closure, and 88.35% and 0.81 for land use class, whereas the spatial success of classification on the forest cover type map was 36.91% for actual structural type, 64.74% for crown closure, and 41.78% for land use/cover class. The results revealed that the Landsat 8 OLI image can be used to identify stand parameters and land use/cover class. However, because the spatial success rates were below 50% for the actual structural type and land use/cover class of the stand types, it is not suitable for use in spatial classification determination for these classes.
Record number: A2022-277
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1765888
Online publication date: 20/05/2020
Online: https://doi.org/10.1080/10106049.2020.1765888
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100795
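Overall accuracy and Kappa, as reported in this abstract, are both derived from the same confusion matrix. The sketch below computes the two statistics from predicted and reference label arrays using their standard definitions; it is an illustrative helper, not the authors' code.

```python
import numpy as np

def overall_accuracy_and_kappa(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    """Overall accuracy and Cohen's kappa from two 1-D label arrays."""
    classes = np.union1d(pred, ref)
    n = pred.size
    # Confusion matrix: rows = reference class, columns = predicted class
    cm = np.zeros((classes.size, classes.size), dtype=float)
    for i, c_ref in enumerate(classes):
        for j, c_pred in enumerate(classes):
            cm[i, j] = np.count_nonzero((ref == c_ref) & (pred == c_pred))
    po = np.trace(cm) / n                                   # observed agreement
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2     # chance agreement
    kappa = (po - pe) / (1.0 - pe) if pe < 1.0 else float("nan")
    return po, kappa

ref  = np.array([0, 0, 1, 1, 2, 2, 2, 1])
pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])
print(overall_accuracy_and_kappa(pred, ref))  # (0.75, ~0.62)
```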
in Geocarto international > vol 37 n° 5 [01/03/2022] . - pp 1408 - 1425 [article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
059-2022051 | SL | Revue | Centre de documentation | Revues en salle | Available

Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
[article]
Title: Traffic sign three-dimensional reconstruction based on point clouds and panoramic images
Document type: Article/Communication
Authors: Minye Wang; Rufei Liu; Jiben Yang; et al.
Year of publication: 2022
Pages: pp 87 - 110
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] correction d'image
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image panoramique
[Termes IGN] lidar mobile
[Termes IGN] reconstruction 3D
[Termes IGN] semis de points
[Termes IGN] signalisation routière
Abstract: (author) Traffic signs are a very important source of information for drivers and pilotless automobiles. With the advance of Mobile LiDAR System (MLS), massive point clouds have been applied in three-dimensional digital city modelling. However, traffic signs in MLS point clouds are low density, colourless and incomplete. This paper presents a new method for the reconstruction of vertical rectangle traffic sign point clouds based on panoramic images. In this method, traffic sign point clouds are extracted based on arc feature and spatial semantic features analysis. Traffic signs in images are detected by colour and shape features and a convolutional neural network. Traffic sign point cloud and images are registered based on outline features. Finally, traffic sign points match traffic sign pixels to reconstruct the traffic sign point cloud. Experimental results have demonstrated that this proposed method can effectively obtain colourful and complete traffic sign point clouds with high resolution.
Record number: A2022-254
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1111/phor.12398
Online publication date: 05/03/2022
Online: https://doi.org/10.1111/phor.12398
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100217
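The final step summarised here matches traffic sign points to panoramic image pixels so that the extracted points can be coloured. Assuming an equirectangular panorama and points already expressed in the camera frame (a simplification; the outline-based registration described by the authors is not reproduced), the sketch below shows the standard spherical projection used for such point-to-pixel matching.

```python
import numpy as np

def project_to_panorama(points_cam: np.ndarray, width: int, height: int) -> np.ndarray:
    """Project 3-D points (N x 3, camera frame, x forward, y left, z up)
    into pixel coordinates of an equirectangular panorama of size width x height."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    lon = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    lat = np.arcsin(z / np.linalg.norm(points_cam, axis=1))  # elevation in [-pi/2, pi/2]
    u = (0.5 - lon / (2.0 * np.pi)) * width                  # column index
    v = (0.5 - lat / np.pi) * height                         # row index
    return np.column_stack([u, v])

# Example: one point straight ahead and one 90 degrees to the left, 4096 x 2048 panorama
pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
print(project_to_panorama(pts, 4096, 2048))
# straight ahead -> centre column; 90 deg left -> a quarter panorama away
```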
in Photogrammetric record > vol 37 n° 177 (March 2022) . - pp 87 - 110 [article]

Improving local adaptive filtering method employed in radiometric correction of analogue airborne campaigns / Lâmân Lelégard (2022)
Preparation of the VENµS satellite data over Israel for the input into the GRASP data treatment algorithm / Maeve Blarel (2022)
Semi-automatic reconstruction of object lines using a smartphone’s dual camera / Mohammed Aldelgawy in Photogrammetric record, Vol 36 n° 176 (December 2021)
Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features / Bai Zhu in ISPRS Journal of photogrammetry and remote sensing, Vol 181 (November 2021)
The polar epipolar rectification / François Darmon in IPOL Journal, Image Processing On Line, vol 11 (2021)
Spectral reflectance estimation of UAS multispectral imagery using satellite cross-calibration method / Saket Gowravaram in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 10 (October 2021)
Hyperspectral image fusion and multitemporal image fusion by joint sparsity / Han Pan in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 9 (September 2021)
Digital surface model refinement based on projected images / Jiali Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 3 (March 2021)
Apport des méthodes : imagerie drone, LiDAR et imagerie hyperspectrale pour l’étude du littoral vendéen / Mathis Baudis (2021)