Descriptor
Documents available in this category (67)
A minimal solution for image-based sphere estimation / Tekla Tóth in International journal of computer vision, vol 131 n° 6 (June 2023)
[article]
Title: A minimal solution for image-based sphere estimation
Document type: Article/Communication
Authors: Tekla Tóth, Author; Levente Hajder, Author
Publication year: 2023
Pages: pp 1428 - 1447
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] Levenberg-Marquardt algorithm
[IGN terms] cone
[IGN terms] ellipse
[IGN terms] Matlab
[IGN terms] image reconstruction
[IGN terms] geometric representation
[IGN terms] sphere
[IGN terms] parametric sphere
Abstract: (author) We propose a novel minimal solver for sphere fitting via its 2D central projection, i.e., a special ellipse. The input of the presented algorithm consists of contour points detected in a camera image. General ellipse fitting problems require five contour points. However, taking advantage of the isotropic spherical target, three points are enough to define the tangent cone parameters of the sphere. This yields the sought ellipse parameters. Similarly, the sphere center can be estimated from the cone if the radius is known. These proposed geometric methods are rapid, numerically stable, and easy to implement. Experimental results on synthetic, photorealistic, and real images showcase the superiority of the proposed solutions over state-of-the-art methods. A real-world LiDAR-camera calibration application justifies the utility of the sphere-based approach, resulting in an error below a few centimeters.
Record number: A2023-189
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-023-01766-1
Online publication date: 02/03/2023
Online: https://doi.org/10.1007/s11263-023-01766-1
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103061
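As context for the five-point baseline the abstract contrasts against, a general conic through five contour points can be recovered by solving a small linear system. This is an illustrative sketch only, not the paper's three-point minimal solver; the function names are hypothetical, and fixing the constant term f = 1 assumes the conic does not pass through the origin.

```python
def fit_conic_five_points(points):
    """Fit conic coefficients (a, b, c, d, e), with f fixed to 1, in
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 from five (x, y) points."""
    # Each point gives one row of the 5x5 system M @ [a, b, c, d, e]^T = -1.
    rows = [[x * x, x * y, y * y, x, y] for x, y in points]
    rhs = [-1.0] * 5
    return solve_linear(rows, rhs)


def solve_linear(a, b):
    """Plain Gaussian elimination with partial pivoting (fine for a 5x5 system)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x
```

For five points on the circle of radius 2 centered at (1, 1), whose conic is x² + y² - 2x - 2y - 2 = 0, normalizing to f = 1 gives (a, b, c, d, e) = (-0.5, 0, -0.5, 1, 1); the solver recovers exactly these coefficients.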
in International journal of computer vision > vol 131 n° 6 (June 2023) . - pp 1428 - 1447
[article]
A hierarchical deformable deep neural network and an aerial image benchmark dataset for surface multiview stereo reconstruction / Jiayi Li in IEEE Transactions on geoscience and remote sensing, vol 61 n° 1 (January 2023)
[article]
Title: A hierarchical deformable deep neural network and an aerial image benchmark dataset for surface multiview stereo reconstruction
Document type: Article/Communication
Authors: Jiayi Li, Author; Xin Huang, Author; Yujin Feng, Author; et al.
Publication year: 2023
Pages: n° 5600812
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] hierarchical approach
[IGN terms] depth map
[IGN terms] object deformation
[IGN terms] kinetic depth effect
[IGN terms] feature extraction
[IGN terms] aerial image
[IGN terms] dataset
[IGN terms] digital surface model
[IGN terms] stereoscopic model
[IGN terms] image reconstruction
[IGN terms] deep neural network
[IGN terms] semantic segmentation
Abstract: (author) Multiview stereo (MVS) aerial image depth estimation is a research frontier in the remote sensing field. Recent deep learning-based advances in close-range object reconstruction have suggested the great potential of this approach. Meanwhile, the deformation problem and the scale variation issue are also worthy of attention. These characteristics of aerial images limit the applicability of current methods for aerial image depth estimation. Moreover, there are few available benchmark datasets for aerial image depth estimation. In this regard, this article describes a new benchmark dataset called the LuoJia-MVS dataset ( https://irsip.whu.edu.cn/resources/resources_en_v2.php ), as well as a new deep neural network known as the hierarchical deformable cascade MVS network (HDC-MVSNet). The LuoJia-MVS dataset contains 7972 five-view images with a spatial resolution of 10 cm, pixel-wise depths, and precise camera parameters, and was generated from an accurate digital surface model (DSM) built from thousands of stereo aerial images. In the HDC-MVSNet network, a new full-scale feature pyramid extraction module, a hierarchical set of 3-D convolutional blocks, and "true 3-D" deformable 3-D convolutional layers are specifically designed by considering the aforementioned characteristics of aerial images. Overall and ablation experiments on the WHU and LuoJia-MVS datasets validated the superiority of HDC-MVSNet over the current state-of-the-art MVS depth estimation methods and confirmed that the newly built dataset can provide an effective benchmark.
Record number: A2023-117
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2023.3234694
Online: https://doi.org/10.1109/TGRS.2023.3234694
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102488
in IEEE Transactions on geoscience and remote sensing > vol 61 n° 1 (January 2023) . - n° 5600812
[article]
Assessment of camera focal length influence on canopy reconstruction quality / Martin Denter in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 6 (December 2022)
[article]
Title: Assessment of camera focal length influence on canopy reconstruction quality
Document type: Article/Communication
Authors: Martin Denter, Author; Julian Frey, Author; Teja Kattenborn, Author; et al.
Publication year: 2022
Pages: n° 100025
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetry
[IGN terms] Abies alba
[IGN terms] Acer pseudoplatanus
[IGN terms] Germany
[IGN terms] canopy
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] Fagus sylvatica
[IGN terms] UAV-captured image
[IGN terms] Larix decidua
[IGN terms] focal length
[IGN terms] digital canopy surface model
[IGN terms] forest plot
[IGN terms] Picea abies
[IGN terms] image reconstruction
[IGN terms] point cloud
[IGN terms] structure-from-motion
Abstract: (author) Unoccupied aerial vehicles (UAV) with RGB cameras are affordable and versatile devices for the generation of a series of remote sensing products that can be used for forest inventory tasks, such as creating high-resolution orthomosaics and canopy height models. The latter may serve purposes including tree species identification, forest damage assessments, and canopy height or timber stock assessments. Besides flight and image acquisition parameters such as image overlap, flight height, and weather conditions, the focal length, which determines the opening angle of the camera lens, is a parameter that influences reconstruction quality. Despite its importance, the effect of focal length on the quality of 3D reconstructions of forests has received little attention in the literature. Shorter focal lengths result in more accurate distance estimates in the nadir direction, since small angular errors lead to large positional errors in narrow opening angles. In this study, 3D reconstructions of four UAV acquisitions with different focal lengths (21, 35, 50, and 85 mm) on a 1 ha mature mixed forest plot were compared to reference point clouds derived from high-quality terrestrial laser scans. Shorter focal lengths (21 and 35 mm) led to a higher agreement with the TLS scans and thus better reconstruction quality, while at 50 mm quality losses were observed, and at 85 mm the quality was considerably worse. F1-scores calculated from a voxel representation of the point clouds amounted to 0.254 with 35 mm and 0.201 with 85 mm. The precision with 21 mm focal length was 0.466, and 0.302 with 85 mm. We thus recommend a focal length no longer than 35 mm during UAV Structure from Motion (SfM) data acquisition for forest management practices.
Record number: A2022-870
Authors' affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.1016/j.ophoto.2022.100025
Online publication date: 09/11/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100025
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102164
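The abstract above reports precision and F1-scores computed on a voxel representation of the photogrammetric and TLS point clouds. A generic voxel-occupancy comparison of this kind can be sketched as follows (an assumption-laden illustration, not the authors' exact pipeline; the voxel size and function names are hypothetical):

```python
def voxelize(points, voxel_size):
    """Map 3D points to the set of occupied integer voxel indices."""
    return {(int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            for x, y, z in points}


def voxel_f1(reconstructed, reference, voxel_size=0.1):
    """Precision, recall, and F1 of occupied voxels against a reference cloud
    (e.g., a TLS scan). Voxel size is an illustrative default, in the units
    of the input coordinates."""
    pred = voxelize(reconstructed, voxel_size)
    ref = voxelize(reference, voxel_size)
    tp = len(pred & ref)  # voxels occupied in both clouds
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

With this formulation, a reconstruction that fills every reference voxel plus many spurious ones scores high recall but low precision, which matches the way the abstract reports precision separately from F1.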
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 6 (December 2022) . - n° 100025
[article]
Single-image super-resolution for remote sensing images using a deep generative adversarial network with local and global attention mechanisms / Yadong Li in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
[article]
Title: Single-image super-resolution for remote sensing images using a deep generative adversarial network with local and global attention mechanisms
Document type: Article/Communication
Authors: Yadong Li, Author; Sébastien Mavromatis, Author; Feng Zhang, Author; et al.
Publication year: 2022
Pages: n° 3000224
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] single image
[IGN terms] geometric resolution
[IGN terms] spectral resolution
[IGN terms] image reconstruction
[IGN terms] generative adversarial network
Abstract: (author) Super-resolution (SR) technology is an important way to improve spatial resolution under the condition of sensor hardware limitations. With the development of deep learning (DL), some DL-based SR models have achieved state-of-the-art performance, especially the convolutional neural network (CNN). However, considering that remote sensing images usually contain a variety of ground scenes and objects with different scales, orientations, and spectral characteristics, previous works usually treat important and unnecessary features equally or only apply different weights in the local receptive field, which ignores long-range dependencies; it is still a challenging task to exploit features on different levels and reconstruct images with realistic details. To address these problems, an attention-based generative adversarial network (SRAGAN) is proposed in this article, which applies both local and global attention mechanisms. Specifically, we apply local attention in the SR model to focus on structural components of the earth's surface that require more attention, and global attention is used to capture long-range interdependencies in the channel and spatial dimensions to further refine details. To optimize the adversarial learning process, we also use local and global attention in the discriminator model to enhance its discriminative ability, and apply the gradient penalty in the form of hinge loss together with a loss function that combines L1 pixel loss, L1 perceptual loss, and relativistic adversarial loss to promote rich details. The experiments show that SRAGAN can achieve performance improvements and reconstruct better details compared with current state-of-the-art SR methods. A series of ablation investigations and model analyses validate the efficiency and effectiveness of our method.
Record number: A2022-767
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2021.3093043
Online publication date: 12/07/2021
Online: https://doi.org/10.1109/TGRS.2021.3093043
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101789
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 10 (October 2022) . - n° 3000224
[article]
3D semantic scene completion: A survey / Luis Roldão in International journal of computer vision, vol 130 n° 8 (August 2022)
[article]
Title: 3D semantic scene completion: A survey
Document type: Article/Communication
Authors: Luis Roldão, Author; Raoul de Charette, Author; Anne Verroust-Blondet, Author
Publication year: 2022
Pages: pp 1978 - 2005
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] lidar data
[IGN terms] kinetic depth effect
[IGN terms] RGB image
[IGN terms] image reconstruction
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] voxel
Abstract: (author) Semantic scene completion (SSC) aims to jointly estimate the complete geometry and semantics of a scene, assuming partial sparse input. In recent years, following the multiplication of large-scale 3D datasets, SSC has gained significant momentum in the research community because it holds unresolved challenges. Specifically, the difficulty of SSC lies in the ambiguous completion of large unobserved areas and the weak supervision signal of the ground truth. This has led to a substantially increasing number of papers on the matter. This survey aims to identify, compare, and analyze the techniques, providing a critical analysis of the SSC literature on both methods and datasets. Throughout the paper, we provide an in-depth analysis of the existing works covering all choices made by the authors while highlighting the remaining avenues of research. The SSC performance of the state of the art on the most popular datasets is also evaluated and analyzed.
Record number: A2022-593
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-021-01504-5
Online publication date: 06/06/2022
Online: http://dx.doi.org/10.1007/s11263-021-01504-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101296
in International journal of computer vision > vol 130 n° 8 (August 2022) . - pp 1978 - 2005
[article]
A continuous change tracker model for remote sensing time series reconstruction / Yangjian Zhang in Remote sensing, vol 14 n° 9 (May-1 2022) Permalink
Uncertainty estimation for stereo matching based on evidential deep learning / Chen Wang in Pattern recognition, vol 124 (April 2022) Permalink
Deep-learning-based multispectral image reconstruction from single natural color RGB image - Enhancing UAV-based phenotyping / Jiangsan Zhao in Remote sensing, vol 14 n° 5 (March-1 2022) Permalink
Fusion de données hyperspectrales et panchromatiques dans le domaine réflectif / Yohann Constans (2022) Permalink
Unsupervised generative models for data analysis and explainable artificial intelligence / Mohanad Abukmeil (2022) Permalink
Semi-automatic reconstruction of object lines using a smartphone’s dual camera / Mohammed Aldelgawy in Photogrammetric record, Vol 36 n° 176 (December 2021) Permalink
GPRInvNet: Deep learning-based ground-penetrating radar data inversion for tunnel linings / Bin Liu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 10 (October 2021) Permalink
Variational bayesian compressive multipolarization indoor radar imaging / Van Ha Tang in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 9 (September 2021) Permalink
Multisensor data fusion for cloud removal in global and all-season Sentinel-2 imagery / Patrick Ebel in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021) Permalink
A stacked dense denoising–segmentation network for undersampled tomograms and knowledge transfer using synthetic tomograms / Dimitrios Bellos in Machine Vision and Applications, vol 32 n° 3 (May 2021) Permalink