Descriptor
IGN terms > imagery > digital image
digital image. Synonym(s): raster-mode image.
Documents available in this category (2121)
Joint inpainting of depth and reflectance with visibility estimation / Marco Bevilacqua in ISPRS Journal of photogrammetry and remote sensing, vol 125 (March 2017)
[article]
Title: Joint inpainting of depth and reflectance with visibility estimation
Document type: Article/Communication
Authors: Marco Bevilacqua; Jean-François Aujol; Pierre Biasutti; Mathieu Brédif; Aurélie Bugeau
Publication year: 2017
Projects: 1-No project
Article pages: pp 16-32
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] depth map
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] color image
[IGN terms] inpainting
[IGN terms] hidden point
[IGN terms] reflectance
[IGN terms] point cloud
[IGN terms] visibility
Abstract: (Author) This paper presents a novel strategy to generate, from 3D lidar measurements, dense depth and reflectance images coherent with given color images. It also estimates for each pixel of the input images a visibility attribute. 3D lidar measurements carry multiple pieces of information, e.g. relative distances to the sensor (from which we can compute depths) and reflectances. When projecting a lidar point cloud onto a reference image plane, we generally obtain sparse images, due to undersampling. Moreover, lidar and image sensor positions typically differ during acquisition; therefore points belonging to objects that are hidden from the image viewpoint might appear in the lidar images. The proposed algorithm estimates the complete depth and reflectance images, while concurrently excluding those hidden points. It consists of solving a joint (depth and reflectance) variational image inpainting problem, with an extra variable, estimated concurrently, that handles the selection of visible points. As regularizers, two coupled total variation terms are included to match, two by two, the depth, reflectance, and color image gradients. We compare our algorithm with other image-guided depth upsampling methods, and show that, when dealing with real data, it produces better inpainted images, by solving the visibility issue.
Record number: A2017-073
Authors' affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.01.005
Online publication date: 17/01/2017
Online: http://dx.doi.org/10.1016/j.isprsjprs.2017.01.005
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=84310
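The guided, TV-regularized inpainting idea summarized in the abstract can be illustrated with a minimal NumPy sketch. Everything here is a deliberate simplification: a single guide image stands in for the coupled depth/reflectance/color terms, the edge-adaptive weight `w`, the smoothed TV energy, and the plain explicit-descent solver are illustrative assumptions, and the visibility estimation is omitted entirely; this is not the paper's actual joint model.

```python
import numpy as np

def grad(u):
    """Forward differences with a Neumann boundary (last difference is zero)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def div(px, py):
    """Approximate negative adjoint of `grad`, used in the TV descent step."""
    dx = px - np.roll(px, 1, axis=1)
    dx[:, 0] = px[:, 0]
    dy = py - np.roll(py, 1, axis=0)
    dy[0, :] = py[0, :]
    return dx + dy

def guided_tv_inpaint(sparse, mask, guide, n_iter=300, step=0.01, eps=0.05):
    """Fill the unobserved pixels of `sparse` (mask == False) by descending a
    smoothed, guide-weighted TV energy; observed lidar samples are re-imposed
    at every iteration as a hard data constraint."""
    gx, gy = grad(guide)
    w = 1.0 / (1.0 + np.hypot(gx, gy))        # smoothing is cheaper where the guide is flat
    u = np.where(mask, sparse, sparse[mask].mean())
    for _ in range(n_iter):
        ux, uy = grad(u)
        mag = np.sqrt(ux**2 + uy**2 + eps)    # smoothed gradient magnitude
        u = u + step * div(w * ux / mag, w * uy / mag)
        u[mask] = sparse[mask]                # keep the measurements fixed
    return u
```

A real solver for this class of energy would use a primal-dual scheme rather than explicit descent; the sketch only shows how the guide's gradients modulate where the depth image is allowed to vary.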
in ISPRS Journal of photogrammetry and remote sensing > vol 125 (March 2017) . - pp 16-32 [article]
Copies (3)
Barcode: 081-2017031 | Call number: RAB | Support: journal | Location: Centre de documentation | Section: En réserve L003 | Availability: available
Barcode: 081-2017033 | Call number: DEP-EXM | Support: journal | Location: LASTIG | Section: Dépôt en unité | Availability: not for loan
Barcode: 081-2017032 | Call number: DEP-EAF | Support: journal | Location: Nancy | Section: Dépôt en unité | Availability: not for loan
Modified residual method for the estimation of noise in hyperspectral images / Asad Mahmood in IEEE Transactions on geoscience and remote sensing, vol 55 n° 3 (March 2017)
[article]
Title: Modified residual method for the estimation of noise in hyperspectral images
Document type: Article/Communication
Authors: Asad Mahmood; Amandine Robin; Michael Sears
Publication year: 2017
Article pages: pp 1451-1460
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] spectral band
[IGN terms] white noise
[IGN terms] correlation
[IGN terms] automatic correlation of homologous points
[IGN terms] noise filtering
[IGN terms] hyperspectral image
[IGN terms] covariance matrix
[IGN terms] residual
Abstract: (Author) Many hyperspectral image processing algorithms (e.g., detection, classification, endmember extraction, and so on) are generally designed with the assumption of no spectral or spatial correlation in noise. However, previous studies have shown the presence of nonnegligible correlation between the noise samples in different spectral bands, especially between noises in adjacent bands, and that most of the well-known intrinsic dimension estimation algorithms give poor estimates in the presence of correlated noise. Thus, there is a need to tackle the specific case of spectrally correlated noise for noise estimation. We show, in this paper, that the commonly employed hyperspectral noise estimation algorithm based on regression residuals can be significantly affected by spectrally correlated noise, and we suggest a modified approach that proves to be robust to noise correlation. Furthermore, the proposed method improves the noise variance estimates in comparison to the classic residual method even for the case of uncorrelated noise. Simulation results show that the estimation error is reduced at times by a factor of 5 when there is high spectral correlation in the noise. Our proposed per-pixel noise estimator requires an estimate of the noise covariance matrix, and for this, we also propose a method to estimate the noise covariance matrix. Simulation results demonstrate that the per-pixel noise estimates obtained via the use of estimated noise statistics are almost as good as those obtained via use of the true statistics.
Record number: A2017-155
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2016.2624505
Online: http://dx.doi.org/10.1109/TGRS.2016.2624505
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=84690
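The classic regression-residual baseline that this paper modifies can be sketched as follows: each band is regressed on all the other bands, and the residual variance is taken as that band's noise variance. The function name and the plain ordinary-least-squares formulation are illustrative assumptions; the paper's contribution (robustness to spectrally correlated noise) is not reproduced here.

```python
import numpy as np

def residual_noise_variance(X):
    """Classic regression-residual noise estimate for a hyperspectral cube
    flattened to X of shape (n_pixels, n_bands). Because the signal is highly
    correlated across bands while white noise is not, the part of a band that
    the other bands cannot predict is attributed to noise."""
    n_pixels, n_bands = X.shape
    Xc = X - X.mean(axis=0)                   # center so no intercept is needed
    var = np.empty(n_bands)
    for i in range(n_bands):
        others = np.delete(Xc, i, axis=1)     # all bands except band i
        beta, *_ = np.linalg.lstsq(others, Xc[:, i], rcond=None)
        residual = Xc[:, i] - others @ beta
        var[i] = residual.var()
    return var
```

On synthetic data built as a low-rank spectral signal plus white noise, the estimates come out close to the true per-band noise variance, which is exactly the regime the modified method improves on when the noise is spectrally correlated.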
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 3 (March 2017) . - pp 1451-1460 [article]
Refining geometry from depth sensors using IR shading images / Gyeongmin Choe in International journal of computer vision, vol 122 n° 1 (March 2017)
[article]
Title: Refining geometry from depth sensors using IR shading images
Document type: Article/Communication
Authors: Gyeongmin Choe; Jaesik Park; Yu-Wing Tai; In So Kweon
Publication year: 2017
Article pages: pp 1-16
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] albedo
[IGN terms] infrared band
[IGN terms] image decomposition
[IGN terms] affine geometry
[IGN terms] optical image
[IGN terms] Kinect
[IGN terms] triangular mesh
[IGN terms] shadow
Abstract: (Author) We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses of the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements.
Record number: A2017-174
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-016-0937-y
Online: https://doi.org/10.1007/s11263-016-0937-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=85921
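The near-light IR shading model described in the abstract (captured intensity as a function of surface normal, albedo, lighting direction, and light-to-surface distance) can be sketched as an inverse-square Lambertian term. The function name and the exact falloff are illustrative assumptions, not the paper's calibrated projector model.

```python
import numpy as np

def ir_intensity(points, normals, albedo, light_pos):
    """Near-light shading sketch: for each surface point, intensity scales
    with the cosine between the unit normal and the direction to the light,
    and falls off with the squared distance to the (near) IR light source."""
    l = light_pos - points                     # (N, 3) vectors toward the light
    d2 = np.sum(l * l, axis=1)                 # squared light-to-surface distance
    l_hat = l / np.sqrt(d2)[:, None]           # unit lighting directions
    ndotl = np.clip(np.sum(normals * l_hat, axis=1), 0.0, None)
    return albedo * ndotl / d2
```

The ambiguity the paper resolves is visible in this expression: scaling the distance and the albedo together leaves the predicted intensity nearly unchanged, which is why an initial Kinect Fusion mesh and multi-view cues are needed to pin the solution down.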
in International journal of computer vision > vol 122 n° 1 (March 2017) . - pp 1-16 [article]
Robust sparse hyperspectral unmixing with ℓ2,1 norm / Yong Ma in IEEE Transactions on geoscience and remote sensing, vol 55 n° 3 (March 2017)
[article]
Title: Robust sparse hyperspectral unmixing with ℓ2,1 norm
Document type: Article/Communication
Authors: Yong Ma; Chang Li; Xiaoguang Mei; et al.
Publication year: 2017
Article pages: pp 1227-1239
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] linear spectral mixture analysis
[IGN terms] hyperspectral image
[IGN terms] sparse matrix
[IGN terms] robust method
[IGN terms] weighting
Abstract: (Author) Sparse unmixing (SU) of hyperspectral data has recently received particular attention for analyzing remote sensing images; it aims at finding the optimal subset of signatures to best model the mixed pixel in the scene. However, most SU methods are based on the commonly admitted linear mixing model, which ignores the possible nonlinear effects (i.e., nonlinearity), and the nonlinearity is merely treated as an outlier. Besides, the traditional SU algorithms often adopt the ℓ2 norm loss function, which makes them sensitive to noise and outliers. In this paper, we propose a robust SU (RSU) method with an ℓ2,1 norm loss function, which is robust to noise and outliers. The RSU can then be solved by the alternating direction method of multipliers (ADMM). Finally, experiments on both synthetic data sets and real hyperspectral images demonstrate that the proposed RSU is efficient for solving the hyperspectral SU problem compared with the state-of-the-art algorithms.
Record number: A2017-150
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2016.2616161
Online: http://dx.doi.org/10.1109/TGRS.2016.2616161
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=84681
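The ℓ2,1 loss at the heart of the RSU model has a closed-form proximal operator (row-wise shrinkage), which is the kind of sub-step an ADMM solver applies at each iteration. This sketch shows only that building block, with hypothetical function names; it is not the full unmixing algorithm.

```python
import numpy as np

def l21_norm(A):
    """ℓ2,1 norm of a matrix: the sum of the ℓ2 norms of its rows. Penalizing
    it drives entire rows (signatures) to zero while leaving the surviving
    rows dense, which is what makes it robust to row-structured outliers."""
    return float(np.sum(np.linalg.norm(A, axis=1)))

def prox_l21(A, t):
    """Proximal operator of t * ||.||_{2,1}: shrink each row toward zero by t
    in ℓ2 norm, zeroing rows whose norm is below t (row-wise soft threshold)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return A * scale
```

A row with ℓ2 norm 5 shrunk with t = 1 keeps its direction but loses 1 unit of norm, while a row with norm below 1 is zeroed outright; this all-or-nothing behavior per row is the structural difference from elementwise ℓ1 shrinkage.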
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 3 (March 2017) . - pp 1227-1239 [article]
Spatial-spectral unsupervised convolutional sparse auto-encoder classifier for hyperspectral imagery / Xiaobing Han in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 3 (March 2017)
[article]
Title: Spatial-spectral unsupervised convolutional sparse auto-encoder classifier for hyperspectral imagery
Document type: Article/Communication
Authors: Xiaobing Han; Yanfei Zhong; Liangpei Zhang
Publication year: 2017
Article pages: pp 195-206
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] non-parametric classifier
[IGN terms] coherence (physics)
[IGN terms] feature extraction
[IGN terms] hyperspectral image
Abstract: (Author) The traditional spatial-spectral classification methods applied to hyperspectral remote sensing imagery are conducted by combining the spatial information vector and the spectral information vector in a separate manner, which may cause information loss and concatenation deficiency between the spatial and spectral information. In addition, the traditional morphological-based spatial-spectral classification methods require the design of handcrafted features according to experience, which is far from automatic and lacks generalization ability. To automatically represent the spatial-spectral features around the central pixel within a spatial neighborhood window, a novel spatial-spectral feature classification method based on the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window strategy is proposed in this study. The UCSAE algorithm features a unique spatial-spectral feature extraction approach which is executed in two stages. The first stage represents the spatial-spectral features within a spatial neighborhood window on the basis of spatial-spectral feature extraction of sub-windows with a sparse auto-encoder (SAE). The second stage exploits the spatial-spectral feature representation with a convolution mechanism for the larger outer windows. The UCSAE algorithm was validated on two widely used hyperspectral imagery datasets (the Pavia University dataset and the Washington DC Mall dataset), obtaining accuracies of 90.03 percent and 96.88 percent, respectively, which are better results than those obtained by the traditional hyperspectral spatial-spectral classification approaches.
Record number: A2017-088
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.14358/PERS.83.3.195
Online: https://doi.org/10.14358/PERS.83.3.195
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=84423
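The window-in-window strategy described above can be illustrated by its first step alone: sliding a small sub-window over the larger neighborhood patch and stacking the flattened sub-windows, which the sparse auto-encoder would then encode. This is a hypothetical sketch; the SAE training and the convolution stage of UCSAE are omitted.

```python
import numpy as np

def extract_subwindows(patch, k):
    """Window-in-window extraction (sketch): `patch` is the outer spatial
    neighborhood of shape (h, w, bands); slide a k x k sub-window over it
    and return the flattened sub-windows stacked as rows."""
    h, w, bands = patch.shape
    subs = [patch[i:i + k, j:j + k, :].ravel()
            for i in range(h - k + 1)
            for j in range(w - k + 1)]
    return np.stack(subs)
```

For a 5 x 5 neighborhood with 3 bands and k = 3, this yields 9 sub-windows of 27 values each, i.e. the per-sub-window spatial-spectral vectors that the first stage feeds to the SAE.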
in Photogrammetric Engineering & Remote Sensing, PERS > vol 83 n° 3 (March 2017) . - pp 195-206 [article]
Unsupervised object-based differencing for land-cover change detection / Jinxia Zhu in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 3 (March 2017)
Adaptive spectral–spatial compression of hyperspectral image with sparse representation / Wei Fu in IEEE Transactions on geoscience and remote sensing, vol 55 n° 2 (February 2017)
Agricultural cropland mapping using black-and-white aerial photography, Object-Based Image Analysis and Random Forests / M.F.A. Vogels in International journal of applied Earth observation and geoinformation, vol 54 (February 2017)
Characterizing vegetation canopy structure using airborne remote sensing data / Debsunder Dutta in IEEE Transactions on geoscience and remote sensing, vol 55 n° 2 (February 2017)
Integrating elevation data and multispectral high-resolution images for an improved hybrid Land Use/Land Cover mapping / Mirco Sturari in European journal of remote sensing, vol 50 n° 1 (2017)
Joint sparse representation and multitask learning for hyperspectral target detection / Yuxiang Zhang in IEEE Transactions on geoscience and remote sensing, vol 55 n° 2 (February 2017)
Multi-objective based spectral unmixing for hyperspectral images / Xia Xu in ISPRS Journal of photogrammetry and remote sensing, vol 124 (February 2017)
A network-based enhanced spectral diversity approach for TOPS time-series analysis / Heresh Fattahi in IEEE Transactions on geoscience and remote sensing, vol 55 n° 2 (February 2017)
Object-based water body extraction model using Sentinel-2 satellite imagery / Gordana Kaplan in European journal of remote sensing, vol 50 n° 1 (2017)
On the fusion of lidar and aerial color imagery to detect urban vegetation and buildings / Madhurima Bandyopadhyay in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 2 (February 2017)