Descriptor
Documents available in this category (1689)
Hybrid image noise reduction algorithm based on genetic ant colony and PCNN / Chong Shen in The Visual Computer, vol 33 n° 11 (November 2017)
[article]
Title: Hybrid image noise reduction algorithm based on genetic ant colony and PCNN
Document type: Article/Communication
Authors: Chong Shen, Author; Ding Wang, Author; Shuming Tang, Author; Huiliang Cao, Author; Jun Liu, Author
Publication year: 2017
Pages: pp 1373 - 1384
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] genetic algorithm
[IGN terms] noise filtering
[IGN terms] ant colony optimization
[IGN terms] artificial neural network
Abstract: (Author) The Pulse Coupled Neural Network (PCNN) has gained widespread attention as a nonlinear filtering technology that reduces noise while preserving image details, but determining proper parameters for the PCNN is a major challenge. In this paper, a method that optimizes the parameters of the PCNN by combining the genetic algorithm (GA) and the ant colony algorithm is proposed, named GACA; the optimized procedure is named GACA-PCNN. First, the noisy image is filtered by a median filter; then, the noisy image is repeatedly filtered by GACA-PCNN, with the median-filtered image used as a reference; finally, a set of PCNN parameters is automatically estimated by GACA, and an effective denoised image is obtained. Experimental results indicate that GACA-PCNN achieves a better PSNR (peak signal-to-noise ratio) and preserves details better than previous denoising techniques.
Record number: A2017-712
Author affiliation: non-IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1007/s00371-016-1325-x
Online: https://doi.org/10.1007/s00371-016-1325-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88093
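The pipeline the abstract describes (median-filtered reference image, PCNN denoising pass, automatic parameter estimation) can be sketched roughly as follows. This is not the paper's implementation: the PCNN below is heavily simplified, plain random search stands in for the GA/ant-colony hybrid (GACA), and the function names and parameter ranges are all illustrative assumptions.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge-padded); serves as the reference image."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def pcnn_denoise(img, beta, alpha_t, v_t, iters=5):
    """Heavily simplified PCNN pass: neurons pulse when internal activity
    exceeds a decaying threshold; pulsing (likely noisy) pixels are
    replaced by their local median."""
    f = img.astype(float).copy()
    theta = np.full(img.shape, v_t)   # dynamic thresholds
    y = np.zeros(img.shape)           # pulse outputs
    for _ in range(iters):
        p = np.pad(y, 1)
        # linking input: mean of the 4-neighborhood of previous pulses
        link = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        u = f * (1.0 + beta * link)                  # internal activity
        y = (u > theta).astype(float)                # pulses
        theta = theta * np.exp(-alpha_t) + v_t * y   # decay + recharge
        f = np.where(y > 0, median3x3(f), f)
    return f

def estimate_params(noisy, ref, trials=30, seed=0):
    """Random search over (beta, alpha_t, v_t), scoring each candidate by
    its distance to the median-filtered reference image."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(trials):
        cand = (rng.uniform(0.1, 1.0),   # beta: linking strength
                rng.uniform(0.1, 1.0),   # alpha_t: threshold decay rate
                rng.uniform(0.5, 2.0))   # v_t: threshold recharge magnitude
        err = np.mean((pcnn_denoise(noisy, *cand) - ref) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best
```

The key idea carried over from the abstract is that the median-filtered image acts only as a cheap reference for scoring parameter candidates, not as the final output.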
Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines / Jinshan Cao in ISPRS Journal of photogrammetry and remote sensing, vol 133 (November 2017)
[article]
Title: Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines
Document type: Article/Communication
Authors: Jinshan Cao, Author; Jianhong Fu, Author; Xiuxiao Yuan, Author; Jianya Gong, Author
Publication year: 2017
Pages: pp 174 - 185
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] nonlinear compensation
[IGN terms] geometric correction
[IGN terms] systematic error
[IGN terms] ZiYuan-3 image
[IGN terms] rational function model
[IGN terms] sensor orientation
[IGN terms] ground control point
[IGN terms] residual
[IGN terms] cubic spline
[IGN terms] affine transformation
Abstract: (Author) Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of these oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often carry nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation, so the sensor orientation accuracy is worse than expected. To eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated from a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction, and another cubic spline is used to model the residual errors in the column direction. The estimated cubic splines are then used to compensate the nonlinear biases in the RPCs; finally, the affine transformation parameters are used to compensate the remaining biases. Three ZiYuan-3 images were tested. The experimental results showed that, before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased; even when the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated.
The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981, 0.890, and 1.093 pixels, respectively 42.1%, 48.3%, and 54.8% better than those achieved before the nonlinear bias compensation.
Record number: A2017-725
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.10.007
Online: https://doi.org/10.1016/j.isprsjprs.2017.10.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88410
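The core compensation step the abstract describes (fit one smooth curve to the row-direction residuals of the virtual control points and one to the column-direction residuals, then subtract the fitted bias) can be sketched as below. A single cubic polynomial per direction stands in for the paper's cubic splines, and the synthetic "oscillation" residuals are purely illustrative.

```python
import numpy as np

def compensate_bias(rows, d_row, d_col, degree=3):
    """Fit a smooth bias curve to the residuals in each image direction,
    evaluated as a function of image row, and return the compensated
    residuals. A cubic polynomial replaces the paper's cubic splines."""
    c_row = np.polyfit(rows, d_row, degree)   # row-direction bias model
    c_col = np.polyfit(rows, d_col, degree)   # column-direction bias model
    return (d_row - np.polyval(c_row, rows),
            d_col - np.polyval(c_col, rows))
```

After this nonlinear step, the small residual biases that remain would be absorbed by the usual affine bias-compensation parameters.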
Copies (3)
Barcode | Call number | Support | Location | Section | Availability
081-2017111 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2017112 | DEP-EAF | Journal | Nancy | On deposit in unit | Not for loan
081-2017113 | DEP-EXM | Journal | Saint-Mandé | On deposit in unit | Not for loan
Salient object detection in complex scenes via D-S evidence theory based region classification / Chunlei Yang in The Visual Computer, vol 33 n° 11 (November 2017)
[article]
Title: Salient object detection in complex scenes via D-S evidence theory based region classification
Document type: Article/Communication
Authors: Chunlei Yang, Author; Jiexin Pu, Author; Yongsheng Dong, Author; et al., Author
Publication year: 2017
Pages: pp 1415 - 1428
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] data fusion
[IGN terms] complex information
[IGN terms] indoor scene
[IGN terms] image segmentation
[IGN terms] Dempster-Shafer theory
[IGN terms] 3D salient region
Abstract: (Author) In complex scenes, multiple objects are often concealed in cluttered backgrounds. Their saliency is difficult to detect with conventional methods, mainly because color contrast alone cannot serve as a saliency measure; other image features must be involved to obtain more accurate results. Using Dempster-Shafer (D-S) evidence theory based region classification, a novel method is presented in this paper. In the proposed framework, depth feature information extracted from a coarse map is used to generate initial feature evidences that indicate the probabilities of regions belonging to foreground or background. Based on D-S evidence theory, both uncertainty and imprecision are modeled, and conflicts between different feature evidences are properly resolved. Moreover, the method automatically determines the mass functions of the two-stage evidence fusion for region classification. According to the classification result and region relevance, a more precise saliency map is then generated by manifold ranking. To further improve the detection results, a guided filter is used to optimize the saliency map. Both qualitative and quantitative evaluations on three publicly available, challenging benchmark datasets demonstrate that the proposed method outperforms state-of-the-art methods, especially for detection in complex scenes.
Record number: A2017-713
Author affiliation: non-IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1007/s00371-016-1288-y
Online: https://doi.org/10.1007/s00371-016-1288-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88094
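The evidence-fusion step the abstract relies on is Dempster's rule of combination. On the two-class frame {foreground, background} it reduces to a few lines; the sketch below shows only this generic rule, not the paper's mass-function construction or its two-stage fusion scheme, and the mass triples in the example are made up.

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination on the frame {foreground, background}.
    Each mass assignment is a triple (m_F, m_B, m_U), where m_U is the mass
    on the whole frame, i.e. the uncertainty."""
    f1, b1, u1 = m1
    f2, b2, u2 = m2
    k = f1 * b2 + b1 * f2          # conflicting mass (F vs B)
    norm = 1.0 - k                 # renormalization; assumes k < 1
    f = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    b = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    u = (u1 * u2) / norm
    return f, b, u
```

When two feature evidences both lean toward foreground, the combined foreground mass exceeds either input, which is exactly the reinforcement behavior the method exploits for region classification.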
Sparse Bayesian learning-based time-variant deconvolution / Sanyi Yuan in IEEE Transactions on geoscience and remote sensing, vol 55 n° 11 (November 2017)
[article]
Title: Sparse Bayesian learning-based time-variant deconvolution
Document type: Article/Communication
Authors: Sanyi Yuan, Author; Shangxu Wang, Author; Ming Ma, Author; et al., Author
Publication year: 2017
Pages: pp 6182 - 6194
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] contrast enhancement
[IGN terms] machine learning
[IGN terms] deconvolution
Abstract: (Author) In seismic exploration, the wavelet-filtering effect and the Q-filtering effect (amplitude attenuation and velocity dispersion) blur the reflection image of subsurface layers. Both effects should therefore be reduced to retrieve a high-quality subsurface image, which is significant for fine reservoir interpretation. We derive a nonlinear time-variant convolution model to sparsely represent nonstationary seismograms in the time domain involving these two effects, and we present a time-variant deconvolution (TVD) method based on sparse Bayesian learning (SBL) to solve the model and obtain a high-quality reflectivity image. The SBL-based TVD obtains an optimum posterior mean of the reflectivity image, regarded as the inverted reflectivity result, by iteratively solving a Bayesian maximum a posteriori problem and a type-II maximum likelihood. Because a hierarchical Gaussian prior for reflectivity, controlled by model-dependent hyper-parameters, is adopted to approximate the fact that reflectivity is sparse, SBL-based TVD can retrieve a sparse reflectivity image through the principled sequential addition and deletion of Q-dependent time-variant wavelets. In general, strong reflectors are acquired relatively early, whereas weak and deep reflectors are imaged later. The method avoids false artifacts, represented by sequential positive or negative reflectivity spikes with short two-way travel time, which typically occur in stationary deconvolution outcomes. Synthetic, laboratory, and field data examples demonstrate the effectiveness of the method and illustrate its advantages over SBL-based stationary deconvolution and over TVD using an l2-norm or l1-norm regularization. The results show that SBL-based TVD is a potentially effective, stable, and high-quality imaging tool.
Record number: A2017-745
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2722223
Online: https://doi.org/10.1109/TGRS.2017.2722223
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88779
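The time-variant convolution model the abstract describes can be illustrated with a small sketch. This is not the paper's SBL solver: plain iterative soft-thresholding (ISTA) with an l1 penalty stands in for sparse Bayesian learning, a Ricker wavelet whose peak frequency decays with travel time crudely imitates Q filtering, and every parameter value (`f0`, `dt`, `q`, `lam`) is an illustrative assumption.

```python
import numpy as np

def tv_dictionary(n, f0=30.0, dt=0.002, q=80.0):
    """Build an n x n time-variant convolution matrix: column k is a Ricker
    wavelet centered at sample k whose dominant frequency decays with
    travel time -- a crude stand-in for the Q-filtering effect."""
    t = np.arange(n) * dt
    D = np.zeros((n, n))
    for k in range(n):
        fk = f0 * np.exp(-np.pi * f0 * t[k] / q)   # attenuated peak frequency
        a = (np.pi * fk * (t - t[k])) ** 2
        D[:, k] = (1.0 - 2.0 * a) * np.exp(-a)     # Ricker wavelet
    return D

def ista_deconvolve(D, d, lam=0.05, iters=300):
    """Iterative soft-thresholding for the l1-regularized model
    d = D r + noise; a simple sparse solver used here in place of SBL."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    r = np.zeros(D.shape[1])
    for _ in range(iters):
        g = r + D.T @ (d - D @ r) / L              # gradient step on the data fit
        r = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft shrinkage
    return r
```

Because each column carries its own (attenuated) wavelet, deconvolving against this dictionary removes the wavelet- and Q-filtering effects jointly, which is the point of the time-variant formulation.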
Tubelets: Unsupervised action proposals from spatiotemporal super-voxels / Mihir Jain in International journal of computer vision, vol 124 n° 3 (15 September 2017)
[article]
Title: Tubelets: Unsupervised action proposals from spatiotemporal super-voxels
Document type: Article/Communication
Authors: Mihir Jain, Author; Jan van Gemert, Author; Hervé Jégou, Author; Patrick Bouthemy, Author; Cees G. M. Snoek, Author
Publication year: 2017
Pages: pp 287 - 311
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] spatiotemporal data
[IGN terms] gesture recognition
[IGN terms] minimum bounding rectangle
[IGN terms] image sequence
[IGN terms] voxel
Abstract: (Author) This paper considers the problem of localizing actions in videos as sequences of bounding boxes. The objective is to generate action proposals that are likely to include the action of interest, ideally achieving high recall with few proposals. Our contributions are threefold. First, inspired by selective search for object proposals, we introduce an approach to generate action proposals from spatiotemporal super-voxels in an unsupervised manner; we call them Tubelets. Second, along with static features from individual frames, our approach advantageously exploits motion. We introduce independent motion evidence as a feature to characterize how the action deviates from the background, and we explicitly incorporate such motion information in various stages of the proposal generation. Finally, we introduce spatiotemporal refinement of Tubelets for more precise localization of actions, and pruning to keep the number of Tubelets limited. We demonstrate the suitability of our approach by extensive experiments on action proposal quality and action localization on three public datasets: UCF Sports, MSR-II, and UCF101. For action proposal quality, our unsupervised proposals beat all other existing approaches on the three datasets. For action localization, we show top performance on the trimmed videos of UCF Sports and UCF101 as well as on the untrimmed videos of MSR-II.
Record number: A2017-812
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s11263-017-1023-9
Online: https://doi.org/10.1007/s11263-017-1023-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89252
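The abstract's central representation, a Tubelet as a sequence of per-frame bounding boxes derived from a spatiotemporal super-voxel, can be sketched as below. The grouping here is just a mask union; the paper's super-voxel segmentation, motion evidence, refinement, and pruning are not reproduced, and both function names are illustrative.

```python
import numpy as np

def tubelet_boxes(mask):
    """Convert a spatiotemporal super-voxel mask of shape (T, H, W) into a
    tubelet: one (ymin, xmin, ymax, xmax) box per frame, or None for frames
    where the super-voxel is absent."""
    boxes = []
    for frame in mask:
        ys, xs = np.nonzero(frame)
        if ys.size == 0:
            boxes.append(None)
        else:
            boxes.append((int(ys.min()), int(xs.min()),
                          int(ys.max()), int(xs.max())))
    return boxes

def merge_voxels(a, b):
    """One hierarchical grouping step in the spirit of selective search:
    the union of two super-voxel masks yields a coarser proposal."""
    return a | b
```

Repeatedly merging similar super-voxels and reading off their box sequences is what produces the pool of unsupervised action proposals.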
Local Moebius transformations applied to omnidirectional images / Leonardo Souto Ferreira in Computers and graphics, vol 68 (November 2017)
Describing contrast across scales / Sohaib Ali Syed in ISPRS Journal of photogrammetry and remote sensing, vol 128 (June 2017)
Geometric features and their relevance for 3D point cloud classification / Martin Weinmann in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-1/W1 (May 2017)
Amélioration de la vitesse et de la qualité d'image du rendu basé image / Rodrigo Ortiz Cayón (2017)
Fully automatic analysis of archival aerial images : Current status and challenges / Sébastien Giordano (2017)
Modeling spatial and temporal variabilities in hyperspectral image unmixing / Pierre-Antoine Thouvenin (2017)