Descriptor
Documents available in this category (1366)
3D hand mesh reconstruction from a monocular RGB image / Hao Peng in The Visual Computer, vol 36 n° 10 - 12 (October 2020)
[article]
Title: 3D hand mesh reconstruction from a monocular RGB image
Document type: Article/Communication
Authors: Hao Peng; Chuhua Xian; Yunbo Zhang
Publication year: 2020
Pages: pp. 2227 - 2239
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] classification by convolutional neural network
[IGN terms] pose estimation
[IGN terms] synthetic image
[IGN terms] RGB image
[IGN terms] mesh
[IGN terms] 3D modelling
[IGN terms] augmented reality
[IGN terms] virtual reality
[IGN terms] 3D reconstruction
[IGN terms] object reconstruction
[IGN terms] monocular vision
Abstract: (author) Most existing methods for 3D hand analysis based on RGB images mainly focus on estimating hand keypoints or poses, which cannot capture the geometric details of the 3D hand shape. In this work, we propose a novel method to reconstruct a 3D hand mesh from a single monocular RGB image. Unlike current parameter-based or pose-based methods, our proposed method directly estimates the 3D hand mesh using a graph convolutional neural network (GCN). Our network consists of two modules: a hand localization and mask generation module, and a 3D hand mesh reconstruction module. The first module, a VGG16-based network, localizes the hand region in the input image and generates a binary mask of the hand. The second module takes the high-order features from the first and uses a GCN-based network to estimate the coordinates of each vertex of the hand mesh, reconstructing the 3D hand shape. To achieve better accuracy, a novel loss based on the differential properties of the discrete mesh is proposed. We also use professional software to create a large synthetic dataset containing both ground-truth 3D hand meshes and poses for training. To handle real-world data, we use the CycleGAN network to transform the data domain of real-world images to that of our synthetic dataset. We demonstrate that our method produces accurate 3D hand meshes and is efficient enough for real-time applications.
Record number: A2020-596
Author affiliation: non IGN
Subject: IMAGERY
Nature: Article
DOI: 10.1007/s00371-020-01908-3
Online publication date: 14/07/2020
Online: https://doi.org/10.1007/s00371-020-01908-3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95936
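The abstract above regresses mesh vertex coordinates with a GCN. As a minimal sketch (not the authors' implementation; the layer form and all names here are illustrative), one symmetrically normalized graph-convolution layer over a mesh adjacency looks like:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W).

    H: (n_vertices, in_dim) vertex features
    A: (n_vertices, n_vertices) mesh adjacency matrix (0/1)
    W: (in_dim, out_dim) learned weights
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
    H_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_new, 0.0)            # ReLU

# Toy mesh: a single triangle (3 mutually connected vertices)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 8))  # input vertex features
W = np.random.default_rng(1).normal(size=(8, 3))  # map to (x, y, z)
coords = gcn_layer(H, A, W)
print(coords.shape)  # (3, 3): one 3D coordinate per vertex
```

In a full reconstruction network, several such layers would be stacked over the fixed hand-mesh topology, with the final layer outputting 3D coordinates per vertex.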
in The Visual Computer > vol 36 n° 10 - 12 (October 2020) . - pp. 2227 - 2239 [article]
Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution / Vitor Martins in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
[article]
Title: Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution
Document type: Article/Communication
Authors: Vitor Martins; Amy L. Kaleita; Brian K. Gelder; et al.
Publication year: 2020
Pages: pp. 56 - 73
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] multiscale data
[IGN terms] environmental heterogeneity
[IGN terms] high-resolution image
[IGN terms] land cover
[IGN terms] object recognition
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] skeletonization
Abstract: (author) Convolutional neural networks (CNN) have been increasingly used for land cover mapping of remotely sensed imagery. However, large-area classification using a traditional CNN is computationally expensive and produces coarse maps with a sliding-window approach. To address this problem, the object-based CNN (OCNN) has become an alternative solution for improving classification performance. However, previous studies mainly focused on urban areas or small scenes, and the OCNN method still needs to be demonstrated for large-area classification over heterogeneous landscapes. Additionally, the massive labeling of segmented objects requires a practical approach to reduce computation, including object analysis and multiple CNNs. This study presents a new multiscale OCNN (multi-OCNN) framework for large-scale land cover classification at 1-m resolution over 145,740 km². Our approach consists of three main steps: (i) image segmentation, (ii) object analysis with a skeleton-based algorithm, and (iii) application of multiple CNNs for final classification. We also developed a large benchmark dataset, called IowaNet, with 1 million labeled images and 10 classes. In our approach, multiscale CNNs were trained to capture the best contextual information during the semantic labeling of objects. Meanwhile, a skeletonization algorithm provided a morphological representation ("medial axis") of each object to support the selection of convolutional locations for CNN predictions. Overall, the proposed multi-OCNN achieved better classification accuracy (overall accuracy ~87.2%) than a traditional patch-based CNN (81.6%) and a fixed-input OCNN (82%). In addition, the results showed that this framework is 8.1 and 111.5 times faster than a traditional pixel-wise CNN16 or CNN256, respectively. Multiple CNNs and object analysis proved essential for accurate and fast classification. While multi-OCNN produced a high level of spatial detail in the land cover product, misclassification was observed for some classes, such as road versus building or shadow versus lake. Despite these minor drawbacks, our results also demonstrated the benefits of the IowaNet training dataset for model performance: overfitting diminishes as the number of samples increases. The limitations of multi-OCNN are partially explained by segmentation quality and the limited number of spectral bands in the aerial data. With the advance of deep learning methods, this study supports the claim that multi-OCNN benefits operational large-scale land cover products at 1-m resolution.
Record number: A2020-634
Author affiliation: non IGN
Subject: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.08.004
Online publication date: 13/08/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.08.004
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96057
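The final labeling step described above assigns each segmented object a class from CNN predictions at selected (medial-axis) locations. A toy sketch of that voting step, under the assumption of precomputed per-pixel predictions and sample points (all names and data here are hypothetical, not the paper's pipeline):

```python
import numpy as np
from collections import Counter

def label_objects(segments, pixel_preds, sample_points):
    """Assign each object the majority class among CNN predictions
    sampled at chosen locations (e.g. along the object's medial axis).

    segments: (H, W) int object ids from image segmentation
    pixel_preds: (H, W) int class predicted by a patch CNN per pixel
    sample_points: {object_id: [(row, col), ...]} sampled locations
    """
    object_class = {}
    for obj_id, points in sample_points.items():
        # Keep only votes that actually fall inside the object
        votes = [pixel_preds[r, c] for r, c in points if segments[r, c] == obj_id]
        object_class[obj_id] = Counter(votes).most_common(1)[0][0]
    return object_class

# Toy example: two objects, slightly noisy per-pixel CNN predictions
segments = np.array([[1, 1, 2], [1, 1, 2], [1, 2, 2]])
preds    = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 1]])
points   = {1: [(0, 0), (1, 0), (1, 1)], 2: [(0, 2), (2, 1), (2, 2)]}
print(label_objects(segments, preds, points))  # {1: 0, 2: 1}
```

Voting over a handful of interior locations, rather than every pixel, is what makes the object-based labeling cheaper than dense sliding-window inference.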
in ISPRS Journal of photogrammetry and remote sensing > vol 168 (October 2020) . - pp. 56 - 73 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020101 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2020103 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2020102 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
fusionImage: An R package for pan‐sharpening images in open source software / Fulgencio Cánovas‐García in Transactions in GIS, Vol 24 n° 5 (October 2020)
[article]
Title: fusionImage: An R package for pan‐sharpening images in open source software
Document type: Article/Communication
Authors: Fulgencio Cánovas‐García; Paúl Pesántez‐Cobos; Francisco Alonso‐Sarría
Publication year: 2020
Pages: pp. 1185 - 1207
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] Gram-Schmidt algorithm
[IGN terms] principal component analysis
[IGN terms] high-pass filter
[IGN terms] open-source software
[IGN terms] pansharpening (image fusion)
[IGN terms] geometric resolving power
[IGN terms] R (language)
Abstract: (author) The objective of this article is to evaluate the performance of three pan‐sharpening algorithms (high-pass filter, principal component analysis, and Gram–Schmidt) in increasing the spatial resolution of five types of multispectral images, and to evaluate the results in terms of color, coherence, and spatial sharpness, both qualitatively and quantitatively. A secondary objective is to present an implementation of these pan‐sharpening techniques within the open-source software R. From a qualitative point of view, pan‐sharpening of images with a high spatial resolution ratio gives better results than for images whose spatial resolution ratio is 2. According to the quantitative evaluation, no pan‐sharpening method obtains optimal results simultaneously for all types of images used. The results of the spectral and spatial ERGAS index vary for four of the five types of images analyzed. The results show that none of the methods implemented in this work can be considered a priori better than the others. At the same time, this work indicates the importance of both qualitative and quantitative assessment.
Record number: A2020-499
Author affiliation: non IGN
Subject: IMAGERY/COMPUTER SCIENCE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1111/tgis.12676
Online publication date: 15/09/2020
Online: https://doi.org/10.1111/tgis.12676
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96206
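Of the three algorithms compared above, the high-pass-filter (HPF) method is the simplest to illustrate: it injects the panchromatic band's high-frequency detail into each upsampled multispectral band. A minimal numpy sketch, assuming the MS bands have already been resampled to the pan grid (the function names and the box filter size are illustrative, not the fusionImage API):

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter with a k x k box kernel (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def hpf_pansharpen(ms_up, pan, weight=1.0):
    """High-pass-filter pan-sharpening: add the pan detail
    (pan minus its low-pass version) to each upsampled MS band.

    ms_up: (bands, H, W) multispectral bands resampled to the pan grid
    pan:   (H, W) panchromatic band
    """
    detail = pan - box_blur(pan)              # high-frequency spatial detail
    return ms_up + weight * detail[None, :, :]

rng = np.random.default_rng(0)
pan = rng.random((8, 8))
ms_up = rng.random((4, 8, 8))                 # 4 bands, upsampled to pan size
fused = hpf_pansharpen(ms_up, pan)
print(fused.shape)  # (4, 8, 8)
```

Because only the high-pass residual is injected, the low-frequency (spectral) content of each MS band is largely preserved, which is why HPF tends to score well on spectral-fidelity indices such as ERGAS.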
in Transactions in GIS > Vol 24 n° 5 (October 2020) . - pp. 1185 - 1207 [article]
Multiview automatic target recognition for infrared imagery using collaborative sparse priors / Xuelu Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
[article]
Title: Multiview automatic target recognition for infrared imagery using collaborative sparse priors
Document type: Article/Communication
Authors: Xuelu Li; Vishal Monga; Abhijit Mahalanobis
Publication year: 2020
Pages: pp. 6776 - 6790
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] parameter tuning
[IGN terms] deep learning
[IGN terms] target detection
[IGN terms] sparse data
[IGN terms] Bayesian estimation
[IGN terms] feature extraction
[IGN terms] low-resolution image
[IGN terms] infrared image
[IGN terms] automatic recognition
Abstract: (author) The low resolution of infrared (IR) images makes feature extraction for classification a challenging task. Learning-based methods are therefore preferred for such raw imagery. In this article, to avoid difficulties in feature extraction, a novel multitask extension of the widely used sparse representation classification (SRC) method is proposed in both single-view and multiview setups; that is, the test sample can be a single IR image or images from different views. In both scenarios, we employ collaborative spike-and-slab priors. This is because traditional sparsity-inducing measures such as the l0 row pseudonorm make it hard to capture the sparse structure of the coefficient matrix when expanded in terms of a training dictionary, whereas these priors are proved able to capture fairly general sparse structures. Furthermore, a joint prior and sparse coefficient estimation method (JPCEM) is proposed for the first time in this article to alleviate the need to hand-pick prior parameters before classification. Experiments are conducted on a synthetic Comanche forward-looking IR (FLIR) automatic target recognition (ATR) database collected by the Army Research Lab and a challenging mid-wave IR (MWIR) ATR database made available by the U.S. Army Night Vision and Electronic Sensors Directorate. The final results substantiate the merits of the proposed JPCEM through comparisons with other state-of-the-art methods, including both SRC-based methods and those built on deep learning frameworks.
Record number: A2020-584
Author affiliation: non IGN
Subject: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2973969
Online publication date: 26/03/2020
Online: https://doi.org/10.1109/TGRS.2020.2973969
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95908
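The SRC idea extended above can be sketched in a few lines: represent the test sample with each class's training dictionary and pick the class with the smallest reconstruction residual. This toy version uses plain least squares in place of a true sparsity-constrained (or spike-and-slab) solver, purely for brevity; all names and data are illustrative:

```python
import numpy as np

def src_classify(x, dictionaries):
    """Sparse-representation-style classification: reconstruct the test
    sample from each class's training dictionary and return the class
    with the smallest residual. (Least squares stands in here for a
    sparsity-constrained solver.)

    x: (d,) test sample
    dictionaries: {class_label: (d, n_atoms) training matrix}
    """
    residuals = {}
    for label, D in dictionaries.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = np.linalg.norm(x - D @ coef)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
# Two classes living in different 2D subspaces of R^6
D0 = rng.normal(size=(6, 2))
D1 = rng.normal(size=(6, 2))
x = D0 @ np.array([0.7, -1.2])            # exactly representable by class 0
print(src_classify(x, {0: D0, 1: D1}))    # 0
```

In the multiview setting of the paper, multiple test views share a coefficient structure, which is where the collaborative priors come in; this single-view residual rule is only the baseline that is being extended.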
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 10 (October 2020) . - pp. 6776 - 6790 [article]
A novel spectral–spatial based adaptive minimum spanning forest for hyperspectral image classification / Jing Lv in Geoinformatica, vol 24 n° 4 (October 2020)
[article]
Title: A novel spectral–spatial based adaptive minimum spanning forest for hyperspectral image classification
Document type: Article/Communication
Authors: Jing Lv; Huimin Zhang; Ming Yang; Wanqi Yang
Publication year: 2020
Pages: pp. 827 - 848
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] minimum spanning tree
[IGN terms] classification by support vector machine
[IGN terms] pixel-wise classification
[IGN terms] hyperspectral image
[IGN terms] image segmentation
Abstract: (author) Classification methods based on a minimum spanning forest (MSF) have yielded impressive results for hyperspectral images. However, previous methods have several drawbacks: marker selection is easily affected by boundary noise pixels, the dissimilarity measures between pixels are inaccurate, and the image segmentation process is not robust, since spatial information has not been used effectively. To this end, in this paper a novel gradient-based marker selection technique, new dissimilarity measures, and an adaptive connection weighting method are proposed, making full use of the spatial information in the hyperspectral image. Concretely, for a given hyperspectral image, a pixel-wise classification is first performed, and the gradient map is generated by a morphology-based algorithm. Secondly, the most reliable pixels are selected as markers from the classification map, and boundary noise pixels are then excluded from the marker map using the gradient map. Thirdly, several new dissimilarity measures are proposed that incorporate gradient or probability information of the pixels. Furthermore, during the growth of the MSF, the connection weighting between pixels is adjusted adaptively to improve the robustness of the MSF algorithm. Finally, when building the final classification map using the majority voting rule, the labels of the training samples dominate the label prediction. Experiments are performed on two hyperspectral image sets, Indian Pines and University of Pavia, with different resolutions and contexts. The proposed approach yields higher classification accuracies than previously proposed classification methods and provides accurate segmentation maps.
Record number: A2020-496
Author affiliation: non IGN
Subject: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1007/s10707-020-00403-0
Online publication date: 11/05/2020
Online: https://doi.org/10.1007/s10707-020-00403-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96117
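The MSF growth step described in the abstract can be pictured as marker-seeded region growing: repeatedly attach the cheapest remaining pixel-to-region edge, Prim-style, until every pixel inherits a marker's label. A toy single-band sketch (absolute intensity difference stands in for the paper's adaptive dissimilarity measures; all names are illustrative):

```python
import heapq
import numpy as np

def grow_msf(image, markers):
    """Grow a minimum spanning forest from marker pixels: repeatedly
    attach the unlabeled pixel reachable by the cheapest edge (here,
    absolute intensity difference) to an already-labeled neighbor.

    image:   (H, W) float intensities (stand-in for spectral features)
    markers: {(row, col): class_label} seed pixels
    """
    H, W = image.shape
    labels = -np.ones((H, W), dtype=int)
    heap = []
    for (r, c), lab in markers.items():
        labels[r, c] = lab
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < H and 0 <= nc < W:
                w = abs(image[r, c] - image[nr, nc])
                heapq.heappush(heap, (w, nr, nc, lab))
    while heap:
        w, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != -1:
            continue                  # already claimed by a cheaper edge
        labels[r, c] = lab
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < H and 0 <= nc < W and labels[nr, nc] == -1:
                heapq.heappush(heap, (abs(image[r, c] - image[nr, nc]), nr, nc, lab))
    return labels

# Toy image: a dark region seeded with label 1, a bright one with label 2
img = np.array([[0.0, 0.1, 0.9], [0.1, 0.2, 1.0], [0.0, 0.8, 0.9]])
out = grow_msf(img, {(0, 0): 1, (2, 2): 2})
print(out)
```

The paper's contributions plug into exactly this loop: better markers (gradient-filtered), better edge weights (gradient/probability-aware dissimilarities), and adaptive reweighting during growth.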
in Geoinformatica > vol 24 n° 4 (October 2020) . - pp. 827 - 848 [article]
Other records in this category:
A spatially explicit surface urban heat island database for the United States: Characterization, uncertainties, and possible applications / T. Chakraborty in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
Hyperspectral unmixing using orthogonal sparse prior-based autoencoder with hyper-laplacian loss and data-driven outlier detection / Zeyang Dou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Local color and morphological image feature based vegetation identification and its application to human environment street view vegetation mapping, or how green is our county? / Istvan G. Lauko in Geo-spatial Information Science, vol 23 n° 3 (September 2020)
Pansharpening: context-based generalized Laplacian pyramids by robust regression / Gemine Vivone in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Post‐filtering with surface orientation constraints for stereo dense image matching / Xu Huang in Photogrammetric record, vol 35 n° 171 (September 2020)
Precise extraction of citrus fruit trees from a Digital Surface Model using a unified strategy: detection, delineation, and clustering / Ali Ozgun Ok in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
Semi-automatic building extraction from WorldView-2 imagery using taguchi optimization / Hasan Tonbul in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
Water level prediction from social media images with a multi-task ranking approach / P. Chaudhary in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
Shoreline extraction from WorldView2 satellite data in the presence of foam pixels using multispectral classification method / Audrey Minghelli in Remote sensing, vol 12 n° 16 (August-2 2020)
Can SPOT-6/7 CNN semantic segmentation improve Sentinel-2 based land cover products? sensor assessment and fusion / Olivier Stocker in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
CNN semantic segmentation to retrieve past land cover out of historical orthoimages and DSM: first experiments / Arnaud Le Bris in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
Correction of systematic radiometric inhomogeneity in scanned aerial campaigns using principal component analysis / Lâmân Lelégard in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
Extraction of built-up areas from Landsat-8 OLI data based on spectral-textural information and feature selection using support vector machine method / Vijendra Singh Bramhe in Geocarto international, vol 35 n° 10 ([01/08/2020])
Extraction of urban built-up areas from nighttime lights using artificial neural network / Tingting Xu in Geocarto international, vol 35 n° 10 ([01/08/2020])
Structure from motion for complex image sets / Mario Michelini in ISPRS Journal of photogrammetry and remote sensing, vol 166 (August 2020)
Classification of hyperspectral and LiDAR data using coupled CNNs / Renlong Hang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
Cross-calibration of MODIS reflective solar bands with Sentinel 2A/2B MSI instruments / Amit Angal in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
Dense stereo matching strategy for oblique images that considers the plane directions in urban areas / Jianchen Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
Improved depth estimation for occlusion scenes using a light-field camera / Changkun Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 7 (July 2020)
Subpixel-pixel-superpixel-based multiview active learning for hyperspectral images classification / Yu Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)