Descriptor
Documents available in this category (2096)
Variational learning of mixture Wishart model for PolSAR image classification / Qian Wu in IEEE Transactions on geoscience and remote sensing, vol 57 n° 1 (January 2019)
[article]
Title: Variational learning of mixture Wishart model for PolSAR image classification
Document type: Article/Communication
Authors: Qian Wu, Author; Biao Hou, Author; Zaidao Wen, Author; Licheng Jiao, Author
Publication year: 2019
Pages: pp 141 - 154
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] classification
[IGN terms] AIRSAR image
[IGN terms] moiré radar image
[IGN terms] Radarsat image
[IGN terms] Wishart distribution
[IGN terms] optimization (mathematics)
[IGN terms] radar polarimetry
Abstract: (Author) The phase difference, amplitude product, and amplitude ratio between two polarizations are important discriminators for terrain classification, which motivates statistical-distribution-based polarimetric synthetic aperture radar (PolSAR) image classification. Traditionally, statistical-distribution-based PolSAR image classification models focus on two aspects: finding a suitable distribution to model a given PolSAR image, and finding a satisfactory solution for the corresponding distribution model from the samples of each terrain class. In practice, the required distribution form is often too complicated to construct, and inaccurate parameter estimation may lead to poor classification performance on PolSAR images. To avoid these problems, a variational approach is adopted for the statistical-distribution-based PolSAR classification method in this paper. First, a mixture Wishart model is built to replace the complicated distribution of the PolSAR image. Second, a learning-based method is proposed, instead of inaccurate point estimation of parameters, to determine the distribution of every class in the mixture Wishart model. Finally, the proposed learning-based mixture Wishart model is cast in variational form to realize a parametric model for PolSAR image classification. The experiments show that the class centers learned by the proposed variational model are easier to distinguish across different terrain types, and that its classification performance on PolSAR images is superior to the original point-estimation Wishart model in both visual quality and accuracy.
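The point-estimation Wishart classifier that the abstract takes as its baseline can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the 2x2 toy coherency matrices and class centers `c0`, `c1` are invented for the example, and the classical Wishart dissimilarity d(T, Σ) = ln|Σ| + tr(Σ⁻¹T) stands in for the variational mixture model the paper actually proposes.

```python
import numpy as np

def wishart_distance(T, sigma):
    """Classical complex-Wishart dissimilarity between a pixel's
    coherency matrix T and a class center sigma:
        d(T, sigma) = ln|sigma| + tr(sigma^{-1} T)."""
    sign, logdet = np.linalg.slogdet(sigma)
    return logdet + np.real(np.trace(np.linalg.solve(sigma, T)))

def classify(T, centers):
    """Assign T to the class whose center minimizes the Wishart distance."""
    return int(np.argmin([wishart_distance(T, c) for c in centers]))

# Toy example: two synthetic 2x2 Hermitian class centers.
c0 = np.array([[2.0, 0.1], [0.1, 1.0]], dtype=complex)
c1 = np.array([[0.5, 0.0], [0.0, 3.0]], dtype=complex)

# A pixel sample close to c0 should receive label 0.
T = np.array([[1.9, 0.05], [0.05, 1.1]], dtype=complex)
print(classify(T, [c0, c1]))  # -> 0
```

The paper's contribution replaces the point-estimated class centers above with distributions learned variationally over a Wishart mixture.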
Record number: A2019-104
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2852633
Online publication date: 16/08/2018
Online: https://doi.org/10.1109/TGRS.2018.2852633
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92410
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 1 (January 2019) . - pp 141 - 154 [article]

Vision-based localization with discriminative features from heterogeneous visual data / Nathan Piasco (2019)
Title: Vision-based localization with discriminative features from heterogeneous visual data
Document type: Thesis/HDR
Authors: Nathan Piasco, Author; Valérie Gouet-Brunet, Thesis supervisor; Cédric Demonceaux, Thesis supervisor
Publisher: Dijon: Université Bourgogne Franche-Comté UBFC
Publication year: 2019
Extent: 174 p.
Format: 21 x 30 cm
General note: Bibliography
Thesis presented to doctoral school no. 37 of the Université de Dijon for the degree of Doctorat in instrumentation and image informatics
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] ICP algorithm
[IGN terms] depth map
[IGN terms] classification by convolutional neural network
[IGN terms] heterogeneous data
[IGN terms] pose estimation
[IGN terms] modulation transfer function
[IGN terms] image-based localization
[IGN terms] vision-based localization
[IGN terms] content-based image retrieval
[IGN terms] monocular vision
Decimal index: THESE Theses and HDRs
Abstract: (Author) Visual-based Localization (VBL) consists of retrieving the location of a visual image within a known space. VBL is involved in several present-day practical applications, such as indoor and outdoor navigation, 3D reconstruction, etc. The main challenge in VBL comes from the fact that the visual input to localize may have been taken at a different time than the reference database. Visual changes may occur in the observed environment during this period of time, especially for outdoor localization. Recent approaches use complementary information, such as geometric or semantic information, to address these visually challenging localization scenarios. However, geometric or semantic information is not always available and can be costly to obtain. To avoid depending on such extra modalities for solving challenging localization scenarios, we propose to use a modality transfer model capable of reproducing the underlying scene geometry from a monocular image. First, we cast the localization problem as a Content-based Image Retrieval (CBIR) problem and train a CNN image descriptor with radiometry-to-dense-geometry transfer as a side training objective. Once trained, our system can be used on monocular images only to construct an expressive descriptor for localization in challenging conditions. Second, we introduce a new relocalization pipeline to improve the localization given by our initial localization step. In the same manner as our global image descriptor, the relocalization is aided by the geometric information learned during an offline stage. The extra geometric information is used to constrain the final pose estimation of the query. Through comprehensive experiments, we demonstrate the effectiveness of our proposals for both indoor and outdoor localization.
Contents: 1. Introduction
1.1 Long-term mapping
1.2 pLaTINUM project
1.3 Visual-based Localization with heterogeneous data
2. Review of Visual-Based Localization methods
2.1 Data Representation
2.2 VBL methods
2.3 Data with Dissimilar Appearances
2.4 Data heterogeneity
2.5 Discussion
2.6 Conclusion
3. Side modality learning for localization
3.1 Related work
3.2 Model architectures and training
3.3 Implementation details
3.4 Long-term localization
3.5 Night to day localization scenarios
3.6 Laser reflectance as side information
3.7 Conclusion
4. Pose refinement with learned depth map
4.1 Method
4.2 Relative pose estimation
4.3 Preliminary results
4.4 Indoor localization
4.5 Unsupervised training and outdoor localization
4.6 Discussion
4.7 Conclusion
5. Conclusion
5.1 Summary of the thesis
5.2 Scientific contributions
5.3 Future Research
A Network architectures
A.1 Global image descriptor network
A.2 Multitask pose refinement network
Record number: 26415
Authors' affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: French thesis
Thesis note: Doctoral thesis: Instrumentation and image informatics: Dijon: 2019
Host organization: LaSTIG (IGN)
nature-HAL: Thèse
DOI: none
Online publication date: 13/11/2020
Online: https://hal.science/tel-03003651/document
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96302

Remote sensing scene classification using multilayer stacked covariance pooling / Nanjun He in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Remote sensing scene classification using multilayer stacked covariance pooling
Document type: Article/Communication
Authors: Nanjun He, Author; Leyuan Fang, Author; Shutao Li, Author; et al.
Publication year: 2018
Pages: pp 6899 - 6910
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] classification by convolutional neural network
[IGN terms] classification by support vector machine
[IGN terms] feature extraction
[IGN terms] covariance matrix
[IGN terms] cartographic representation
[IGN terms] scene
Abstract: (author) This paper proposes a new method, called multilayer stacked covariance pooling (MSCP), for remote sensing scene classification. The innovative contribution of the proposed method is that it is able to naturally combine multilayer feature maps obtained by pretrained convolutional neural network (CNN) models. Specifically, the proposed MSCP-based classification framework consists of the following three steps. First, a pretrained CNN model is used to extract multilayer feature maps. Then, the feature maps are stacked together, and a covariance matrix is calculated for the stacked features. Each entry of the resulting covariance matrix stands for the covariance of two different feature maps, which provides a natural and innovative way to exploit the complementary information provided by feature maps coming from different layers. Finally, the extracted covariance matrices are used as features for classification by a support vector machine. The experimental results, conducted on three challenging data sets, demonstrate that the proposed MSCP method can not only consistently outperform the corresponding single-layer model but also achieve better classification performance than other pretrained CNN-based scene classification methods.
Record number: A2018-552
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2845668
Online publication date: 09/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2845668
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91640
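The covariance-pooling step the abstract describes (stack multilayer feature maps, compute one covariance matrix whose entries couple pairs of maps) can be sketched as follows. This is a minimal illustration, not the authors' implementation: random arrays stand in for pretrained-CNN activations, and the maps are assumed to have already been resized to a common spatial size.

```python
import numpy as np

def stacked_covariance(feature_maps):
    """MSCP-style descriptor (sketch): stack N feature maps of equal
    size H x W into an N x (H*W) matrix and return the N x N covariance
    matrix; entry (i, j) is the covariance of feature maps i and j."""
    X = np.stack([f.ravel() for f in feature_maps])   # N x (H*W)
    X = X - X.mean(axis=1, keepdims=True)             # center each map
    return (X @ X.T) / (X.shape[1] - 1)

# Toy stand-ins for feature maps taken from different CNN layers.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((8, 8)) for _ in range(5)]
C = stacked_covariance(maps)
print(C.shape)  # -> (5, 5)
```

Per the abstract, the resulting covariance matrix would then be used as the feature vector for an SVM classifier.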
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 6899 - 6910 [article]

Robust vehicle detection in aerial images using bag-of-words and orientation aware scanning / Hailing Zhou in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Robust vehicle detection in aerial images using bag-of-words and orientation aware scanning
Document type: Article/Communication
Authors: Hailing Zhou, Author; Lei Wei, Author; Chee Peng Lim, Author; et al.
Publication year: 2018
Pages: pp 7074 - 7085
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image enhancement
[IGN terms] classification by support vector machine
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] UAV-captured image
[IGN terms] robust method
[IGN terms] bag-of-words model
[IGN terms] moving object
[IGN terms] PMVS
[IGN terms] SIFT (algorithm)
[IGN terms] Radon transform
[IGN terms] vehicle
Abstract: (author) This paper presents a novel approach to automatically detect and count cars in different aerial images, which can be satellite or unmanned aerial vehicle (UAV) images. Variations in satellite and/or UAV data make it particularly challenging to have a robust method that works properly on a variety of images. A solution based on the bag-of-words (BoW) model is explored in this paper due to its invariance characteristic and highly stable performance in object/scene categorization. Different from categorization tasks, vehicle detection needs to localize the positions of cars in images. To make BoW suitable for this purpose, we extensively improve the methodology in three aspects, namely, by introducing a recently proposed feature representation, i.e., the local steering kernel descriptor, adding spatial structure constraints, and developing an orientation aware scanning mechanism to produce detection with "one-window-one-car" results. Experiments are conducted on various aerial images with large variations, which consist of data from two public databases, namely, the Overhead Imagery Research Data Set and Vehicle Detection in Aerial Imagery, as well as other satellite and UAV images. The results demonstrate the effectiveness and robustness of the proposed method. Compared with existing techniques, the proposed method is applicable to a wider range of aerial images.
Record number: A2018-555
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2848243
Online publication date: 17/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2848243
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91654
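The core bag-of-words encoding the abstract builds on can be sketched as follows. This is a generic BoW illustration under stated assumptions: plain random vectors stand in for the local steering kernel descriptors the paper uses, and the codebook is given rather than learned; the paper's spatial constraints and orientation-aware scanning are not reproduced here.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Bag-of-words encoding (sketch): assign each local descriptor to
    its nearest visual word (squared Euclidean distance) and return the
    normalized word-frequency histogram for the image window."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                          # nearest word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 32))      # 16 visual words, 32-D
descriptors = rng.standard_normal((200, 32))  # descriptors from one scan window
h = bow_histogram(descriptors, codebook)
print(h.shape, round(h.sum(), 6))  # -> (16,) 1.0
```

In a detection pipeline such a histogram would be computed per scan window and scored by a classifier (an SVM, per the indexing terms above).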
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 7074 - 7085 [article]

Scene classification based on multiscale convolutional neural network / Yanfei Liu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Scene classification based on multiscale convolutional neural network
Document type: Article/Communication
Authors: Yanfei Liu, Author; Yanfei Zhong, Author; Qianqing Qin, Author
Publication year: 2018
Pages: pp 7109 - 7121
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] machine learning
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] multidimensional image
[IGN terms] satellite image
[IGN terms] similarity measure
[IGN terms] object-oriented model
Abstract: (author) With the large amount of high-spatial-resolution images now available, scene classification aimed at obtaining high-level semantic concepts has drawn great attention. Convolutional neural networks (CNNs), which are typical deep learning methods, have been widely studied to automatically learn features from images for scene classification. However, scene classification based on CNNs is still difficult due to the scale variation of the objects in remote sensing imagery. In this paper, a multiscale CNN (MCNN) framework is proposed to solve the problem. In MCNN, a network structure containing dual branches of a fixed-scale net (F-net) and a varied-scale net (V-net) is constructed, and the parameters are shared by the F-net and V-net. The images and their rescaled images are fed into the F-net and V-net, respectively, allowing us to simultaneously train the shared network weights on multiscale images. Furthermore, to ensure that the features extracted from MCNN are scale invariant, a similarity measure layer is added to MCNN, which forces the two feature vectors extracted from the image and its corresponding rescaled image to be as close as possible in the training phase. To demonstrate the effectiveness of the proposed method, we compared the results obtained using three widely used remote sensing data sets: the UC Merced data set, the aerial image data set, and the Google data set of SIRI-WHU. The results confirm that the proposed method performs significantly better than the other state-of-the-art scene classification methods.
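The similarity-measure layer the abstract describes amounts to an extra loss term pulling together the features that the shared-weight F-net and V-net extract from an image and its rescaled copy. A minimal sketch of such a joint objective, with numpy vectors standing in for the two branches' feature vectors: the squared-L2 form of the similarity term and the trade-off weight `lam` are assumptions for illustration, not values from the paper.

```python
import numpy as np

def scale_invariance_loss(feat_fixed, feat_scaled):
    """Similarity-measure term (sketch): squared L2 distance between the
    F-net and V-net embeddings; minimizing it encourages scale-invariant
    features."""
    return float(((feat_fixed - feat_scaled) ** 2).sum())

def total_loss(ce_fixed, ce_scaled, feat_fixed, feat_scaled, lam=0.1):
    """Joint objective (assumed form): both branches' classification
    losses plus the weighted similarity term; `lam` is a hypothetical
    trade-off weight."""
    return ce_fixed + ce_scaled + lam * scale_invariance_loss(feat_fixed, feat_scaled)

# Toy feature vectors for an image and its rescaled copy.
f1 = np.array([0.2, 0.8, -0.1])
f2 = np.array([0.25, 0.75, -0.05])
print(total_loss(0.6, 0.7, f1, f2))
```

Because the two branches share weights, minimizing this combined loss trains a single network on both scales at once, as the abstract describes.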
Record number: A2018-556
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2848473
Online publication date: 26/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2848473
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91660
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 7109 - 7121 [article]

Other records in this category:
A hybrid ensemble learning method for tourist route recommendations based on geo-tagged social networks / Lin Wan in International journal of geographical information science IJGIS, vol 32 n° 11-12 (November - December 2018)
Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images / Fabien Hubert Wagner in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part B (November 2018)
A new deep convolutional neural network for fast hyperspectral image classification / Mercedes Eugenia Paoletti in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
Pan-sharpening via deep metric learning / Yinghui Xing in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification / Wei Han in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
A 3D convolutional neural network method for land cover classification using LiDAR and multi-temporal Landsat imagery / Zewei Xu in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
Estimating forest canopy cover in black locust (Robinia pseudoacacia L.) plantations on the loess plateau using random forest / Qingxia Zhao in Forests, vol 9 n° 10 (October 2018)
Estimation of forest above-ground biomass by geographically weighted regression and machine learning with Sentinel imagery / Lin Chen in Forests, vol 9 n° 10 (October 2018)
Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier / Huanxue Zhang in Geocarto international, vol 33 n° 10 (October 2018)
Predicting tree diameter distributions from airborne laser scanning, SPOT 5 satellite, and field sample data in the Perm region, Russia / Jussi Peuhkurinen in Forests, vol 9 n° 10 (October 2018)