Descriptor
Documents available in this category (1547)
Utilisation de données Sentinel-2 et SPOT 6/7 pour la classification de l’occupation du sol / Olivier Stocker (2019)
Title: Utilisation de données Sentinel-2 et SPOT 6/7 pour la classification de l'occupation du sol
Document type: Mémoire
Authors: Olivier Stocker, Author; Arnaud Le Bris, Supervisor
Publisher: Champs-sur-Marne: Ecole nationale des sciences géographiques ENSG
Publication year: 2019
Extent: 70 p.
General note: bibliography; internship report, Mastère spécialisé Photogrammétrie, Positionnement, Mesure de Déformations
Languages: French (fre)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] carte d'occupation du sol
[Termes IGN] fusion d'images
[Termes IGN] image Sentinel-MSI
[Termes IGN] image SPOT 6
[Termes IGN] image SPOT 7
[Termes IGN] segmentation sémantique
Abstract: (author) This study addresses the development of a fully convolutional architecture suited to processing the spatial information provided by the very high resolution of the SPOT 6 and 7 sensors. The architecture proved more accurate than sliding-window approaches at detecting topographic objects, even in dense areas. In parallel, this work shows that adding constraints yields better object delineation, and that the fineness of the ground truth plays a large role in that delineation capability. The new architecture also made it possible to generate, from existing products, land-cover maps of promising quality. The nomenclature richness levels that were evaluated highlighted consistent semantic segmentation across nomenclatures. Finally, this work served as a preliminary study of late and early fusion of SPOT 6/7 and Sentinel-2 data, with the aim of adding a spectral dimension to the already effective spatial richness. The constraints tied to a fully convolutional implementation of the fusion, and the modifications to apply to our architecture, are listed.
Contents: Introduction
1- Land-cover classification
2- Data and processing
3- Algorithms
4- Fully convolutional semantic segmentation
5- Fusion-based segmentation
Conclusion
Record number: 17344
Author affiliation: non IGN
Topic: IMAGERIE/INFORMATIQUE
Type: Mémoire PPMD
Internship host: LaSTIG (IGN)
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98316
Digital documents
can be downloaded
Utilisation de données Sentinel-2 et SPOT 6/7 ... - author pdf (Adobe Acrobat PDF)
Vision-based localization with discriminative features from heterogeneous visual data / Nathan Piasco (2019)
Title: Vision-based localization with discriminative features from heterogeneous visual data
Document type: Thèse/HDR
Authors: Nathan Piasco, Author; Valérie Gouet-Brunet, Thesis supervisor; Cédric Demonceaux, Thesis supervisor
Publisher: Dijon: Université Bourgogne Franche-Comté UBFC
Publication year: 2019
Extent: 174 p.
Format: 21 x 30 cm
General note: bibliography; thesis presented at doctoral school n° 37 of the Université de Dijon for the Doctorat en instrumentation et informatique de l'image
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme ICP
[Termes IGN] carte de profondeur
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données hétérogènes
[Termes IGN] estimation de pose
[Termes IGN] fonction de transfert de modulation
[Termes IGN] localisation basée image
[Termes IGN] localisation basée vision
[Termes IGN] recherche d'image basée sur le contenu
[Termes IGN] vision monoculaire
Call number: THESE Theses and HDR
Abstract: (Author) Visual-based Localization (VBL) consists in retrieving the location of a visual image within a known space. VBL is involved in several present-day practical applications, such as indoor and outdoor navigation, 3D reconstruction, etc. The main challenge in VBL comes from the fact that the visual input to localize may have been taken at a different time than the reference database. Visual changes may occur in the observed environment during this period of time, especially for outdoor localization. Recent approaches use complementary information, such as geometric or semantic information, to address these visually challenging localization scenarios. However, geometric or semantic information is not always available, or can be costly to obtain. To avoid relying on extra modalities when solving challenging localization scenarios, we propose a modality transfer model capable of reproducing the underlying scene geometry from a monocular image. First, we cast the localization problem as a Content-based Image Retrieval (CBIR) problem and train a CNN image descriptor with radiometry-to-dense-geometry transfer as a side training objective. Once trained, our system can be used on monocular images alone to construct an expressive descriptor for localization in challenging conditions. Second, we introduce a new relocalization pipeline that improves the localization given by our initial localization step. As with our global image descriptor, the relocalization is aided by the geometric information learned during an offline stage. The extra geometric information is used to constrain the final pose estimation of the query. Through comprehensive experiments, we demonstrate the effectiveness of our proposals for both indoor and outdoor localization.
Contents: 1. Introduction
1.1 Long-term mapping
1.2 pLaTINUM project
1.3 Visual-based Localization with heterogeneous data
2. Review of Visual-Based Localization methods
2.1 Data Representation
2.2 VBL methods
2.3 Data with Dissimilar Appearances
2.4 Data heterogeneity
2.5 Discussion
2.6 Conclusion
3. Side modality learning for localization
3.1 Related work
3.2 Model architectures and training
3.3 Implementation details
3.4 Long-term localization
3.5 Night to day localization scenarios
3.6 Laser reflectance as side information
3.7 Conclusion
4. Pose refinement with learned depth map
4.1 Method
4.2 Relative pose estimation
4.3 Preliminary results
4.4 Indoor localization
4.5 Unsupervised training and outdoor localization
4.6 Discussion
4.7 Conclusion
5. Conclusion
5.1 Summary of the thesis
5.2 Scientific contributions
5.3 Future Research
A Network architectures
A.1 Global image descriptor network
A.2 Multitask pose refinement network
Record number: 26415
Author affiliation: LASTIG MATIS (2012-2019)
Topic: IMAGERIE
Type: French thesis
Thesis note: Thèse de Doctorat: Instrumentation et informatique de l'image: Dijon: 2019
Internship host: LaSTIG (IGN)
nature-HAL: Thèse
DOI: none
Online publication date: 13/11/2020
Online: https://hal.science/tel-03003651/document
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96302
Remote sensing scene classification using multilayer stacked covariance pooling / Nanjun He in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Remote sensing scene classification using multilayer stacked covariance pooling
Document type: Article/Communication
Authors: Nanjun He, Author; Leyuan Fang, Author; Shutao Li, Author; et al.
Publication year: 2018
Pages: pp 6899 - 6910
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] matrice de covariance
[Termes IGN] représentation cartographique
[Termes IGN] scène
Abstract: (author) This paper proposes a new method, called multilayer stacked covariance pooling (MSCP), for remote sensing scene classification. The innovative contribution of the proposed method is that it is able to naturally combine multilayer feature maps, obtained by pretrained convolutional neural network (CNN) models. Specifically, the proposed MSCP-based classification framework consists of the following three steps. First, a pretrained CNN model is used to extract multilayer feature maps. Then, the feature maps are stacked together, and a covariance matrix is calculated for the stacked features. Each entry of the resulting covariance matrix stands for the covariance of two different feature maps, which provides a natural and innovative way to exploit the complementary information provided by feature maps coming from different layers. Finally, the extracted covariance matrices are used as features for classification by a support vector machine. The experimental results, conducted on three challenging data sets, demonstrate that the proposed MSCP method can not only consistently outperform the corresponding single-layer model but also achieve better classification performance than other pretrained CNN-based scene classification methods.
Record number: A2018-552
Author affiliation: non IGN
Topic: IMAGERIE
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2845668
Online publication date: 09/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2845668
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91640
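The covariance-pooling step described in the abstract can be illustrated with a minimal numpy sketch. This is not code from the paper: the function name and toy shapes are hypothetical, and the final SVM stage is only noted in a comment.

```python
import numpy as np

def stacked_covariance_pooling(feature_maps):
    """Covariance pooling over a list of same-size 2-D feature maps.

    feature_maps: list of (H, W) arrays, e.g. channels taken from several
    CNN layers (resized to a common H x W beforehand).
    Returns the (C, C) covariance matrix whose entry (i, j) is the
    covariance between feature maps i and j over all pixel positions.
    """
    X = np.stack([fm.ravel() for fm in feature_maps], axis=1)  # (H*W, C)
    X = X - X.mean(axis=0, keepdims=True)                      # centre each map
    return X.T @ X / (X.shape[0] - 1)                          # (C, C)

rng = np.random.default_rng(0)
maps = [rng.standard_normal((8, 8)) for _ in range(5)]  # 5 fake feature maps
cov = stacked_covariance_pooling(maps)
# In the full MSCP pipeline, the upper triangle (including the diagonal)
# would then be vectorised and fed to an SVM classifier.
feat = cov[np.triu_indices(5)]
print(cov.shape, feat.shape)  # (5, 5) (15,)
```

Each off-diagonal entry of `cov` captures how two feature maps co-vary across pixel positions, which is the cross-layer complementarity the abstract refers to.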
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 6899 - 6910 [article]
Robust vehicle detection in aerial images using bag-of-words and orientation aware scanning / Hailing Zhou in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
[article]
Title: Robust vehicle detection in aerial images using bag-of-words and orientation aware scanning
Document type: Article/Communication
Authors: Hailing Zhou, Author; Lei Wei, Author; Chee Peng Lim, Author; et al.
Publication year: 2018
Pages: pp 7074 - 7085
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] accentuation d'image
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] détection d'objet
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image captée par drone
[Termes IGN] méthode robuste
[Termes IGN] modèle sac-de-mots
[Termes IGN] objet mobile
[Termes IGN] PMVS
[Termes IGN] SIFT (algorithme)
[Termes IGN] transformation de Radon
[Termes IGN] véhicule
Abstract: (author) This paper presents a novel approach to automatically detect and count cars in different aerial images, which can be satellite or unmanned aerial vehicle (UAV) images. Variations in satellite and/or UAV data make it particularly challenging to build a robust method that works properly on a variety of images. A solution based on the bag-of-words (BoW) model is explored in this paper, due to its invariance characteristics and highly stable performance in object/scene categorization. Different from categorization tasks, vehicle detection needs to localize the positions of cars in images. To make BoW suitable for this purpose, we extensively improve the methodology in three aspects: by introducing a recently proposed feature representation, the local steering kernel descriptor; by adding spatial structure constraints; and by developing an orientation aware scanning mechanism to produce "one-window-one-car" detections. Experiments are conducted on various aerial images with large variations, comprising data from two public databases, namely the Overhead Imagery Research Data Set and Vehicle Detection in Aerial Imagery, as well as other satellite and UAV images. The results demonstrate the effectiveness and robustness of the proposed method. Compared with existing techniques, the proposed method is applicable to a wider range of aerial images.
Record number: A2018-555
Author affiliation: non IGN
Topic: IMAGERIE
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2848243
Online publication date: 17/07/2018
Online: http://dx.doi.org/10.1109/TGRS.2018.2848243
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91654
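The bag-of-words representation the paper builds on can be sketched in a few lines of numpy. This is a generic illustration of BoW quantisation, not the paper's local steering kernel pipeline; the function name and toy data are hypothetical.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantise local descriptors against a visual codebook and return a
    normalised bag-of-words histogram.

    descriptors: (N, D) local features extracted from one image window
    codebook:    (K, D) visual words (e.g. k-means centroids)
    """
    # Squared distance between every descriptor and every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                                  # hard assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                                   # L1-normalised

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 32))      # 16 visual words, 32-D each
window_desc = rng.standard_normal((50, 32))   # 50 descriptors in one window
h = bow_histogram(window_desc, codebook)
print(h.shape)  # (16,)
```

In a detection setting like the paper's, each scanned window would be scored by comparing its histogram `h` to a learned vehicle model, rather than classifying the whole image.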
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 12 (December 2018) . - pp 7074 - 7085 [article]
Super-resolution of Sentinel-2 images : Learning a globally applicable deep neural network / Charis Lanaras in ISPRS Journal of photogrammetry and remote sensing, vol 146 (December 2018)
[article]
Title: Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network
Document type: Article/Communication
Authors: Charis Lanaras, Author; José Bioucas-Dias, Author; Silvano Galliani, Author; Emmanuel P. Baltsavias, Author; Konrad Schindler, Author
Publication year: 2018
Pages: pp 305 - 319
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] bande spectrale
[Termes IGN] échantillonnage de données
[Termes IGN] erreur moyenne quadratique
[Termes IGN] image à basse résolution
[Termes IGN] image Sentinel-MSI
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] pas d'échantillonnage au sol
[Termes IGN] pouvoir de résolution spectrale
[Termes IGN] réseau neuronal convolutif
Abstract: (Author) The Sentinel-2 satellite mission delivers multi-spectral imagery with 13 spectral bands, acquired at three different spatial resolutions. The aim of this research is to super-resolve the lower-resolution (20 m and 60 m Ground Sampling Distance, GSD) bands to 10 m GSD, so as to obtain a complete data cube at the maximal sensor resolution. We employ a state-of-the-art convolutional neural network (CNN) to perform end-to-end upsampling, which is trained with data at lower resolution, i.e., from 40 m to 20 m and from 360 m to 60 m GSD, respectively. In this way, one has access to a virtually infinite amount of training data, by downsampling real Sentinel-2 images. We use data sampled globally over a wide range of geographical locations, to obtain a network that generalises across different climate zones and land-cover types, and can super-resolve arbitrary Sentinel-2 images without the need of retraining. In quantitative evaluations (at lower scale, where ground truth is available), our network, which we call DSen2, outperforms the best competing approach by almost 50% in RMSE, while better preserving the spectral characteristics. It also delivers visually convincing results at the full 10 m GSD.
Record number: A2018-540
Author affiliation: non IGN
Topic: IMAGERIE
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.09.018
Online publication date: 21/10/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.09.018
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91554
in ISPRS Journal of photogrammetry and remote sensing > vol 146 (December 2018) . - pp 305 - 319 [article]
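The training-data trick described in the abstract, downsampling real imagery so that the original band serves as ground truth, can be sketched as follows. This is an illustration under simplifying assumptions (plain block averaging instead of the paper's sensor-aware degradation model), and the function names are hypothetical.

```python
import numpy as np

def downsample(band, factor):
    """Block-average downsampling by an integer factor, a simple stand-in
    for the low-pass filtering used to simulate a coarser GSD."""
    h, w = band.shape
    h2, w2 = h // factor, w // factor
    blocks = band[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))

def make_training_pair(band_20m, factor=2):
    """Return (network input at simulated 40 m GSD, target at 20 m GSD).
    After training on such pairs, the network is applied one scale up,
    i.e. to the real 20 m bands, to predict 10 m output."""
    return downsample(band_20m, factor), band_20m

rng = np.random.default_rng(2)
band = rng.standard_normal((64, 64))   # a synthetic 20 m-GSD band
lo_res, target = make_training_pair(band)
print(lo_res.shape, target.shape)  # (32, 32) (64, 64)
```

Because any real Sentinel-2 tile can be degraded this way, the supply of training pairs is effectively unlimited, which is what lets the network generalise globally without retraining.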
Copies (3)

| Barcode | Call number | Medium | Location | Section | Availability |
| --- | --- | --- | --- | --- | --- |
| 081-2018131 | RAB | Journal | Centre de documentation | In storage L003 | Available |
| 081-2018133 | DEP-EXM | Journal | LASTIG | Unit deposit | Not for loan |
| 081-2018132 | DEP-EAF | Journal | Nancy | Unit deposit | Not for loan |

Change detection based on stacked generalization system with segmentation constraint / Kun Tan in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 11 (November 2018)
Coupling relationship among scale parameter, segmentation accuracy, and classification accuracy in GeOBIA / Ming Dongping in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 11 (November 2018)
Depth-based hand pose estimation : Methods, data, and challenges / James Steven Supančič in International journal of computer vision, vol 126 n° 11 (November 2018)
Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images / Fabien Hubert Wagner in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part B (November 2018)
Land cover mapping at very high resolution with rotation equivariant CNNs : Towards small yet accurate models / Diego Marcos in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
A new deep convolutional neural network for fast hyperspectral image classification / Mercedes Eugenia Paoletti in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
Pan-sharpening via deep metric learning / Yinghui Xing in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
Novel fusion approach on automatic object extraction from spatial data: case study Worldview-2 and TOPO5000 / Umut Gunes Sefercik in Geocarto international, vol 33 n° 10 (October 2018)
Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier / Huanxue Zhang in Geocarto international, vol 33 n° 10 (October 2018)
Stand age estimation of rubber (Hevea brasiliensis) plantations using an integrated pixel- and object-based tree growth model and annual Landsat time series / Gang Chen in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
Ancient Chinese architecture 3D preservation by merging ground and aerial point clouds / Xiang Gao in ISPRS Journal of photogrammetry and remote sensing, vol 143 (September 2018)
Assessment of Nigeriasat-1 satellite data for urban land use/land cover analysis using object-based image analysis in Abuja, Nigeria / Christopher Ifechukwude Chima in Geocarto international, vol 33 n° 9 (September 2018)
Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang in International journal of computer vision, vol 126 n° 9 (September 2018)
3-D deep learning approach for remote sensing image classification / Amina Ben Hamida in IEEE Transactions on geoscience and remote sensing, vol 56 n° 8 (August 2018)
An improved temporal mixture analysis unmixing method for estimating impervious surface area based on MODIS and DMSP-OLS data / Li Zhuo in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
A deep learning approach to DTM extraction from imagery using rule-based training labels / Caroline M. Gevaert in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
Spectral-spatial classification of hyperspectral images using wavelet transform and hidden Markov random fields / Elham Kordi Ghasrodashti in Geocarto international, vol 33 n° 8 (August 2018)
Unsupervised detection of ruptures in spatial relationships in video sequences based on log-likelihood ratio / Abdalbassir Abou-Elailah in Pattern Analysis and Applications, vol 21 n° 3 (August 2018)
Evolutionary approach for detection of buried remains using hyperspectral images / Leon Dozal in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 7 (July 2018)
Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography / Lukas Roth in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
A fully automatic approach to register mobile mapping and airborne imagery to support the correction of platform trajectories in GNSS-denied urban areas / Phillipp Jende in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
A light and faster regional convolutional neural network for object detection in optical remote sensing images / Peng Ding in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
Classification à très large échelle d'images satellites à très haute résolution spatiale par réseaux de neurones convolutifs / Tristan Postadjian in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (June - September 2018)
Fusion tardive d'images SPOT 6/7 et de données multitemporelles Sentinel-2 pour la détection de la tache urbaine / Cyril Wendl in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (June - September 2018)
SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data / Miroslava Slavcheva in International journal of computer vision, vol 126 n° 6 (June 2018)
Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification / Tao Liu in ISPRS Journal of photogrammetry and remote sensing, vol 139 (May 2018)
Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification / Rama Rao Nidamanuri in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
Contextual classification using photometry and elevation data for damage detection after an earthquake event / Ewelina Rupnik in European journal of remote sensing, vol 51 n° 1 (2018)
A crowdsourcing-based game for land cover validation / Maria Antonia Brovelli in Applied geomatics, vol 10 n° 1 (March 2018)
Harmonic regression of Landsat time series for modeling attributes from national forest inventory data / Barry T. Wilson in ISPRS Journal of photogrammetry and remote sensing, vol 137 (March 2018)
Image classification-based ground filtering of point clouds extracted from UAV-based aerial photos / Volkan Yilmaz in Geocarto international, vol 33 n° 3 (March 2018)
Sensitivity analysis of pansharpening in hyperspectral change detection / Seyd Teymoor Seydi in Applied geomatics, vol 10 n° 1 (March 2018)
Analyse de l'incertitude et de la précision thématique de classifications GEOBIA d'une image WorldView-2 / François Messner in Revue Française de Photogrammétrie et de Télédétection, n° 216 (February 2018)
Fine-grained object recognition and zero-shot learning in remote sensing imagery / Gencer Sumbul in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
LRAGE : learning latent relationships with adaptive graph embedding for aerial scene classification / Yuebin Wang in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
A survey on visual-based localization : on the benefit of heterogeneous data / Nathan Piasco in Pattern recognition, vol 74 (February 2018)
Active learning-based optimized training library generation for object-oriented image classification / Rajeswari Balasubramaniam in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)
Cartographier l'occupation du sol à grande échelle : optimisation de la photo-interprétation par segmentation d'image / Maxime Vitter (2018)
Classification à très haute résolution (THR) spatiale et fusion d'occupation des sols (OCS) / Tristan Postadjian (2018)
Classification à très large échelle d'images satellite à très haute résolution spatiale par réseaux de neurones convolutifs / Tristan Postadjian (2018)
Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
Decision fusion of SPOT6 and multitemporal Sentinel2 images for urban area detection / Cyril Wendl (2018)