Descripteur
Documents available in this category (15)



Polyline simplification based on the artificial neural network with constraints of generalization knowledge / Jiawei Du in Cartography and Geographic Information Science, Vol 49 n° 4 (July 2022)
Title: Polyline simplification based on the artificial neural network with constraints of generalization knowledge
Document type: Article/Communication
Authors: Jiawei Du, Author; Jichong Yin, Author; Chengyi Liu, Author; et al.
Publication year: 2022
Pagination: pp 313-337
General note: bibliography
Languages: English (eng)
Descriptors: [Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] descripteur
[Termes IGN] données maillées
[Termes IGN] données vectorielles
[Termes IGN] généralisation cartographique automatisée
[Termes IGN] polyligne
[Termes IGN] programmation par contraintes
[Termes IGN] réseau neuronal artificiel
[Termes IGN] simplification de contour
[Vedettes matières IGN] Généralisation
Abstract: (author) The present paper presents techniques for polyline simplification based on an artificial neural network within the constraints of generalization knowledge. The proposed method measures polyline shape characteristics that influence polyline simplification using abstracted descriptors and then introduces these descriptors into the artificial neural network as input properties. In total, 18 descriptors categorized into three types are presented in detail. In a second approach, map simplification principles are abstracted as controllers, imposed after the output layer of the trained artificial neural network to make the polyline simplification comply with these principles. This study worked with three controllers: a basic controller and two knowledge-based controllers. These descriptors and controllers abstracted from generalization knowledge were tested in experiments to determine their efficacy in polyline simplification based on the artificial neural network. The experimental results show that the utilization of abstracted descriptors and controllers can constrain the artificial neural network-based polyline simplification according to polyline shape characteristics and simplification principles.
Record number: A2022-479
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: https://doi.org/10.1080/15230406.2021.2013944
Online publication date: 17/01/2022
Online: https://doi.org/10.1080/15230406.2021.2013944
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100885
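
The abstract describes a pipeline in which abstracted shape descriptors are fed to a neural network and rule-based controllers are imposed after the output layer. As a rough, hypothetical illustration of that idea only (the three descriptors, the tiny untrained network and the endpoint rule below are assumptions, not the paper's 18 descriptors or its three controllers), a minimal Python sketch could look like this:

```python
import numpy as np

def cross2d(u, v):
    # z-component of the 2-D cross product, row-wise
    return u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0]

def vertex_descriptors(poly):
    """poly: (N, 2) vertices -> (N, 3) descriptors: turning angle, mean incident
    segment length, perpendicular offset from the chord between the neighbours."""
    prev_pt, next_pt = np.roll(poly, 1, axis=0), np.roll(poly, -1, axis=0)
    a, b = poly - prev_pt, next_pt - poly
    turn = np.abs(np.arctan2(cross2d(a, b), np.einsum("ij,ij->i", a, b)))
    seg_len = 0.5 * (np.linalg.norm(a, axis=1) + np.linalg.norm(b, axis=1))
    chord = next_pt - prev_pt
    offset = np.abs(cross2d(chord, a)) / (np.linalg.norm(chord, axis=1) + 1e-9)
    return np.column_stack([turn, seg_len, offset])  # endpoints wrap; the controller keeps them anyway

def keep_scores(desc, w1, b1, w2, b2):
    """One-hidden-layer network mapping descriptors to a keep probability per vertex."""
    h = np.maximum(desc @ w1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def simplify(poly, weights, threshold=0.5):
    keep = keep_scores(vertex_descriptors(poly), *weights) > threshold
    keep[[0, -1]] = True  # "controller": a simplification principle imposed after the network output
    return poly[keep]

# Untrained random weights, purely to make the sketch executable.
rng = np.random.default_rng(0)
weights = (rng.normal(size=(3, 8)), np.zeros(8), rng.normal(size=8), 0.0)
line = np.array([[0, 0], [1, 0.1], [2, -0.1], [3, 2.0], [4, 2.1], [5, 0.0]], float)
print(simplify(line, weights))
```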

Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation / Yingjie Hu in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
Title: Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation
Document type: Article/Communication
Authors: Yingjie Hu, Author; Zhipeng Gui, Author; Jimin Wang, Author; et al.
Publication year: 2022
Pagination: pp 799-821
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique web
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] descripteur
[Termes IGN] données d'entrainement sans étiquette
[Termes IGN] image cartographique
[Termes IGN] métadonnées
[Termes IGN] projection
[Termes IGN] système d'information géographique
[Termes IGN] Web Map Service
[Termes IGN] web mapping
Abstract: (author) Maps in the form of digital images are widely available in geoportals, Web pages, and other data sources. The metadata of map images, such as spatial extents and place names, are critical for their indexing and searching. However, many map images have either mismatched metadata or no metadata at all. Recent developments in deep learning offer new possibilities for enriching the metadata of map images via image-based information extraction. One major challenge of using deep learning models is that they often require large amounts of training data that have to be manually labeled. To address this challenge, this paper presents a deep learning approach with GIS-based data augmentation that can automatically generate labeled training map images from shapefiles using GIS operations. We utilize such an approach to enrich the metadata of map images by adding spatial extents and place names extracted from map images. We evaluate this GIS-based data augmentation approach by using it to train multiple deep learning models and testing them on two different datasets: a Web Map Service image dataset at the continental scale and an online map image dataset at the state scale. We then discuss the advantages and limitations of the proposed approach.
Record number: A2022-258
Author affiliation: non IGN
Theme: GEOMATIQUE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: https://doi.org/10.1080/13658816.2021.1968407
Online: https://doi.org/10.1080/13658816.2021.1968407
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100231
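
The abstract's central mechanism, generating labeled training map images from shapefiles through GIS operations, can be sketched as below. The shapefile path, window size, colours and image resolution are illustrative assumptions; the paper's actual augmentation (which also yields place-name labels) is not reproduced.

```python
import random
import geopandas as gpd
import matplotlib.pyplot as plt

def render_training_samples(shapefile_path, n_samples, out_prefix, window_frac=0.3):
    """Render random spatial windows of a shapefile as images labeled with their extent."""
    gdf = gpd.read_file(shapefile_path)              # hypothetical input vector data
    minx, miny, maxx, maxy = gdf.total_bounds
    w, h = (maxx - minx) * window_frac, (maxy - miny) * window_frac
    samples = []
    for i in range(n_samples):
        x0 = random.uniform(minx, maxx - w)          # random window = augmentation
        y0 = random.uniform(miny, maxy - h)
        window = gdf.cx[x0:x0 + w, y0:y0 + h]        # features intersecting the window
        if window.empty:
            continue
        fig, ax = plt.subplots(figsize=(2.56, 2.56), dpi=100)
        window.plot(ax=ax, color=random.choice(["tan", "steelblue", "seagreen"]))
        ax.set_xlim(x0, x0 + w)
        ax.set_ylim(y0, y0 + h)
        ax.axis("off")
        path = f"{out_prefix}_{i}.png"
        fig.savefig(path, bbox_inches="tight", pad_inches=0)
        plt.close(fig)
        samples.append((path, (x0, y0, x0 + w, y0 + h)))  # image + spatial-extent label
    return samples

# Hypothetical usage: render_training_samples("landuse.shp", 100, "train/map")
```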

Title: Cross-dataset learning for generalizable land use scene classification
Document type: Article/Communication
Authors: Dimitri Gominski, Author; Valérie Gouet-Brunet, Author; Liming Chen, Author
Publisher: New York: Institute of Electrical and Electronics Engineers IEEE
Publication year: 2022
Projects: Alegoria / Gouet-Brunet, Valérie
Conference: EarthVision 2022, Large Scale Computer Vision for Remote Sensing Imagery, workshop joint to CVPR 2022, 19/06/2022-24/06/2022, New Orleans, Louisiana, United States, OA Proceedings
Extent: 10 p.
General note: EarthVision'22 workshop in conjunction with the Computer Vision and Pattern Recognition (CVPR) 2022 Conference
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] cadre conceptuel
[Termes IGN] descripteur
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] intelligence artificielle
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
[Termes IGN] utilisation du sol
Abstract: (author) Few-shot and cross-domain land use scene classification methods propose solutions to classify unseen classes or unseen visual distributions, but are hardly applicable to real-world situations due to restrictive assumptions. Few-shot methods involve episodic training on restrictive training subsets with small feature extractors, while cross-domain methods are only applied to common classes. The underlying challenge remains open: can we accurately classify new scenes on new datasets? In this paper, we propose a new framework for few-shot, cross-domain classification. Our retrieval-inspired approach exploits the interrelations in both the training and testing data to output class labels using compact descriptors. Results show that our method can accurately produce land-use predictions on unseen datasets and unseen classes, going beyond the traditional few-shot or cross-domain formulation, and allowing cross-dataset training.
Record number: C2022-031
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE/INFORMATIQUE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: pending
Online: https://openaccess.thecvf.com/content/CVPR2022W/EarthVision/papers/Gominski_Cros [...]
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101087
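
The retrieval-inspired classification described in the abstract can be pictured as nearest-neighbour voting in descriptor space. The sketch below uses random vectors and plain cosine similarity as stand-ins; the paper's compact descriptors, training procedure and cross-dataset protocol are not reproduced.

```python
import numpy as np

def retrieve_labels(query_desc, ref_desc, ref_labels, k=5):
    """Cosine-similarity k-NN voting over a reference set of labelled descriptors."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    r = ref_desc / np.linalg.norm(ref_desc, axis=1, keepdims=True)
    sims = q @ r.T                                  # (n_query, n_ref) similarities
    top = np.argsort(-sims, axis=1)[:, :k]          # indices of the k best matches
    votes = ref_labels[top]                         # (n_query, k) candidate labels
    return np.array([np.bincount(row).argmax() for row in votes])

# Synthetic descriptors and labels, purely to make the sketch executable.
rng = np.random.default_rng(1)
refs, labels = rng.normal(size=(100, 64)), rng.integers(0, 4, size=100)
queries = rng.normal(size=(10, 64))
print(retrieve_labels(queries, refs, labels))
```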

Connecting images through time and sources: Introducing low-data, heterogeneous instance retrieval / Dimitri Gominski (2021)
Title: Connecting images through time and sources: Introducing low-data, heterogeneous instance retrieval
Document type: Article/Communication
Authors: Dimitri Gominski, Author; Valérie Gouet-Brunet, Author; Liming Chen, Author
Publisher: Ithaca [New York, United States]: ArXiv - Cornell University
Publication year: 2021
Projects: Alegoria / Gouet-Brunet, Valérie
Extent: 5 p.
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] base de données d'images
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] descripteur
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données hétérogènes
[Termes IGN] exploration de données
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image multi sources
[Termes IGN] indexation sémantique
[Termes IGN] précision de la classification
[Termes IGN] recherche d'image basée sur le contenu
Abstract: (author) With impressive results in applications relying on feature learning, deep learning has also blurred the line between algorithm and data. Pick a training dataset, pick a backbone network for feature extraction, and voilà; this usually works for a variety of use cases. But the underlying hypothesis that there exists a training dataset matching the use case is not always met. Moreover, the demand for interconnections regardless of the variations of the content calls for increasing generalization and robustness in features. An interesting application characterized by these problems is the connection of historical and cultural databases of images. Through the seemingly simple task of instance retrieval, we propose to show that it is not trivial to pick features responding well to a panel of variations and semantic content. Introducing a new enhanced version of the ALEGORIA benchmark, we compare descriptors using the detailed annotations. We further give insights about the core problems in instance retrieval, testing four state-of-the-art additional techniques to increase performance.
Record number: P2021-001
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE
Nature: Preprint
nature-HAL: Préprint
DOI: none
Online publication date: 21/03/2021
Online: https://arxiv.org/pdf/2103.10729.pdf
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97398
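
Comparing descriptors on an instance-retrieval benchmark such as ALEGORIA typically means ranking the database for each query and scoring the ranking, for example with mean average precision. The sketch below assumes synthetic descriptors and relevance labels; it is not the benchmark's own evaluation code.

```python
import numpy as np

def average_precision(ranked_relevance):
    """ranked_relevance: 0/1 array, in ranked order, for one query."""
    hits = np.cumsum(ranked_relevance)
    precisions = hits / (np.arange(len(ranked_relevance)) + 1)
    n_rel = ranked_relevance.sum()
    return float((precisions * ranked_relevance).sum() / n_rel) if n_rel else 0.0

def mean_average_precision(query_desc, db_desc, relevance):
    """relevance[i, j] = 1 if database image j depicts the same instance as query i."""
    sims = query_desc @ db_desc.T
    ranks = np.argsort(-sims, axis=1)               # database sorted by similarity
    return float(np.mean([average_precision(relevance[i, ranks[i]])
                          for i in range(len(query_desc))]))

# Synthetic data, purely to make the sketch executable.
rng = np.random.default_rng(2)
q, db = rng.normal(size=(5, 128)), rng.normal(size=(200, 128))
rel = rng.integers(0, 2, size=(5, 200))
print(round(mean_average_precision(q, db, rel), 3))
```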

Title: Deep learning for feature based image matching
Document type: Thesis/HDR
Authors: Lin Chen, Author; Christian Heipke, Thesis supervisor
Publisher: Munich: Bayerische Akademie der Wissenschaften
Publication year: 2021
Series: DGK - C, ISSN 0065-5325, num. 867
Extent: 159 p.
Format: 21 x 30 cm
General note: bibliography
This work is also published in: Wissenschaftliche Arbeiten der Fachrichtung Geodäsie und Geoinformatik der Leibniz Universität Hannover, ISSN 0174-1454, Nr. 369, Hannover 2021
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] appariement d'images
[Termes IGN] chaîne de traitement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] descripteur
[Termes IGN] image aérienne oblique
[Termes IGN] orientation d'image
[Termes IGN] orthoimage
Abstract: (author) Feature based image matching aims at finding matched features between two or more images. It is one of the most fundamental research topics in photogrammetry and computer vision. The matching features are a prerequisite for applications such as image orientation, Simultaneous Localization and Mapping (SLAM) and robot vision. A typical feature based matching algorithm is composed of five steps: feature detection, affine shape estimation, orientation, description and descriptor matching. Today, the employment of deep neural networks has framed those different steps as machine learning problems and the matching performance has been improved significantly. One of the main reasons why feature based image matching may still prove difficult is the complex change between different images, including geometric and radiometric transformations. If the change between images exceeds a certain level, it will also exceed the tolerance of those aforementioned separate steps and, in turn, cause feature based image matching to fail.
This thesis focuses on improving feature based image matching against large viewpoint and viewing direction change between images. In order to improve the feature based image matching performance under these circumstances, affine shape estimation, orientation and description are solved with deep learning architectures. In particular, Convolutional Neural Networks (CNN) are used. For the affine shape and orientation learning, the main contribution of this thesis is twofold. First, instead of a Siamese CNN, only one branch is needed and the loss is built based on the geometric measures calculated from the mean gradient or second moment matrix. Therefore, for each of the input patches, a global minimum, namely the canonical feature, exists. Second, both the affine shape and orientation are solved simultaneously within one network by combining the loss used for affine shape and orientation learning. To the best of the author's knowledge, this is the first time these two modules are reported to have been successfully trained simultaneously. For the descriptor learning part, a new weak match is defined. For any input feature patch, a slightly transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features. In a following step, the found weak matches are used in the standard descriptor learning framework. In this way, the intra-variance of the appearance of matched feature patch pairs is explored in depth and, accordingly, the invariance of feature descriptors against viewpoint and viewing direction change is improved. The proposed feature based image matching method is evaluated on standard benchmarks and is used to solve for the parameters of image orientation. For the image orientation task, aerial oblique images are taken into account. Through analysis of the experiments conducted for small image blocks, it is shown that deep learning feature based image matching leads to more registered images, more reconstructed 3D points and a more stable block connection.
Contents:
1- Introduction
2- Basics
3- Related work
4- Deep learning feature representation
5- Experiments and results
6- Discussion
7- Conclusion and outlook
Record number: 17673
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Foreign thesis
Thesis note: PhD dissertation: Fachrichtung Geodäsie und Geoinformatik: Hannover: 2021
Online: https://dgk.badw.de/fileadmin/user_upload/Files/DGK/docs/c-867.pdf
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97999
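
The thesis summary above mentions that the affine-shape and orientation loss is built from geometric measures calculated from the mean gradient or second moment matrix of a patch. The sketch below only illustrates those two measures on a NumPy patch; the loss construction, the CNN branches and the weak-match descriptor learning are not reproduced.

```python
import numpy as np

def gradients(patch):
    # np.gradient returns derivatives along rows (y) then columns (x)
    gy, gx = np.gradient(patch.astype(float))
    return gx, gy

def mean_gradient_orientation(patch):
    """Dominant orientation taken from the mean gradient vector."""
    gx, gy = gradients(patch)
    return float(np.arctan2(gy.mean(), gx.mean()))

def second_moment_matrix(patch):
    """2x2 structure tensor averaged over the patch; its eigenvectors and
    eigenvalues describe an ellipse usable as an affine shape estimate."""
    gx, gy = gradients(patch)
    return np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                     [np.mean(gx * gy), np.mean(gy * gy)]])

patch = np.outer(np.linspace(0, 1, 32), np.linspace(1, 2, 32))  # synthetic patch
print(mean_gradient_orientation(patch))
print(np.linalg.eigvalsh(second_moment_matrix(patch)))
```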

Improving image description with auxiliary modality for visual localization in challenging conditions / Nathan Piasco in International journal of computer vision, vol 29 n° 1 (January 2021)
Semantic relatedness algorithm for keyword sets of geographic metadata / Zugang Chen in Cartography and Geographic Information Science, vol 47 n° 2 (February 2020)
Cartographie sémantique hybride de scènes urbaines à partir de données image et Lidar / Mohamed Boussaha (2020)
SUMAC'20: Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia heritAge Contents / Valérie Gouet-Brunet (2020)
Challenging deep image descriptors for retrieval in heterogeneous iconographic collections / Dimitri Gominski (2019)
Can a machine generate humanlike language descriptions for a remote sensing image? / Zhenwei Shi in IEEE Transactions on geoscience and remote sensing, vol 55 n° 6 (June 2017)