Descriptor
IGN terms > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal
classification par réseau neuronal
Documents available in this category (579)
A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
[article]
Title: A novel deep network and aggregation model for saliency detection
Document type: Article/Communication
Authors: Ye Liang, Author; Hongzhe Liu, Author; Nan Ma, Author
Year of publication: 2020
Pages: pp 1883 - 1895
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Traitement d'image
[IGN terms] apprentissage profond
[IGN terms] architecture de réseau
[IGN terms] classification par réseau neuronal convolutif
[IGN terms] déconvolution
[IGN terms] extraction de traits caractéristiques
[IGN terms] saillance
Abstract: (author) Recent deep learning-based methods for saliency detection have proved the effectiveness of integrating features with different scales. They usually design various complex architectures of network, e.g., multiple networks, to explore the multi-scale information of images, which is expensive in computation and memory. Feature maps produced with different subsampling convolutional layers have different spatial resolutions; therefore, they can be used as the multi-scale features to reduce the costs. In this paper, by exploiting the in-network feature hierarchy of convolutional networks, we propose a novel multi-scale network for saliency detection (MSNSD) consisting of three modules, i.e., bottom-up feature extraction, top-down feature connection and multi-scale saliency prediction. Moreover, to further boost the performance of MSNSD, an input image-aware saliency aggregation method is proposed based on the ridge regression, which combines MSNSD with some well-performed handcrafted shallow models. Extensive experiments on several benchmarks show that the proposed MSNSD outperforms the state-of-the-art saliency methods with less computational and memory complexity. Meanwhile, our aggregation method for saliency detection is effective and efficient to combine deep and shallow models and make them complementary to each other.
Record number: A2020-601
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s00371-019-01781-9
Online publication date: 09/12/2019
Online: https://doi.org/10.1007/s00371-019-01781-9
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95952
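The aggregation step described in the abstract above combines several saliency maps through ridge regression. As an illustrative sketch only (not code from the article; the data here are synthetic stand-ins for real saliency maps), the closed-form ridge solution can weight candidate maps as follows:

```python
import numpy as np

def ridge_weights(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Toy data: 3 candidate saliency maps (columns), flattened to 100 pixels.
rng = np.random.default_rng(0)
y = rng.random(100)                                   # stand-in "reference" saliency
X = np.stack([y + 0.1 * rng.standard_normal(100) for _ in range(3)], axis=1)

w = ridge_weights(X, y, lam=0.1)
aggregated = X @ w                                    # weighted combination of the maps
```

The regularizer `lam` keeps the weights stable when the candidate maps are strongly correlated, which is typically the case for saliency models run on the same image.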
in The Visual Computer > vol 36 n° 9 (September 2020) . - pp 1883 - 1895 [article]

Recognition of building group patterns using graph convolutional network / Rong Zhao in Cartography and Geographic Information Science, Vol 47 n° 5 (September 2020)
[article]
Title: Recognition of building group patterns using graph convolutional network
Document type: Article/Communication
Authors: Rong Zhao, Author; Tinghua Ai, Author; Wenhao Yu, Author; et al.
Year of publication: 2020
Pages: pp 400 - 417
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Géomatique
[IGN terms] classification par réseau neuronal convolutif
[IGN terms] données topographiques
[IGN terms] espace urbain
[IGN terms] généralisation du bâti
[IGN terms] graphe
[IGN terms] modélisation du bâti
[IGN terms] reconnaissance de formes
Abstract: (author) Recognition of building group patterns is of great significance for understanding and modeling the urban space. However, many current methods cannot fully utilize spatial information and have trouble efficiently dealing with topographic data with high complexity. The design of intelligent computational models that can act directly on topographic data to extract spatial features is critical. To this end, we propose a novel deep neural network based on graph convolutions to automatically identify building group patterns with arbitrary forms. The method first models buildings by a general graph, and then the neural network simultaneously learns the structural information as well as vertex attributes to classify building objects. We apply this method to real building data, and the experimental results show that the proposed method can effectively capture spatial information to make more accurate predictions than traditional methods.
Record number: A2020-510
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/15230406.2020.1757512
Online publication date: 12/06/2020
Online: https://doi.org/10.1080/15230406.2020.1757512
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95663
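The abstract above describes a graph convolutional network acting on a building graph. As a minimal, hypothetical numpy sketch of a single graph-convolution layer (the building attributes and weights below are invented stand-ins, not the authors' model):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy "building group" graph: 4 buildings, edges = neighborhood relations.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[120.0, 1.0],                           # per-building attributes,
              [100.0, 0.9],                           # e.g. area and compactness
              [110.0, 0.8],
              [400.0, 0.3]])
rng = np.random.default_rng(1)
W = rng.standard_normal((2, 3))                       # learnable weights (random here)
H1 = gcn_layer(A, H, W)                               # new 3-dim embedding per building
```

Stacking such layers lets each building's embedding mix in information from progressively larger neighborhoods before a final classification head assigns the group pattern.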
in Cartography and Geographic Information Science > Vol 47 n° 5 (September 2020) . - pp 400 - 417 [article]

Copies (1)
Barcode: 032-2020051 | Call number: RAB | Medium: Journal | Location: Centre de documentation | Section: En réserve L003 | Availability: Available

Using OpenStreetMap data and machine learning to generate socio-economic indicators / Daniel Feldmeyer in ISPRS International journal of geo-information, vol 9 n° 9 (September 2020)
[article]
Title: Using OpenStreetMap data and machine learning to generate socio-economic indicators
Document type: Article/Communication
Authors: Daniel Feldmeyer, Author; Claude Meisch, Author; Holger Sauter, Author; et al.
Year of publication: 2020
Pages: 16 p.
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Géomatique
[IGN terms] Allemagne
[IGN terms] apprentissage automatique
[IGN terms] arbre aléatoire
[IGN terms] base de données spatiotemporelles
[IGN terms] changement climatique
[IGN terms] chômage
[IGN terms] classification par réseau neuronal
[IGN terms] collectivité territoriale
[IGN terms] données localisées des bénévoles
[IGN terms] données socio-économiques
[IGN terms] inégalité
[IGN terms] limite administrative
[IGN terms] modèle de régression
[IGN terms] modèle de simulation
[IGN terms] OpenStreetMap
Abstract: (author) Socio-economic indicators are key to understanding societal challenges. They disassemble complex phenomena to gain insights and deepen understanding. Specific subsets of indicators have been developed to describe sustainability, human development, vulnerability, risk, resilience and climate change adaptation. Nonetheless, insufficient quality and availability of data often limit their explanatory power. Spatial and temporal resolution are often not at a scale appropriate for monitoring. Socio-economic indicators are mostly provided by governmental institutions and are therefore limited to administrative boundaries. Furthermore, different methodological computation approaches for the same indicator impair comparability between countries and regions. OpenStreetMap (OSM) provides an unparalleled standardized global database with a high spatiotemporal resolution. Surprisingly, the potential of OSM seems largely unexplored in this context. In this study, we used machine learning to predict four exemplary socio-economic indicators for municipalities based on OSM. By comparing the predictive power of neural networks to statistical regression models, we evaluated the unhinged resources of OSM for indicator development. OSM provides prospects for monitoring across administrative boundaries, interdisciplinary topics, and semi-quantitative factors like social cohesion. Further research is still required to, for example, determine the impact of regional and international differences in user contributions on the outputs. Nonetheless, this database can provide meaningful insight into otherwise unknown spatial differences in social, environmental or economic inequalities.
Record number: A2020-663
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9090498
Online publication date: 21/08/2020
Online: https://doi.org/10.3390/ijgi9090498
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96139
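As a toy illustration of the statistical-regression baseline the abstract compares neural networks against (the data below are synthetic stand-ins, not the study's indicators or OSM extracts), an ordinary-least-squares fit of a municipal indicator on OSM feature counts might look like:

```python
import numpy as np

# Invented stand-in: per-municipality counts of three OSM feature classes
# (columns) and one socio-economic indicator to predict.
rng = np.random.default_rng(42)
counts = rng.poisson(lam=[50, 20, 5], size=(30, 3)).astype(float)
true_w = np.array([0.01, -0.05, 0.2])
indicator = counts @ true_w + 3.0 + 0.05 * rng.standard_normal(30)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(30), counts])
coef, *_ = np.linalg.lstsq(X, indicator, rcond=None)
predicted = X @ coef
r2 = 1 - np.sum((indicator - predicted) ** 2) / np.sum((indicator - indicator.mean()) ** 2)
```

Such a linear model gives an interpretable per-feature coefficient, which is one reason regression baselines remain a useful yardstick next to neural networks in this kind of study.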
in ISPRS International journal of geo-information > vol 9 n° 9 (September 2020) . - 16 p. [article]

Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: Vehicle detection of multi-source remote sensing data using active fine-tuning network
Document type: Article/Communication
Authors: Xin Wu, Author; Wei Li, Author; Danfeng Hong, Author; et al.
Year of publication: 2020
Pages: pp 39 - 53
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Traitement d'image
[IGN terms] Allemagne
[IGN terms] apprentissage profond
[IGN terms] classification par réseau neuronal convolutif
[IGN terms] détection d'objet
[IGN terms] données d'entrainement (apprentissage automatique)
[IGN terms] données multisources
[IGN terms] image aérienne
[IGN terms] modèle numérique de surface
[IGN terms] modèle stéréoscopique
[IGN terms] segmentation
[IGN terms] segmentation sémantique
[IGN terms] véhicule
Abstract: (author) Vehicle detection in remote sensing images has attracted increasing interest in recent years. However, its detection ability is limited due to lack of well-annotated samples, especially in densely crowded scenes. Furthermore, since a list of remotely sensed data sources is available, efficient exploitation of useful information from multi-source data for better vehicle detection is challenging. To solve the above issues, a multi-source active fine-tuning vehicle detection (Ms-AFt) framework is proposed, which integrates transfer learning, segmentation, and active classification into a unified framework for auto-labeling and detection. The proposed Ms-AFt employs a fine-tuning network to firstly generate a vehicle training set from an unlabeled dataset. To cope with the diversity of vehicle categories, a multi-source based segmentation branch is then designed to construct additional candidate object sets. The separation of high quality vehicles is realized by a designed attentive classifications network. Finally, all three branches are combined to achieve vehicle detection. Extensive experimental results conducted on two open ISPRS benchmark datasets, namely the Vaihingen village and Potsdam city datasets, demonstrate the superiority and effectiveness of the proposed Ms-AFt for vehicle detection. In addition, the generalization ability of Ms-AFt in dense remote sensing scenes is further verified on stereo aerial imagery of a large camping site.
Record number: A2020-546
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.06.016
Online publication date: 13/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.06.016
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95772
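The auto-labeling idea in the abstract, promoting confident predictions on unlabeled data into a training set, can be sketched as follows (illustrative only; the threshold and the softmax outputs below are invented, and the authors' attentive classification network is not reproduced here):

```python
import numpy as np

def auto_label(probs, threshold=0.9):
    """Keep only the unlabeled samples the model is confident about.

    probs: (n, classes) softmax outputs; returns kept indices and
    their pseudo-labels (the argmax class)."""
    conf = probs.max(axis=1)
    keep = np.flatnonzero(conf >= threshold)
    return keep, probs[keep].argmax(axis=1)

# Toy softmax outputs for 5 unlabeled image patches, 2 classes.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90],
                  [0.70, 0.30],
                  [0.02, 0.98]])
idx, pseudo = auto_label(probs, threshold=0.9)
# idx -> [0, 2, 4], pseudo -> [0, 1, 1]
```

The low-confidence samples (rows 1 and 3) are left out of the pseudo-labeled set; iterating this selection as the model improves is the basic self-training loop that frameworks of this kind build on.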
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020) . - pp 39 - 53 [article]

Copies (3)
Barcode: 081-2020091 | Call number: RAB | Medium: Journal | Location: Centre de documentation | Section: En réserve L003 | Availability: Available
Barcode: 081-2020093 | Call number: DEP-RECP | Medium: Journal | Location: LASTIG | Section: Dépôt en unité | Availability: Not for loan
Barcode: 081-2020092 | Call number: DEP-RECF | Medium: Journal | Location: Nancy | Section: Dépôt en unité | Availability: Not for loan

X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data / Danfeng Hong in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data
Document type: Article/Communication
Authors: Danfeng Hong, Author; Naoto Yokoya, Author; Gui-Song Xia, Author; et al.
Year of publication: 2020
Pages: pp 12 - 23
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Traitement d'image mixte
[IGN terms] apprentissage profond
[IGN terms] apprentissage semi-dirigé
[IGN terms] bruit blanc
[IGN terms] classification par réseau neuronal convolutif
[IGN terms] compréhension de l'image
[IGN terms] image hyperspectrale
[IGN terms] image multibande
[IGN terms] image radar moirée
[IGN terms] image Sentinel-MSI
[IGN terms] scène urbaine
[IGN terms] transmission de données
Abstract: (author) This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling parsing global urban scenes through remote sensing imagery. However, their ability in identifying materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information as well as limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: self-adversarial module, interactive learning module, and label propagation module, by learning to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task using a large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed by high-level features on the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
Record number: A2020-544
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.06.014
Online publication date: 11/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.06.014
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95770
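The label propagation module described in the abstract above spreads the few available labels over a feature graph. A minimal sketch of classic iterative label propagation (not the authors' implementation; the chain graph and labels are toy stand-ins):

```python
import numpy as np

def propagate_labels(A, Y, alpha=0.8, iters=50):
    """Iterative label propagation: F <- alpha * S @ F + (1 - alpha) * Y,
    where S is the symmetrically normalized adjacency and Y holds the
    one-hot labels of the few labeled nodes (zero rows elsewhere)."""
    d = A.sum(axis=1)
    d[d == 0] = 1.0                                  # guard isolated nodes
    S = A / np.sqrt(np.outer(d, d))                  # D^-1/2 A D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y          # diffuse, then re-anchor
    return F.argmax(axis=1)

# Chain graph of 5 nodes; only the two endpoints are labeled (classes 0 and 1).
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Y = np.zeros((5, 2))
Y[0, 0] = 1.0                                        # node 0 labeled class 0
Y[4, 1] = 1.0                                        # node 4 labeled class 1
labels = propagate_labels(A, Y)
```

Each unlabeled node ends up with the class of the labeled node it is closest to on the graph; updating the graph from high-level features as training proceeds is what the abstract calls an "updatable graph".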
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020) . - pp 12 - 23 [article]

Copies (3)
Barcode: 081-2020091 | Call number: RAB | Medium: Journal | Location: Centre de documentation | Section: En réserve L003 | Availability: Available
Barcode: 081-2020093 | Call number: DEP-RECP | Medium: Journal | Location: LASTIG | Section: Dépôt en unité | Availability: Not for loan
Barcode: 081-2020092 | Call number: DEP-RECF | Medium: Journal | Location: Nancy | Section: Dépôt en unité | Availability: Not for loan

CNN semantic segmentation to retrieve past land cover out of historical orthoimages and DSM: first experiments / Arnaud Le Bris in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
Landuse and land cover identification and disaggregating socio-economic data with convolutional neural network / Jingtao Yao in Geocarto international, vol 35 n° 10 ([01/08/2020])
Classification of hyperspectral and LiDAR data using coupled CNNs / Renlong Hang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
Classification of sea ice types in Sentinel-1 SAR data using convolutional neural networks / Hugo Boulze in Remote sensing, vol 12 n° 13 (July-1 2020)
Evaluating techniques for mapping island vegetation from unmanned aerial vehicle (UAV) images: Pixel classification, visual interpretation and machine learning approaches / S.M. Hamylton in International journal of applied Earth observation and geoinformation, vol 89 (July 2020)
Simulating urban land use change by integrating a convolutional neural network with vector-based cellular automata / Yaqian Zhai in International journal of geographical information science IJGIS, vol 34 n° 7 (July 2020)
Counting of grapevine berries in images via semantic segmentation using convolutional neural networks / Laura Zabawa in ISPRS Journal of photogrammetry and remote sensing, vol 164 (June 2020)
Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data / Shivangi Srivastava in International journal of geographical information science IJGIS, vol 34 n° 6 (June 2020)
GeoNat v1.0: A dataset for natural feature mapping with artificial intelligence and supervised learning / Samantha T. Arundel in Transactions in GIS, Vol 24 n° 3 (June 2020)
A hybrid deep learning–based model for automatic car extraction from high-resolution airborne imagery / Mehdi Khoshboresh Masouleh in Applied geomatics, vol 12 n° 2 (June 2020)