Descriptor
IGN terms > mathematics > mathematical statistics > data analysis > segmentation > semantic segmentation
semantic segmentation. Synonym(s): semantic labelling | data labelling
Documents available in this category (235)
River ice segmentation with deep learning / Abhineet Singh in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
[article]
Title: River ice segmentation with deep learning
Document type: Article/Communication
Authors: Abhineet Singh, Author; Hayden Kalke, Author; Mark Loewen, Author
Year of publication: 2020
Pages: pp 7570 - 7579
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] Canada
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] ice
[IGN terms] UAV-acquired imagery
[IGN terms] river
[IGN terms] image segmentation
[IGN terms] semantic segmentation
Abstract: (author) This article deals with the problem of computing surface concentrations for two types of river ice from digital images acquired during freeze-up. It presents the results of attempting to solve this problem using several state-of-the-art semantic segmentation methods based on deep convolutional neural networks (CNNs). This task presents two main challenges: very limited availability of labeled training data, and the presence of noisy labels due to the great difficulty of visually distinguishing between the two types of ice, even for human experts. The results are used to analyze the extent to which some of the best deep learning methods currently in existence can handle these challenges. The code and data used in the experiments are made publicly available to facilitate further work in this domain.
Record number: A2020-674
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2981082
Online publication date: 13/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2981082
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96165
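Work like this is typically scored by per-class overlap between predicted and reference masks. A minimal sketch of per-class intersection-over-union (IoU) on flattened masks; the class codes (0 = water, 1 = frazil ice, 2 = anchor ice) are assumptions for illustration, not the paper's labels:

```python
# Per-class IoU for a segmentation map; a minimal sketch, not the paper's code.
# Class codes are assumptions: 0 = water, 1 = frazil ice, 2 = anchor ice.

def per_class_iou(pred, label, classes):
    """Intersection-over-union per class, over flattened masks."""
    ious = {}
    for c in classes:
        inter = sum(1 for p, t in zip(pred, label) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, label) if p == c or t == c)
        ious[c] = inter / union if union else float("nan")
    return ious

# Tiny flattened 8-pixel masks as a usage example.
pred = [0, 1, 1, 2, 2, 0, 1, 2]
label = [0, 1, 2, 2, 2, 0, 1, 1]
ious = per_class_iou(pred, label, [0, 1, 2])  # {0: 1.0, 1: 0.5, 2: 0.5}
```

Reporting IoU per class rather than overall pixel accuracy matters here because the two ice types are easily confused and class frequencies are imbalanced.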
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 11 (November 2020) . - pp 7570 - 7579 [article]

Streets of London: Using Flickr and OpenStreetMap to build an interactive image of the city / Azam Raha Bahrehdar in Computers, Environment and Urban Systems, vol 84 (November 2020)
[article]
Title: Streets of London: Using Flickr and OpenStreetMap to build an interactive image of the city
Document type: Article/Communication
Authors: Azam Raha Bahrehdar, Author; Benjamin Adams, Author; Ross S. Purves, Author
Year of publication: 2020
Pages: n° 101524
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN terms] spatial autocorrelation
[IGN terms] data collection
[IGN terms] user-generated content
[IGN terms] volunteered geographic information
[IGN terms] data mining
[IGN terms] Flickr image
[IGN terms] London
[IGN terms] similarity measure
[IGN terms] metadata
[IGN terms] OpenStreetMap
[IGN terms] georeferenced orthoimage
[IGN terms] perception
[IGN terms] semantic segmentation
Abstract: (author) In his classic book “The Image of the City”, Kevin Lynch used empirical work to show how different elements of the city are perceived: paths, landmarks, districts, edges, and nodes. Streets, by providing paths from which cities can be experienced, were argued to be one of the key elements of cities. Despite this long-standing empirical basis, and the importance of Lynch's model in policy-associated areas such as planning, work with user-generated content has largely ignored these ideas. In this paper, we address this gap, using streets to aggregate filtered user-generated content related to more than 1 million images and 60,000 individuals, and explore similarity between more than 3000 streets in London across three dimensions: user behaviour, time and semantics. To perform our study we used two different sources of user-generated content: (1) a collection of metadata attached to Flickr images and (2) the street network of London from OpenStreetMap. We first explore global patterns in the distinctiveness and spatial autocorrelation of similarity using our three dimensions, establishing that the semantic and user dimensions in particular allow us to explore the city in different ways. We then used a Processing tool to interactively explore individual patterns of similarity across these dimensions simultaneously, presenting results here for four selected and contrasting locations in London. Before drilling into the data to interpret in more detail, the identified patterns demonstrate that streets are natural units capturing the perception of cities not only as paths but also through the emergence of other elements of the city proposed by Lynch, including districts, landmarks and edges. Our approach also demonstrates how user-generated content can be captured, allowing bottom-up perception from citizens to flow into a representation.
Record number: A2020-710
Author affiliation: non IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1016/j.compenvurbsys.2020.101524
Online publication date: 05/08/2020
Online: https://doi.org/10.1016/j.compenvurbsys.2020.101524
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96255
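Street-to-street similarity of the kind described above can be illustrated with set overlap between the tags attached to each street's images. A hypothetical sketch (not the authors' code) using Jaccard similarity; the street names and tag lists are invented for illustration:

```python
# Hypothetical sketch of street similarity via shared image tags;
# streets and tags are invented, not data from the paper.

def jaccard(a, b):
    """Jaccard similarity of two tag collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

street_tags = {
    "Oxford Street": ["shopping", "bus", "crowd"],
    "Baker Street": ["museum", "sherlock", "bus"],
    "Abbey Road": ["beatles", "crossing", "music"],
}

# Pairwise similarity over unordered street pairs.
pairs = {(s, t): jaccard(street_tags[s], street_tags[t])
         for s in street_tags for t in street_tags if s < t}
```

Aggregating tags to the street before comparing is what makes the street, rather than the individual photo, the unit of analysis.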
in Computers, Environment and Urban Systems > vol 84 (November 2020) . - n° 101524 [article]

Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation / Huan Ning in Annals of GIS, vol 26 n° 4 (October 2020)
[article]
Title: Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation
Document type: Article/Communication
Authors: Huan Ning, Author; Zhenlong Li, Author; Cuizhen Wang, Author; et al.
Year of publication: 2020
Pages: pp 329 - 342
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] contour
[IGN terms] feature extraction
[IGN terms] dataset
[IGN terms] Jiangxi (China)
[IGN terms] land cover
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] dataset size
Abstract: (author) Land cover data is an inventory of objects on the Earth’s surface, often derived from remotely sensed imagery. The Deep Convolutional Neural Network (DCNN) is a competitive method in image semantic segmentation. Some scholars argue that the inadequacy of training sets is an obstacle when applying DCNNs to remote sensing image segmentation. While existing land cover data can be converted into large training sets, the size of the training data set needs to be carefully considered. In this paper, we used different portions of a high-resolution land cover map to produce training sets of different sizes to train DCNNs (SegNet and U-Net), and then quantitatively evaluated the impact of training set size on the performance of the trained DCNNs. We also introduced a new metric, Edge-ratio, to assess the performance of DCNNs in maintaining the boundaries of land cover objects. Based on the experiments, we document the relationship between segmentation accuracy and the size of the training set, as well as the nonstationary accuracies among different land cover types. The findings of this paper can be used to effectively tailor existing land cover data into training sets, and thus accelerate the assessment and employment of deep learning techniques for high-resolution land cover map extraction.
Record number: A2020-800
Author affiliation: non IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/19475683.2020.1803402
Online publication date: 10/08/2020
Online: https://doi.org/10.1080/19475683.2020.1803402
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96723
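The experimental design of training on different portions of one land cover map can be sketched as nested subsets: shuffle the labeled samples once, then take growing prefixes, so that the only variable between runs is training-set size. A minimal illustration under that assumption, not the authors' code:

```python
import random

# Sketch of producing nested training subsets from one labeled dataset,
# in the spirit of the size experiments described above; not the authors' code.

def nested_subsets(samples, fractions, seed=0):
    """Shuffle once, then take growing prefixes, so every smaller subset is
    contained in every larger one; only training-set size varies."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    return {f: shuffled[: max(1, int(len(shuffled) * f))]
            for f in sorted(fractions)}

# Usage: three nested training sets drawn from 100 labeled samples.
subsets = nested_subsets(range(100), [0.1, 0.5, 1.0])
```

Fixing the shuffle seed keeps the comparison between sizes deterministic; with independent random draws per fraction, size and sample composition would be confounded.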
in Annals of GIS > vol 26 n° 4 (October 2020) . - pp 329 - 342 [article]

Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution / Vitor Martins in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
[article]
Title: Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution
Document type: Article/Communication
Authors: Vitor Martins, Author; Amy L. Kaleita, Author; Brian K. Gelder, Author; et al.
Year of publication: 2020
Pages: pp 56 - 73
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] multiscale data
[IGN terms] environmental heterogeneity
[IGN terms] high-resolution imagery
[IGN terms] land cover
[IGN terms] object recognition
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] skeletonization
Abstract: (author) Convolutional Neural Networks (CNNs) have been increasingly used for land cover mapping of remotely sensed imagery. However, large-area classification using a traditional CNN is computationally expensive and produces coarse maps with a sliding-window approach. To address this problem, the object-based CNN (OCNN) has become an alternative solution for improving classification performance. However, previous studies mainly focused on urban areas or small scenes, and implementation of the OCNN method is still needed for large-area classification over heterogeneous landscapes. Additionally, the massive labeling of segmented objects requires a practical approach for less computation, including object analysis and multiple CNNs. This study presents a new multiscale OCNN (multi-OCNN) framework for large-scale land cover classification at 1-m resolution over 145,740 km2. Our approach consists of three main steps: (i) image segmentation, (ii) object analysis with a skeleton-based algorithm, and (iii) application of multiple CNNs for final classification. We also developed a large benchmark dataset, called IowaNet, with 1 million labeled images and 10 classes. In our approach, multiscale CNNs were trained to capture the best contextual information during the semantic labeling of objects, while the skeletonization algorithm provided a morphological representation (“medial axis”) of objects to support the selection of convolutional locations for CNN predictions. In general, the proposed multi-OCNN presented better classification accuracy (overall accuracy ~87.2%) than a traditional patch-based CNN (81.6%) and a fixed-input OCNN (82%). In addition, the results showed that this framework is 8.1 and 111.5 times faster than traditional pixel-wise CNN16 or CNN256, respectively. Multiple CNNs and object analysis proved essential for accurate and fast classification. While multi-OCNN produced a high level of spatial detail in the land cover product, misclassification was observed for some classes, such as road versus building or shadow versus lake. Despite these minor drawbacks, our results also demonstrated the benefits of the IowaNet training dataset for model performance: overfitting decreases as the number of samples increases. The limitations of multi-OCNN are partially explained by segmentation quality and the limited number of spectral bands in the aerial data. With the advance of deep learning methods, this study supports the claim of multi-OCNN's benefits for an operational large-scale land cover product at 1-m resolution.
Record number: A2020-634
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.08.004
Online publication date: 13/08/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.08.004
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96057
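The multiscale routing idea, choosing a CNN input scale per segmented object, can be illustrated with a simple size-based rule. The patch sizes (64/128/256 px) and the side-length criterion below are assumptions for illustration, not the selection rule actually used in the paper:

```python
# Illustration of routing each segmented object to a CNN input scale by size.
# Patch sizes and the side-length rule are assumptions, not the paper's criterion.

def route_scale(area_px, scales=(64, 128, 256)):
    """Return the smallest patch size whose side covers the object's
    equivalent side length; the largest scale is the fallback."""
    side = area_px ** 0.5
    for s in scales:
        if side <= s:
            return s
    return scales[-1]

# Small objects go to small patches, large objects to the largest one.
small, large = route_scale(1000), route_scale(200000)
```

Routing by object size is what lets one framework keep small patches (and their fine spatial detail) for small objects while giving large objects the wider context of a bigger receptive field.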
in ISPRS Journal of photogrammetry and remote sensing > vol 168 (October 2020) . - pp 56 - 73 [article]

A novel deep learning instance segmentation model for automated marine oil spill detection / Shamsudeen Temitope Yekeen in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: A novel deep learning instance segmentation model for automated marine oil spill detection
Document type: Article/Communication
Authors: Shamsudeen Temitope Yekeen, Author; Abdul‐Lateef Balogun, Author; Khamaruzaman B. Wan Yusof, Author
Year of publication: 2020
Pages: pp 190 - 200
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] automatic detection
[IGN terms] feature extraction
[IGN terms] hydrocarbon
[IGN terms] moiré radar image
[IGN terms] oil spill
[IGN terms] semantic segmentation
[IGN terms] computer vision
[IGN terms] area of interest
Abstract: (author) The visual similarity of oil slicks and other elements, known as look-alikes, affects the reliability of synthetic aperture radar (SAR) images for marine oil spill detection. So far, detection and discrimination of oil spills and look-alikes have been limited to traditional machine learning algorithms and semantic segmentation deep learning models with limited accuracy. Thus, this study developed a novel deep learning oil spill detection model using the computer vision instance segmentation Mask Region-based Convolutional Neural Network (Mask R-CNN) model. Model training was conducted using transfer learning with ResNet 101 (pre-trained on COCO) as the backbone, in combination with a Feature Pyramid Network (FPN) architecture for feature extraction, at 30 epochs with a 0.001 learning rate. Testing was conducted using the weights with the lowest training and validation loss on the withheld test images. The model’s performance was evaluated using precision, recall, specificity, IoU, F1-measure and overall accuracy. Ship detection and segmentation had the highest performance, with an overall accuracy of 98.3%. The model equally showed high accuracy for oil spill and look-alike detection and segmentation, although oil spill detection outperformed look-alike detection, with overall accuracy values of 96.6% and 91.0% respectively. The study concluded that the deep learning instance segmentation model performs better in detection and segmentation than conventional machine learning models and deep learning semantic segmentation models.
Record number: A2020-548
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.07.011
Online publication date: 28/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.07.011
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95774
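The evaluation metrics named in the abstract (precision, recall, specificity, F1, overall accuracy) all follow from a binary confusion matrix. A minimal sketch; the counts in the usage example are invented, not the paper's results:

```python
# Confusion-matrix metrics matching those named in the abstract above;
# the counts in the usage example are invented.

def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, specificity, F1 and overall accuracy from counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}

# Usage with invented counts: 90 true positives, 10 false positives,
# 10 false negatives, 90 true negatives.
m = detection_metrics(tp=90, fp=10, fn=10, tn=90)
```

Reporting specificity alongside recall is what separates "misses oil spills" from "flags look-alikes as spills", the two distinct failure modes the study discusses.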
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020) . - pp 190 - 200 [article]
Other records in this category:
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
Can SPOT-6/7 CNN semantic segmentation improve Sentinel-2 based land cover products? sensor assessment and fusion / Olivier Stocker in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
CNN semantic segmentation to retrieve past land cover out of historical orthoimages and DSM: first experiments / Arnaud Le Bris in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
Subpixel-pixel-superpixel-based multiview active learning for hyperspectral images classification / Yu Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
The image of subsurface geology / Ane Bang-Kittilsen in International journal of cartography, Vol 6 n° 2 (July 2020)
Unsupervised semantic and instance segmentation of forest point clouds / Di Wang in ISPRS Journal of photogrammetry and remote sensing, vol 165 (July 2020)
Counting of grapevine berries in images via semantic segmentation using convolutional neural networks / Laura Zabawa in ISPRS Journal of photogrammetry and remote sensing, vol 164 (June 2020)
A hybrid deep learning-based model for automatic car extraction from high-resolution airborne imagery / Mehdi Khoshboresh Masouleh in Applied geomatics, vol 12 n° 2 (June 2020)
Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding / Weihua Dong in Cartography and Geographic Information Science, vol 47 n° 3 (May 2020)
How much do we learn from addresses? On the syntax, semantics and pragmatics of addressing systems / Ali Javidaneh in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020)
Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification / Congcong Wen in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
Sea-land segmentation using deep learning techniques for Landsat-8 OLI imagery / Ting Yang in Marine geodesy, Vol 43 n° 2 (March 2020)
Tree annotations in LiDAR data using point densities and convolutional neural networks / Ananya Gupta in IEEE Transactions on geoscience and remote sensing, vol 58 n° 2 (February 2020)
Analysis, structuring and semantic enrichment of aerial images [slide show] / Valérie Gouet-Brunet (2020)
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Hybrid semantic mapping of urban scenes from image and Lidar data / Mohamed Boussaha (2020)
Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts / Rodrigo Caye Daudt (2020)
Deep learning for remote sensing images with open source software / Rémi Cresson (2020)