Descriptor
IGN descriptor terms > natural sciences > physics > image processing > digital image analysis > image segmentation > semantic segmentation
semantic segmentation
LANet: Local attention embedding to improve the semantic segmentation of remote sensing images / Lei Ding in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
Title: LANet: Local attention embedding to improve the semantic segmentation of remote sensing images
Document type: Article/Communication
Authors: Lei Ding; Hao Tang; Lorenzo Bruzzone
Year of publication: 2021
Pages: pp 426 - 435
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] data analysis
[IGN descriptor terms] deep learning
[IGN descriptor terms] classification by convolutional neural network
[IGN descriptor terms] decoding
[IGN descriptor terms] spatial distribution
[IGN descriptor terms] feature extraction
[IGN descriptor terms] semantic segmentation
Abstract: (author) The trade-off between feature representation power and spatial localization accuracy is crucial for the dense classification/semantic segmentation of remote sensing images (RSIs). High-level features extracted from the late layers of a neural network are rich in semantic information yet have blurred spatial details; low-level features extracted from the early layers of a network contain more pixel-level information but are isolated and noisy. It is therefore difficult to bridge the gap between high- and low-level features, given their differences in physical information content and spatial distribution. In this article, we contribute to solving this problem by enhancing the feature representation in two ways. On the one hand, a patch attention module (PAM) is proposed to enhance the embedding of context information based on a patchwise calculation of local attention. On the other hand, an attention embedding module (AEM) is proposed to enrich the semantic information of low-level features by embedding local focus from high-level features. Both proposed modules are lightweight and can be applied to process the extracted features of convolutional neural networks (CNNs). Experiments show that, by integrating the proposed modules into a baseline fully convolutional network (FCN), the resulting local attention network (LANet) greatly improves performance over the baseline and outperforms other attention-based methods on two RSI data sets.
Record number: A2021-035
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2994150
Online publication date: 27/05/2020
Online: https://doi.org/10.1109/TGRS.2020.2994150
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96737
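The patchwise local attention described in the abstract can be illustrated with a minimal NumPy sketch: one gate per spatial patch, computed from the patch's pooled channel descriptor. The function name, shapes, and the sigmoid gate standing in for a learned 1x1 convolution are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def patch_attention(features, patch=4):
    """Patchwise local attention, loosely following the PAM idea:
    pool each spatial patch into a channel descriptor, derive a
    per-channel gate from it, and rescale the patch's features.
    Names and shapes are illustrative assumptions."""
    c, h, w = features.shape
    out = np.empty_like(features)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = features[:, y:y + patch, x:x + patch]
            # pooled channel descriptor of this patch
            desc = block.mean(axis=(1, 2))
            # sigmoid gate per channel (stand-in for a learned 1x1 conv)
            gate = 1.0 / (1.0 + np.exp(-desc))
            out[:, y:y + patch, x:x + patch] = block * gate[:, None, None]
    return out
```

Because each gate lies in (0, 1), the module only rescales features locally; it never changes their spatial layout.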
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 1 (January 2021) . - pp 426 - 435

Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation / Huan Ning in Annals of GIS, vol 26 n° 4 (December 2020)
Title: Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation
Document type: Article/Communication
Authors: Huan Ning; Zhenlong Li; Cuizhen Wang; et al.
Year of publication: 2020
Pages: pp 329 - 342
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] classification by convolutional neural network
[IGN descriptor terms] contour
[IGN descriptor terms] feature extraction
[IGN descriptor terms] dataset
[IGN descriptor terms] Kiangsi (China)
[IGN descriptor terms] land cover
[IGN descriptor terms] image segmentation
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] dataset size
Abstract: (author) Land cover data is an inventory of objects on the Earth's surface, often derived from remotely sensed imagery. The deep convolutional neural network (DCNN) is a competitive method for image semantic segmentation. Some scholars argue that the inadequacy of training sets is an obstacle when applying DCNNs to remote sensing image segmentation. While existing land cover data can be converted into large training sets, the size of the training set needs to be carefully considered. In this paper, we used different portions of a high-resolution land cover map to produce training sets of different sizes to train DCNNs (SegNet and U-Net) and then quantitatively evaluated the impact of training set size on the performance of the trained DCNN. We also introduced a new metric, Edge-ratio, to assess the performance of DCNNs in maintaining the boundaries of land cover objects. Based on the experiments, we document the relationship between segmentation accuracy and training set size, as well as the nonstationary accuracies among different land cover types. The findings of this paper can be used to effectively tailor existing land cover data into training sets, and thus accelerate the assessment and employment of deep learning techniques for high-resolution land cover map extraction.
Record number: A2020-800
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/19475683.2020.1803402
Online publication date: 10/08/2020
Online: https://doi.org/10.1080/19475683.2020.1803402
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96723
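The study's idea of carving training sets of several sizes out of one existing land cover map can be sketched as follows. Tile size, the fractions tried, and random sampling are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

def tile_fractions(label_map, tile=64, fractions=(0.1, 0.5, 1.0), seed=0):
    """Cut a land cover label map into non-overlapping tiles and keep
    different portions of them, mimicking the construction of training
    sets of several sizes from one map. Parameters are illustrative."""
    h, w = label_map.shape
    tiles = [label_map[y:y + tile, x:x + tile]
             for y in range(0, h - tile + 1, tile)
             for x in range(0, w - tile + 1, tile)]
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(tiles))
    # for each fraction, keep that share of the shuffled tiles (at least one)
    return {f: [tiles[i] for i in order[:max(1, int(f * len(tiles)))]]
            for f in fractions}
```

Training the same network on each subset and comparing accuracies then reproduces the kind of size-versus-accuracy curve the paper documents.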
in Annals of GIS > vol 26 n° 4 (December 2020) . - pp 329 - 342

Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks / Felix Schiefer in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
Title: Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks
Document type: Article/Communication
Authors: Felix Schiefer; Teja Kattenborn; Annett Frick; et al.
Year of publication: 2020
Pages: pp 205-215
General note: bibliography
Language: English (eng)
Descriptors: [IGN descriptor terms] deep learning
[IGN descriptor terms] tree (flora)
[IGN descriptor terms] forest map
[IGN descriptor terms] classification by convolutional neural network
[IGN descriptor terms] plant species
[IGN descriptor terms] Black Forest massif
[IGN descriptor terms] high-resolution image
[IGN descriptor terms] UAV-acquired image
[IGN descriptor terms] RGB image
[IGN descriptor terms] forest inventory (techniques and methods)
[IGN descriptor terms] local forest inventory
[IGN descriptor terms] semantic segmentation
[IGN subject headings] Forest inventory
Abstract: (author) The use of unmanned aerial vehicles (UAVs) in vegetation remote sensing allows a time-flexible and cost-effective acquisition of very high-resolution imagery. Still, current methods for the mapping of forest tree species do not exploit the respective, rich spatial information. Here, we assessed the potential of convolutional neural networks (CNNs) and very high-resolution RGB imagery from UAVs for the mapping of tree species in temperate forests. We used multicopter UAVs to obtain very high-resolution (
Record number: A2020-706
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.10.015
Online publication date: 03/11/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.10.015
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96236
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 205-215

MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds / Haifeng Luo in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
Title: MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds
Document type: Article/Communication
Authors: Haifeng Luo; Chongcheng Chen; Lina Fang; et al.
Year of publication: 2020
Pages: pp 8301 - 8315
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN descriptor terms] deep learning
[IGN descriptor terms] cognition
[IGN descriptor terms] lidar data
[IGN descriptor terms] feature extraction
[IGN descriptor terms] multiple representation
[IGN descriptor terms] urban scene
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] point cloud
Abstract: (author) Semantic segmentation is one of the fundamental tasks in understanding and applying urban scene point clouds. Recently, deep learning has been introduced to the field of point cloud processing. However, compared to images, which are characterized by a regular data structure, a point cloud is a set of unordered points, which makes semantic segmentation a challenge. Consequently, existing deep learning methods for the semantic segmentation of point clouds achieve less success than those applied to images. In this article, we propose a novel method for urban scene point cloud semantic segmentation using deep learning. First, we use homogeneous supervoxels to reorganize raw point clouds, effectively reducing the computational complexity and mitigating the nonuniform point distribution. Then, we use supervoxels as basic processing units, which can further expand receptive fields to obtain more descriptive contexts. Next, a sparse autoencoder (SAE) is presented for feature embedding representations of the supervoxels. Subsequently, we propose a regional relation feature reasoning module (RRFRM) inspired by relation reasoning networks and design a multiscale regional relation feature segmentation network (MS-RRFSegNet) based on the RRFRM to semantically label supervoxels. Finally, the supervoxel-level inferences are transformed into point-level fine-grained predictions. The proposed framework is evaluated on two open benchmarks (Paris-Lille-3D and Semantic3D). The evaluation results show that the proposed method achieves competitive overall performance and outperforms other related approaches in several object categories. An implementation of our method is available at: https://github.com/HiphonL/MS_RRFSegNet.
Record number: A2020-738
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2985695
Online publication date: 28/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2985695
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96363
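The final step the abstract describes, transforming supervoxel-level inferences into point-level predictions, reduces to an index lookup once each point knows its supervoxel. A minimal sketch, assuming integer index and label arrays (the actual mapping structures are not described in the record):

```python
import numpy as np

def propagate_labels(point_to_supervoxel, supervoxel_labels):
    """Assign each point the predicted label of the supervoxel it
    belongs to, turning supervoxel-level inferences into point-level
    ones. Array layout is an illustrative assumption."""
    # fancy indexing: one label per point, looked up by supervoxel id
    return supervoxel_labels[point_to_supervoxel]
```

Because every point in a supervoxel receives the same label, the fine-grained quality of the result depends on how homogeneous the supervoxels are, which is why the pipeline builds them first.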
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8301 - 8315

Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss / Xianwei Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
Title: Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss
Document type: Article/Communication
Authors: Xianwei Zheng; Linxi Huan; Gui-Song Xia; Jianya Gong
Year of publication: 2020
Pages: pp 15-28
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] region-based classification
[IGN descriptor terms] classification by convolutional neural network
[IGN descriptor terms] contour
[IGN descriptor terms] very high-resolution image
[IGN descriptor terms] kernel-based method
[IGN descriptor terms] urban scene
[IGN descriptor terms] semantic segmentation
Abstract: (author) Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task in urban scene understanding. However, due to the huge quantity of details contained in an image and the large variations of objects in scale and appearance, existing semantic segmentation methods often break one object into pieces or confuse adjacent objects, and thus fail to depict these objects consistently. To address these issues uniformly, we propose a standalone end-to-end edge-aware neural network (EaNet) for urban scene semantic segmentation. For semantic consistency preservation inside objects, the EaNet model incorporates a large kernel pyramid pooling (LKPP) module to capture rich multi-scale context with strong continuous feature relations. To effectively separate confusing objects with sharp contours, a Dice-based edge-aware loss function (EA loss) is devised to guide the EaNet to refine both the pixel- and image-level edge information directly from the semantic segmentation prediction. In the proposed EaNet model, the LKPP module and the EA loss couple to enable comprehensive feature learning across an entire semantic object. Extensive experiments on three challenging datasets demonstrate that our method can be readily generalized to multi-scale ground/aerial urban scene images, achieving 81.7% mIoU on the Cityscapes test set and a 90.8% mean F1-score on the ISPRS Vaihingen 2D test set. Code is available at: https://github.com/geovsion/EaNet.
Record number: A2020-703
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.09.019
Online publication date: 14/10/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.019
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96228
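The soft Dice overlap underlying the paper's edge-aware (EA) loss can be sketched in a few lines. The edge-map extraction it is applied to, and any combination with other loss terms, are omitted here since they are not specified in the record.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary
    target: 1 minus twice the intersection over the sum of masses.
    A generic formulation, not the paper's exact EA loss."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

The loss is 0 for a perfect match and approaches 1 for disjoint maps, which makes it well suited to thin structures such as object edges, where pixelwise losses are dominated by the background.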
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 15-28

Semantic trajectory segmentation based on change-point detection and ontology / Yuan Gao in International journal of geographical information science IJGIS, vol 34 n° 12 (December 2020)
Permalink

Unsupervised deep joint segmentation of multitemporal high-resolution images / Sudipan Saha in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
Permalink

Active and incremental learning for semantic ALS point cloud segmentation / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 169 (November 2020)
Permalink

River ice segmentation with deep learning / Abhineet Singh in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
Permalink

Streets of London: Using Flickr and OpenStreetMap to build an interactive image of the city / Azam Raha Bahrehdar in Computers, Environment and Urban Systems, vol 84 (November 2020)
Permalink

A novel deep learning instance segmentation model for automated marine oil spill detection / Shamsudeen Temitope Yekeen in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
Permalink

Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
Permalink

Can SPOT-6/7 CNN semantic segmentation improve Sentinel-2 based land cover products? sensor assessment and fusion / Olivier Stocker in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
Permalink

CNN semantic segmentation to retrieve past land cover out of historical orthoimages and DSM: first experiments / Arnaud Le Bris in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2 (August 2020)
Permalink

Unsupervised semantic and instance segmentation of forest point clouds / Di Wang in ISPRS Journal of photogrammetry and remote sensing, vol 165 (July 2020)
Permalink