Descriptor
IGN descriptor terms > natural sciences > physics > image processing > image enhancement > contrast enhancement
contrast enhancement



FuNet: A novel road extraction network with fusion of location data and remote sensing imagery / Kai Zhou in ISPRS International journal of geo-information, vol 10 n° 1 (January 2021)
[article]
Title: FuNet: A novel road extraction network with fusion of location data and remote sensing imagery
Document type: Article/Communication
Authors: Kai Zhou; Yan Xie; Zhan Gao; et al.
Publication year: 2021
Article on page(s): n° 10
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] contrast enhancement
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] connectivity (topology)
[IGN descriptor terms] road network extraction
[IGN descriptor terms] image fusion
[IGN descriptor terms] iteration
[IGN descriptor terms] Beijing (China)
[IGN descriptor terms] semantic segmentation
Abstract: (author) Road semantic segmentation is unique and difficult. Road extraction from remote sensing imagery often produces fragmented road segments, leading to road-network disconnection due to occlusion by trees, buildings, shadows, clouds, etc. In this paper, we propose a novel fusion network (FuNet) that fuses remote sensing imagery with location data, in which location data plays an important role in road-connectivity reasoning. A universal iteration reinforcement (IteR) module is embedded into FuNet to enhance the network's learning ability. We designed the IteR formula to repeatedly integrate original and predicted information, and designed a reinforcement loss function to control the accuracy of the road prediction output. Another contribution of this paper is the use of histogram-equalization pre-processing, which enhances image contrast and improves accuracy by nearly 1%. We take D-LinkNet as the backbone network and design experiments on an open dataset. The results show that our method outperforms the compared state-of-the-art road extraction methods: it not only increases the accuracy of road extraction but also improves road topological connectivity.
Record number: A2021-147
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi10010039
Online publication date: 19/01/2021
Online: https://doi.org/10.3390/ijgi10010039
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97055
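The histogram-equalization pre-processing mentioned in the abstract can be sketched as follows. This is a minimal NumPy version of the classic algorithm, not the authors' code; the array names and the toy image are illustrative only:

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Stretch an 8-bit image's intensity histogram across the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero cumulative count
    # Classic equalization mapping: scale the normalized CDF to [0, 255].
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)), 0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast toy image: values packed into [100, 120] get spread over [0, 255].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = histogram_equalize(img)
```

After equalization the darkest occurring grey level maps to 0 and the brightest to 255, which is the contrast stretch the paper exploits before training.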
in ISPRS International journal of geo-information > vol 10 n° 1 (January 2021) . - n° 10
[article]

A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
[article]
Title: A novel deep network and aggregation model for saliency detection
Document type: Article/Communication
Authors: Ye Liang; Hongzhe Liu; Nan Ma
Publication year: 2020
Article on page(s): pp 1883 - 1895
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] network architecture
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] deconvolution
[IGN descriptor terms] feature extraction
[IGN descriptor terms] saliency
Abstract: (author) Recent deep learning-based methods for saliency detection have proved the effectiveness of integrating features at different scales. They usually design complex network architectures, e.g., multiple networks, to explore the multi-scale information of images, which is expensive in computation and memory. Feature maps produced by different subsampling convolutional layers have different spatial resolutions; therefore, they can be used as multi-scale features to reduce these costs. In this paper, by exploiting the in-network feature hierarchy of convolutional networks, we propose a novel multi-scale network for saliency detection (MSNSD) consisting of three modules, i.e., bottom-up feature extraction, top-down feature connection, and multi-scale saliency prediction. Moreover, to further boost the performance of MSNSD, an input-image-aware saliency aggregation method is proposed based on ridge regression, which combines MSNSD with several well-performing handcrafted shallow models. Extensive experiments on several benchmarks show that the proposed MSNSD outperforms state-of-the-art saliency methods with lower computational and memory complexity. Meanwhile, our aggregation method effectively and efficiently combines deep and shallow models, making them complementary to each other.
Record number: A2020-601
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-019-01781-9
Online publication date: 09/12/2019
Online: https://doi.org/10.1007/s00371-019-01781-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95952
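The ridge-regression aggregation idea — learning weights that blend a deep saliency map with shallower handcrafted ones — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation; the toy maps, the noise levels, and the regularization value are invented:

```python
import numpy as np

def ridge_aggregate(maps, target, lam=1.0):
    """Fit per-map weights w = (X'X + lam*I)^-1 X'y, then return the blended map."""
    X = np.stack([m.ravel() for m in maps], axis=1)  # pixels x models
    y = target.ravel()
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return np.clip((X @ w).reshape(target.shape), 0.0, 1.0)

# Toy data: a clean square mask observed through two noisy saliency maps.
rng = np.random.default_rng(0)
gt = np.zeros((32, 32))
gt[8:24, 8:24] = 1.0
deep = np.clip(gt + 0.1 * rng.standard_normal(gt.shape), 0.0, 1.0)
shallow = np.clip(gt + 0.3 * rng.standard_normal(gt.shape), 0.0, 1.0)
fused = ridge_aggregate([deep, shallow], gt)
```

The learned weights favor the less noisy map, so the fused output is closer to the reference than the weaker model alone — the complementarity effect the abstract describes.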
in The Visual Computer > vol 36 n° 9 (September 2020) . - pp 1883 - 1895
[article]

Conditional random field and deep feature learning for hyperspectral image classification / Fahim Irfan Alam in IEEE Transactions on geoscience and remote sensing, vol 57 n° 3 (March 2019)
[article]
Title: Conditional random field and deep feature learning for hyperspectral image classification
Document type: Article/Communication
Authors: Fahim Irfan Alam; Jun Zhou; Alan Wee-Chung Liew; Xiuping Jia; et al.
Publication year: 2019
Article on page(s): pp 1612 - 1628
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] multiband analysis
[IGN descriptor terms] conditional random field
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] deconvolution
[IGN descriptor terms] 3D geolocated data
[IGN descriptor terms] hyperspectral image
[IGN descriptor terms] voxel
Abstract: (author) Image classification is considered one of the critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful classification model by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) further helps capture contextual information and thus improves classification performance. In this paper, we propose a method to classify hyperspectral images by considering both spectral and spatial information via a combined framework consisting of a CNN and a CRF. We use multiple spectral band groups to learn deep features using a CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract the semantic correlations between patches consisting of 3-D data cubes. Furthermore, we introduce a deep deconvolution network that improves the final classification performance. We also introduce a new data set and evaluate our method on it along with several widely adopted benchmark data sets. Comparing our results with those of several state-of-the-art models shows the promising potential of our method.
Record number: A2019-131
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2018.2867679
Online publication date: 20/09/2018
Online: https://doi.org/10.1109/TGRS.2018.2867679
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92461
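The unary-plus-pairwise structure of the CRF described in the abstract can be illustrated with a toy energy function. Note that the paper learns both potentials with CNNs; the fixed Potts pairwise term and the numeric costs here are a simplification for illustration only:

```python
import numpy as np

def crf_energy(labels, unary, edges, w=0.5):
    """E(y) = sum_i unary[i, y_i] + w * #{(i, j) in edges : y_i != y_j} (Potts pairwise)."""
    data_term = unary[np.arange(len(labels)), labels].sum()
    smooth_term = w * sum(1 for i, j in edges if labels[i] != labels[j])
    return data_term + smooth_term

# Three pixels in a chain, two classes; unary costs favor the labeling (0, 0, 1).
unary = np.array([[0.1, 2.0],
                  [0.2, 1.5],
                  [1.8, 0.3]])
edges = [(0, 1), (1, 2)]
smooth = crf_energy(np.array([0, 0, 1]), unary, edges)  # 0.6 unary + 0.5 pairwise
jagged = crf_energy(np.array([0, 1, 1]), unary, edges)  # 1.9 unary + 0.5 pairwise
```

Inference picks the labeling with the lowest energy, so the pairwise term penalizes label changes that the data term does not support — the contextual smoothing a CRF adds on top of per-patch CNN scores.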
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 3 (March 2019) . - pp 1612 - 1628
[article]

Object-based superresolution land-cover mapping from remotely sensed imagery / Yuehong Chen in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)
[article]
Title: Object-based superresolution land-cover mapping from remotely sensed imagery
Document type: Article/Communication
Authors: Yuehong Chen; Yong Ge; Gerard B.M. Heuvelink; Ru An; Yu Chen
Publication year: 2018
Article on page(s): pp 328 - 340
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] object-based classification
[IGN descriptor terms] per-pixel classification
[IGN descriptor terms] deconvolution
[IGN descriptor terms] kriging
[IGN descriptor terms] land cover
[IGN descriptor terms] linear programming
[IGN descriptor terms] variogram
Abstract: (author) Superresolution mapping (SRM) is a widely used technique to address the mixed-pixel problem in pixel-based classification. Advanced object-based classification faces a similar mixed phenomenon: a mixed object that contains different land-cover classes. Currently, most SRM approaches focus on estimating the spatial location of classes within mixed pixels in pixel-based classification. Little if any consideration has been given to predicting where classes are spatially distributed within mixed objects. This paper therefore proposes a new object-based SRM strategy (OSRM) to deal with mixed objects in object-based classification. First, it uses the deconvolution technique to estimate the semivariograms at the target subpixel scale from the class proportions of irregular objects. Then, an area-to-point kriging method is applied to predict the soft class values of subpixels within each object according to the estimated semivariograms and the class proportions of objects. Finally, a linear optimization model at the object level is built to determine the optimal class labels of subpixels within each object. Two synthetic images and a real remote sensing image were used to evaluate the performance of OSRM. The experimental results demonstrated that OSRM generated more land-cover detail within mixed objects than did traditional object-based hard classification, and performed better than an existing pixel-based SRM method. Hence, OSRM provides a valuable solution to mixed objects in object-based classification.
Record number: A2018-186
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2747624
Online publication date: 20/09/2017
Online: https://doi.org/10.1109/TGRS.2017.2747624
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89843
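The semivariogram that OSRM estimates and then feeds into area-to-point kriging can be illustrated with a minimal empirical estimator on a 1-D transect. This sketch covers only the variogram-estimation ingredient — not the deconvolution to subpixel scale or the kriging itself — and the toy signal is invented:

```python
import numpy as np

def empirical_semivariogram(z, max_lag):
    """gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] over all pairs at lag h (1-D, unit spacing)."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

# Smooth toy transect: semivariance grows with lag at short distances,
# which is the spatial-structure signal kriging exploits.
z = np.sin(np.linspace(0.0, np.pi, 50))
gamma = empirical_semivariogram(z, max_lag=5)
```

A model (spherical, exponential, ...) is normally fitted to such empirical points before kriging; OSRM additionally deconvolves the object-scale variogram down to the subpixel support.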
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 1 (January 2018) . - pp 328 - 340
[article]

Unsupervised-restricted deconvolutional neural network for very high resolution remote-sensing image classification / Yiting Tao in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
[article]
Title: Unsupervised-restricted deconvolutional neural network for very high resolution remote-sensing image classification
Document type: Article/Communication
Authors: Yiting Tao; Miaozhong Xu; Fan Zhang; Bo Du; Liangpei Zhang
Publication year: 2017
Article on page(s): pp 6805 - 6823
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] unsupervised learning
[IGN descriptor terms] per-pixel classification
[IGN descriptor terms] deconvolution
[IGN descriptor terms] Geoeye imagery
[IGN descriptor terms] Quickbird imagery
[IGN descriptor terms] kernel-based method
[IGN descriptor terms] convolutional neural network
Abstract: (author) As the acquisition of very high resolution (VHR) satellite images becomes easier owing to technological advancements, ever more stringent requirements are being imposed on automatic image interpretation, and per-pixel classification has become a focus of research interest. However, efficient and effective processing and interpretation of VHR satellite images remain a critical task. Convolutional neural networks (CNNs) have recently been applied to VHR satellite images with considerable success. However, prevalent CNN models accept input data of fixed sizes and train the classifier using features extracted directly from the convolutional stages or the fully connected layers, which cannot yield pixel-to-pixel classifications. Moreover, training a CNN model requires large amounts of labeled reference data, which are challenging to obtain because per-pixel labeled VHR satellite images are not open access. In this paper, we propose a framework called the unsupervised-restricted deconvolutional neural network (URDNN). It solves these problems by learning an end-to-end, pixel-to-pixel classification and handling VHR classification using a fully convolutional network and a small number of labeled pixels. In URDNN, supervised learning is always under the restriction of unsupervised learning, which serves to constrain and aid supervised training in learning more generalized and abstract features. To some degree, this reduces the problems of overfitting and undertraining that arise from the scarcity of labeled training data, and yields better classification results from fewer training samples, improving the generality of the classification model. We tested the proposed URDNN on images from the Geoeye and Quickbird sensors and obtained satisfactory results, with the highest overall accuracy (OA) reaching 0.977 and 0.989, respectively.
Experiments showed that the combined effect of additional kernels and stages produced better results, and the two-stage URDNN consistently produced more stable results. We compared URDNN with four methods and found that, with a small ratio of selected labeled data items, it yielded the highest and most stable results, whereas the accuracy of the other methods quickly decreased. For some categories with fewer training pixels, the accuracy of the other methods was considerably worse than that of URDNN, with the largest difference reaching almost 10%. Hence, the proposed URDNN can successfully handle VHR image classification using a small number of labeled pixels, and it is more effective than state-of-the-art methods.
Record number: A2017-766
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2734697
Online: https://doi.org/10.1109/TGRS.2017.2734697
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=88803
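The "supervised learning under the restriction of unsupervised learning" idea can be read as a joint objective: cross-entropy on the few labeled pixels plus a reconstruction term on all pixels. The sketch below is an interpretation of the abstract, not the authors' URDNN code; the weighting factor `alpha`, the array layout, and the toy numbers are all invented:

```python
import numpy as np

def combined_loss(logits, labels, labeled_mask, recon, image, alpha=0.5):
    """Cross-entropy on labeled pixels + alpha * reconstruction MSE on every pixel."""
    z = logits - logits.max(axis=1, keepdims=True)      # numerically stable softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(labels)), labels][labeled_mask].mean()
    mse = np.mean((recon - image) ** 2)                 # unsupervised restriction term
    return ce + alpha * mse

# Two pixels, two classes; only the first pixel carries a reference label.
logits = np.array([[4.0, -4.0], [0.3, 0.2]])
labels = np.array([0, 0])
labeled_mask = np.array([True, False])
image = np.linspace(0.0, 1.0, 2)
loss = combined_loss(logits, labels, labeled_mask, recon=image, image=image)
```

Because the unsupervised term is evaluated on every pixel, it keeps shaping the features even when almost no labels are available — which is how the framework copes with the labeled-data scarcity the abstract emphasizes.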
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 12 (December 2017) . - pp 6805 - 6823
[article]

Sparse bayesian learning-based time-variant deconvolution / Sanyi Yuan in IEEE Transactions on geoscience and remote sensing, vol 55 n° 11 (November 2017)
Permalink
Remote sensing scene classification by unsupervised representation learning / Xiaoqiang Lu in IEEE Transactions on geoscience and remote sensing, vol 55 n° 9 (September 2017)
Permalink
Gold – A novel deconvolution algorithm with optimization for waveform LiDAR processing / Tan Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 129 (July 2017)
Permalink
Describing contrast across scales / Sohaib Ali Syed in ISPRS Journal of photogrammetry and remote sensing, vol 128 (June 2017)
Permalink
Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency / Changjiang Liu in ISPRS Journal of photogrammetry and remote sensing, vol 128 (June 2017)
Permalink
Télédétection pour l'observation des surfaces continentales, ch. 6. Méthodes de traitement de données lidar / Clément Mallet (2017)
Permalink
An inquiry on contrast enhancement methods for satellite images / Jose-Luis Lisani in IEEE Transactions on geoscience and remote sensing, vol 54 n° 12 (December 2016)
Permalink
An iterative interpolation deconvolution algorithm for superresolution land cover mapping / Feng Ling in IEEE Transactions on geoscience and remote sensing, vol 54 n° 12 (December 2016)
Permalink
A synchronization algorithm for spaceborne/stationary BiSAR imaging based on contrast optimization with direct signal from radar satellite / M. Zhang in IEEE Transactions on geoscience and remote sensing, vol 54 n° 4 (April 2016)
Permalink