Documents available in this category (1366)
Cloud detection from paired CrIS water vapor and CO₂ channels using machine learning techniques / Miao Tian in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title : Cloud detection from paired CrIS water vapor and CO₂ channels using machine learning techniques
Document type : Article/Communication
Authors : Miao Tian, Author ; Hao Chen, Author ; Guanghui Liu, Author
Publication year : 2021
Pages : pp 2781 - 2793
General note : bibliography
Languages : English (eng)
Descriptors : [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] multilayer perceptron classification
[IGN terms] support vector machine classification
[IGN terms] cloud detection
[IGN terms] carbon dioxide
[IGN terms] infrared image
[IGN terms] atmospheric model
[IGN terms] radiative transfer model
[IGN terms] linear regression
[IGN terms] water vapor
Abstract : (author) Accurate cloud detection from infrared (IR) data is very challenging because of the many limitations and uncertainties inherent in satellite IR remote sensing. This article proposes an end-to-end cloud detection method for the Cross-track Infrared Sounder (CrIS) using machine learning (ML) techniques. The brightness temperatures from paired CrIS channels in the longwave and midwave water vapor bands and the longwave and shortwave CO₂ bands are used. After obtaining the linear regression coefficients for each of the selected channel pairs, a complete set of CrIS full spectral resolution (FSR) cloud detection indices (FCDI) is derived from the temperature difference between the regression and the observation for each channel pair. Comparison with the cloud products (CPs) from the Visible Infrared Imaging Radiometer Suite (VIIRS) shows that the FCDI captures cloud location and structure well. After collocating the FCDI with the VIIRS CP, ML techniques such as the extreme learning machine, support vector machine, and multilayer perceptron are trained on the collocated FCDIs for cloud detection. Simulation results show that the accuracy of FCDI cloud detection is slightly above 80%. Moreover, the results encourage the use of the water vapor bands in the FCDI, in addition to the CO₂ bands.
Record number : A2021-281
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2020.3020120
Online publication date : 18/12/2020
Online : https://doi.org/10.1109/TGRS.2020.3020120
Electronic resource format : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97387
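The channel-pair regression behind the FCDI can be sketched in a few lines: fit a clear-sky linear relation between the brightness temperatures of two channels, then score each pixel by the difference between the regression prediction and the observation. The sketch below uses synthetic data; the function names, the 5 K threshold, and the clear-sky model are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def fit_pair_regression(bt_x, bt_y):
    """Least-squares fit bt_y ~ slope * bt_x + intercept over clear-sky samples."""
    slope, intercept = np.polyfit(bt_x, bt_y, 1)
    return slope, intercept

def cloud_index(bt_x, bt_y, slope, intercept):
    """Residual between the regression prediction and the observation
    (an FCDI-like index for one channel pair)."""
    return (slope * bt_x + intercept) - bt_y

# Synthetic clear-sky brightness temperatures for one hypothetical channel pair.
rng = np.random.default_rng(0)
bt_x = rng.uniform(250.0, 290.0, 500)
bt_y = 0.8 * bt_x + 40.0 + rng.normal(0.0, 0.2, 500)
slope, intercept = fit_pair_regression(bt_x, bt_y)

# Clouds depress the observed brightness temperature, so a cloudy pixel
# yields a large positive index relative to the clear-sky regression.
cloudy_obs = 0.8 * 270.0 + 40.0 - 15.0   # 15 K colder than the clear-sky fit
idx = cloud_index(np.array([270.0]), np.array([cloudy_obs]), slope, intercept)
is_cloudy = idx > 5.0                     # threshold is illustrative only
```

A classifier (SVM, MLP, extreme learning machine) would then be trained on such indices collocated with an independent cloud mask, as the article does with VIIRS cloud products.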
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 2781 - 2793 [article]

A convolutional neural network approach to predict non‐permissive environments from moderate‐resolution imagery / Seth Goodman in Transactions in GIS, Vol 25 n° 2 (April 2021)
[article]
Title : A convolutional neural network approach to predict non‐permissive environments from moderate‐resolution imagery
Document type : Article/Communication
Authors : Seth Goodman, Author ; Ariel BenYishay, Author ; Daniel Runfola, Author
Publication year : 2021
Pages : pp 674 - 691
General note : bibliography
Languages : English (eng)
Descriptors : [IGN subject headings] Optical image processing
[IGN terms] conflict
[IGN terms] Landsat-8 image
[IGN terms] implementation (computing)
[IGN terms] Nigeria
[IGN terms] prediction
[IGN terms] convolutional neural network
Abstract : (author) Convolutional neural networks (CNNs) trained with satellite imagery have been used successfully to generate measures of development indicators, such as poverty, in developing nations. This article explores a CNN‐based approach leveraging Landsat 8 imagery to predict locations of conflict‐related deaths. Using Nigeria as a case study, we use the Armed Conflict Location & Event Data (ACLED) dataset to identify locations of conflict events that did or did not result in a death. Imagery for each location is used as input to train a CNN to distinguish fatal from non‐fatal events. Using 2014 imagery, we are able to predict the outcome of conflict events in the following year (2015) with 80% accuracy. While our approach does not replace the need for causal studies into the drivers of conflict deaths, it provides a low‐cost approach to prediction that requires only publicly available imagery. Findings suggest that the information contained in moderate‐resolution imagery can be used to predict the likelihood of a conflict death at a given location in Nigeria in the following year, and that CNN‐based methods of estimating development‐related indicators may be effective in applications beyond those explored in the literature.
Record number : A2021-361
Authors' affiliation : non IGN
Theme : IMAGERY/COMPUTER SCIENCE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1111/tgis.12661
Online publication date : 13/07/2020
Online : https://doi.org/10.1111/tgis.12661
Electronic resource format : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97625
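A minimal sketch of the kind of binary patch classifier the article trains: a convolution, a ReLU, global average pooling, and a logistic output scoring a patch as fatal or non-fatal. Every name, size, and weight below is hypothetical; the actual model is a CNN trained on Landsat 8 patches labeled with ACLED outcomes.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation of a single-band image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_fatal(patch, kernel, weight, bias):
    """Conv -> ReLU -> global average pool -> logistic score in (0, 1)."""
    feat = np.maximum(conv2d_valid(patch, kernel), 0.0).mean()
    return 1.0 / (1.0 + np.exp(-(weight * feat + bias)))

rng = np.random.default_rng(1)
patch = rng.uniform(0.0, 1.0, (16, 16))   # stand-in for one Landsat 8 band patch
kernel = rng.normal(0.0, 1.0, (3, 3))     # untrained filter, illustration only
p = predict_fatal(patch, kernel, weight=2.0, bias=-1.0)
```

In practice the weights would be learned from labeled event locations; this only shows the forward pass shape of such a model.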
in Transactions in GIS > Vol 25 n° 2 (April 2021) . - pp 674 - 691 [article]

Hyperspectral image denoising via clustering-based latent variable in variational Bayesian framework / Peyman Azimpour in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title : Hyperspectral image denoising via clustering-based latent variable in variational Bayesian framework
Document type : Article/Communication
Authors : Peyman Azimpour, Author ; Tahereh Bahraini, Author ; Hadi Sadoghi Yazdi, Author
Publication year : 2021
Pages : pp 3266 - 3276
General note : bibliography
Languages : English (eng)
Descriptors : [IGN subject headings] Optical image processing
[IGN terms] cluster analysis
[IGN terms] Bayesian classification
[IGN terms] fuzzy classification
[IGN terms] Gaussian distribution
[IGN terms] non-negative matrix factorization
[IGN terms] noise filtering
[IGN terms] Gaussian filter
[IGN terms] hyperspectral image
[IGN terms] Matlab
[IGN terms] graphics processing unit
[IGN terms] data quality
[IGN terms] variable
Abstract : (author) The hyperspectral-image (HSI) noise-reduction step is a significant preprocessing phase of data-quality enhancement, and it has been attracting immense research attention in the remote sensing and image processing domains. Many methods have been developed for HSI restoration whose goal is to remove noise from the whole HSI cube simultaneously, without considering spectral–spatial similarity. When a noise-removal algorithm is applied globally to the entire data set, it cannot eliminate all levels of noise effectively. Furthermore, most existing methods remove independent and identically distributed (i.i.d.) Gaussian noise, whereas real scenarios are much more complicated than this assumption. Natural noise with a non-i.i.d. structure leads such methods to underestimate the noise and perform poorly. In this article, we compute spatial–spectral similarity criteria by defining a set of clustering-based latent variables (CLVs) in a Bayesian framework to improve robustness. These criteria can be extracted using clustering operators. Then, by applying the CLVs to the variational Bayesian model, we develop a new low-rank matrix factorization denoising approach based on the proposed clustering-based latent variable (CLV-LRMF) to remove noise with a non-i.i.d. mixture-of-Gaussians structure. Finally, we switch to the GPU for the MATLAB implementation to reduce the runtime. The experimental results show that performance is improved by applying the proposed CLV and demonstrate the effectiveness of the proposed CLV-LRMF over other state-of-the-art methods.
Record number : A2021-287
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2019.2939512
Online publication date : 24/03/2021
Online : https://doi.org/10.1109/TGRS.2019.2939512
Electronic resource format : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97396
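The low-rank core of LRMF denoising (without the clustering-based latent variables or the variational Bayesian machinery of the article) can be illustrated with a truncated SVD on the unfolded cube. A minimal sketch, assuming the rank is known and the noise is small:

```python
import numpy as np

def lowrank_denoise(cube, rank):
    """Unfold an HSI cube (H, W, B) into a pixels-by-bands matrix, keep the
    top singular components, and fold back: the generic low-rank step that
    LRMF-style denoisers build on."""
    h, w, b = cube.shape
    x = cube.reshape(h * w, b)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    x_hat = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return x_hat.reshape(h, w, b)

rng = np.random.default_rng(2)
# Synthetic rank-2 cube plus i.i.d. Gaussian noise (the simple case the
# article argues is unrealistic, used here only to show the mechanism).
base = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 8))
clean = base.reshape(8, 8, 8)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = lowrank_denoise(noisy, rank=2)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The article's contribution is precisely what this sketch omits: cluster-wise latent variables so that non-i.i.d. mixture-of-Gaussians noise is modeled instead of assumed i.i.d.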
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 3266 - 3276 [article]

Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title : Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss
Document type : Article/Communication
Authors : Ruoqiao Jiang, Author ; Shaohui Mei, Author ; Mingyang Ma, Author ; et al., Author
Publication year : 2021
Pages : pp 3326 - 3337
General note : bibliography
Languages : English (eng)
Descriptors : [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] sample
[IGN terms] feature extraction
[IGN terms] very-high-resolution image
[IGN terms] invariant
[IGN terms] Siamese neural network
[IGN terms] rotation
Abstract : (author) Rotation-invariant features are of great importance for object detection and image classification in very-high-resolution (VHR) optical remote sensing images. Though the multibranch convolutional neural network (mCNN) has been demonstrated to be very effective for rotation-invariant feature learning, how to train such a network effectively is still an open problem. In this article, a nested Siamese structure (NSS) is proposed for training the mCNN to learn effective rotation-invariant features; it consists of an inner Siamese structure that enhances intraclass cohesion and an outer Siamese structure that enlarges the interclass margin. Moreover, a double center loss (DCL) function, in which training samples from the same class are mapped close to each other while those from different classes are mapped far apart, is proposed to train the NSS even with a small number of training samples. Experimental results on three benchmark data sets demonstrate that the NSS trained with DCL handles rotation variation effectively when learning features for image classification and outperforms several state-of-the-art rotation-invariant feature learning algorithms, even when only a small number of training samples is available.
Record number : A2021-286
Authors' affiliation : non IGN
Theme : IMAGERY
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2020.3021283
Online publication date : 18/07/2020
Online : https://doi.org/10.1109/TGRS.2020.3021283
Electronic resource format : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97395
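A double-center-style loss can be sketched directly from the abstract's description: an intra-class pull toward each class center plus a hinge-style push between class centers. The exact form and weighting of the paper's loss may differ; the version below is a plain interpretation with an assumed margin.

```python
import numpy as np

def double_center_loss(feats, labels, margin=1.0):
    """Intra-class pull (squared distance to own class center) plus
    inter-class push (squared hinge on the margin between class centers)."""
    classes = np.unique(labels)
    centers = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    pull = np.mean([np.sum((feats[labels == c] - centers[i]) ** 2)
                    for i, c in enumerate(classes)])
    push = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centers[i] - centers[j])
            push += max(0.0, margin - d) ** 2
    return pull + push

labels = np.array([0, 0, 1, 1])
# Tight, well-separated classes score low; overlapping classes score high.
separated = double_center_loss(
    np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]]), labels)
overlapping = double_center_loss(
    np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]]), labels)
```

Minimizing such a loss over the embedding network is what pulls same-class samples together and pushes class centers apart, as the abstract describes.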
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 3326 - 3337 [article]

Scene classification of remotely sensed images via densely connected convolutional neural networks and an ensemble classifier / Qimin Cheng in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
[article]
Title : Scene classification of remotely sensed images via densely connected convolutional neural networks and an ensemble classifier
Document type : Article/Communication
Authors : Qimin Cheng, Author ; Yuan Xu, Author ; Peng Fu, Author ; et al., Author
Publication year : 2021
Pages : pp 295 - 308
General note : bibliography
Languages : English (eng)
Descriptors : [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] aerial image
[IGN terms] orthoimage
[IGN terms] scene
Abstract : (author) Deep learning techniques, especially convolutional neural networks, have greatly boosted performance in analyzing and understanding remotely sensed images. However, existing scene-classification methods generally neglect local and spatial information that is vital to scene classification of remotely sensed images. In this study, a scene-classification method for remotely sensed images based on pretrained densely connected convolutional neural networks combined with an ensemble classifier is proposed to tackle this under-utilization of local and spatial information. Specifically, we first exploit a pretrained DenseNet and fine-tune it to release its potential for remote-sensing image feature representation. Second, a spatial-pyramid structure and an improved Fisher-vector coding strategy are leveraged to further strengthen the representation capability and robustness of the feature maps captured from the convolutional layers. Then we integrate an ensemble classifier into our network architecture, considering the lower attention typically paid to feature descriptors. Extensive experiments are conducted, and the proposed method achieves superior performance on the UC Merced, AID, and NWPU-RESISC45 data sets.
Record number : A2021-334
Authors' affiliation : non IGN
Theme : IMAGERY/COMPUTER SCIENCE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.14358/PERS.87.3.295
Online publication date : 01/04/2021
Online : https://doi.org/10.14358/PERS.87.3.295
Electronic resource format : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97533
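The spatial-pyramid step, which preserves the local and spatial information the authors emphasize, can be sketched as average pooling over a pyramid of grids on a convolutional feature map. The pyramid levels and pooling choice here are assumptions, not the paper's exact configuration.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2)):
    """Average-pool a feature map (C, H, W) over a pyramid of n-by-n grids
    and concatenate the cell descriptors, so that the spatial layout of
    local features survives in the final vector."""
    c, h, w = fmap.shape
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                cell = fmap[:, i * h // n:(i + 1) * h // n,
                               j * w // n:(j + 1) * w // n]
                out.append(cell.mean(axis=(1, 2)))
    return np.concatenate(out)

# Toy 2-channel, 4x4 feature map standing in for a DenseNet conv output.
fmap = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
desc = spatial_pyramid_pool(fmap)   # 2 channels x (1 + 4) cells = 10 dims
```

In the article this kind of pooled descriptor is further encoded with an improved Fisher-vector strategy before being fed to the ensemble classifier.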
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 4 (April 2021) . - pp 295 - 308 [article]

Copies (1)
Barcode : 105-2021041
Call number : SL
Type : Journal
Location : Documentation centre
Section : Reading-room journals
Availability : Available

A shape transformation-based dataset augmentation framework for pedestrian detection / Zhe Chen in International journal of computer vision, vol 129 n° 4 (April 2021)
Unsupervised pansharpening based on self-attention mechanism / Ying Qu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
SRP, une base de calage 3D de très haute précision sur le continent africain / Laure Chandelier in Revue Française de Photogrammétrie et de Télédétection, n° 223 (March - December 2021)
Detection of subpixel targets on hyperspectral remote sensing imagery based on background endmember extraction / Xiaorui Song in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Dynamic human body reconstruction and motion tracking with low-cost depth cameras / Kangkan Wang in The Visual Computer, vol 37 n° 3 (March 2021)
Feature detection and description for image matching: from hand-crafted design to deep learning / Lin Chen in Geo-spatial Information Science, vol 24 n° 1 (March 2021)
Impact of atmospheric correction on spatial heterogeneity relations between land surface temperature and biophysical compositions / Xin-Ming Zhu in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Learning from GPS trajectories of floating car for CNN-based urban road extraction with high-resolution satellite imagery / Ju Zhang in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Lightweight convolutional neural network-based pedestrian detection and re-identification in multiple scenarios / Xiao Ke in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
Pan-sharpening via multiscale dynamic convolutional neural network / Jianwen Hu in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Recognition of varying size scene images using semantic analysis of deep activation maps / Shikha Gupta in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Using geometric constraints to improve performance of image classifiers for automatic segmentation of traffic signs / Roholah Yazdan in Geomatica, vol 75 n° 1 (March 2021)
Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021)
Correntropy-based spatial-spectral robust sparsity-regularized hyperspectral unmixing / Xiaorun Li in IEEE Transactions on geoscience and remote sensing, vol 59 n° 2 (February 2021)
Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021)
Fully convolutional neural network for impervious surface segmentation in mixed urban environment / Joseph McGlinchy in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 2 (February 2021)
GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening / Hao Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 172 (February 2021)
Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)