Descriptor
Documents available in this category (2285)
Integration of remote sensing and GIS to extract plantation rows from a drone-based image point cloud digital surface model / Nadeem Fareed in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020)
Title: Integration of remote sensing and GIS to extract plantation rows from a drone-based image point cloud digital surface model
Document type: Article/Communication
Authors: Nadeem Fareed, Author; Khushbakht Rehman, Author
Year of publication: 2020
Article pages: 26 p.
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] agriculture de précision
[Termes IGN] données GNSS
[Termes IGN] données lidar
[Termes IGN] extraction automatique
[Termes IGN] extraction de la végétation
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à très haute résolution
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] modèle dynamique
[Termes IGN] modèle numérique de surface
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
[Termes IGN] système d'information géographique
[Termes IGN] télédétection
Abstract: (author) Automated feature extraction from drone-based image point clouds (DIPC) is of paramount importance in precision agriculture (PA). PA relies on mechanized row seedlings to attain maximum yield and best management practices; automated plantation-row extraction is therefore essential in crop harvesting, pest management, and plant growth-rate prediction. Most of the existing research consists of red, green, and blue (RGB) image-based solutions that extract plantation rows from test sites with minimal background noise, and DIPC-based DSM row-extraction solutions have not been tested frequently. In this research work, an automated method is designed to extract plantation rows from a DIPC-based DSM. The chosen plantation compartments have three different levels of background noise in the UAV images, so the methodology was tested under different background noises. The extraction results were quantified in terms of completeness, correctness, quality, and F1-score values. The case study revealed the potential of the DIPC-based solution to extract plantation rows, with an F1-score of 0.94 for a plantation compartment with minimal background noise, 0.91 for a highly noised compartment, and 0.85 for a compartment where the DIPC was compromised. The evaluation suggests that DSM-based solutions are more robust than RGB image-based solutions for extracting plantation rows. Additionally, DSM-based solutions can be further extended to assess plantation-row surface deformation caused by humans and machines, redefining the state of the art.
Record number: A2020-260
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9030151
Online publication date: 06/03/2020
Online: https://doi.org/10.3390/ijgi9030151
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95020
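The completeness, correctness, quality, and F1-score values cited in this record follow the standard definitions for feature-extraction evaluation; a minimal sketch from true/false positive and false negative counts (the function name and the example counts are illustrative, not taken from the article):

```python
def extraction_metrics(tp, fp, fn):
    """Standard feature-extraction evaluation metrics from match counts:
    completeness (recall), correctness (precision), quality, and F1-score."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    f1 = 2 * completeness * correctness / (completeness + correctness)
    return completeness, correctness, quality, f1

# Illustrative counts only: 94 correctly extracted rows, 6 spurious, 6 missed
comp, corr, qual, f1 = extraction_metrics(94, 6, 6)
```

With these counts both completeness and correctness are 0.94, so the F1-score is 0.94 as well, while quality (which penalizes both error types jointly) is lower.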
Reducing shadow effects on the co-registration of aerial image pairs / Matthew Plummer in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020)
Title: Reducing shadow effects on the co-registration of aerial image pairs
Document type: Article/Communication
Authors: Matthew Plummer, Author; Douglas A. Stow, Author; Emmanuel Storey, Author; et al.
Year of publication: 2020
Article pages: pp 177 - 186
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de données
[Termes IGN] correction des ombres
[Termes IGN] détection automatique
[Termes IGN] détection de changement
[Termes IGN] effet d'ombre
[Termes IGN] enregistrement de données
[Termes IGN] image à haute résolution
[Termes IGN] image aérienne
[Termes IGN] image multitemporelle
[Termes IGN] intensité lumineuse
[Termes IGN] masque
[Termes IGN] Ransac (algorithme)
[Termes IGN] SIFT (algorithme)
Abstract: (author) Image registration is an important preprocessing step prior to detecting changes with multi-temporal image data, and it is increasingly accomplished using automated methods. In high-spatial-resolution imagery, shadows represent a major source of illumination variation, which can reduce the performance of automated registration routines. This study evaluates the statistical relationship between shadow presence and image registration accuracy, and whether masking and normalizing shadows leads to improved automatic registration results. Eighty-eight bitemporal aerial image pairs were co-registered using Scale-Invariant Feature Transform (SIFT) and Random Sample Consensus (RANSAC) Alignment (SARA) software. Co-registration accuracy was assessed at different levels of shadow coverage and shadow movement within the images. The primary outcomes of this study are (1) the amount of shadow in a multi-temporal image pair is correlated with the accuracy and success of automatic co-registration; (2) masking out shadows prior to match-point selection does not improve the success of image-to-image co-registration; and (3) normalizing or brightening shadows can help match-point routines find more match points and therefore improve the performance of automatic co-registration. Normalizing shadows via a standard linear correction provided the most reliable co-registration results in image pairs containing substantial relative shadow movement, but had minimal effect for pairs with stationary shadows.
Record number: A2020-147
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.86.4.177
Online publication date: 01/03/2020
Online: https://doi.org/10.14358/PERS.86.4.177
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94776
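A "standard linear correction" for shadow normalization, as mentioned in the abstract, can be illustrated by a gain/offset (moment-matching) adjustment that brightens shadowed pixels to match the statistics of lit pixels. This is a generic sketch of the idea, not the SARA implementation; `normalize_shadows` is a hypothetical name:

```python
import numpy as np

def normalize_shadows(image, shadow_mask):
    """Apply a linear (gain/offset) correction to shadowed pixels so that
    their mean and standard deviation match those of the lit pixels.
    `image` is a 2-D float array; `shadow_mask` is a boolean array of the
    same shape, True where a pixel is in shadow."""
    lit = image[~shadow_mask]
    shadow = image[shadow_mask]
    gain = lit.std() / shadow.std()
    offset = lit.mean() - gain * shadow.mean()
    out = image.copy()
    out[shadow_mask] = gain * shadow + offset
    return out
```

After this correction the shadowed region has exactly the lit region's first two moments, which is what helps match-point routines find features across the shadow boundary.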
Sea-land segmentation using deep learning techniques for Landsat-8 OLI imagery / Ting Yang in Marine geodesy, Vol 43 n° 2 (March 2020)
Title: Sea-land segmentation using deep learning techniques for Landsat-8 OLI imagery
Document type: Article/Communication
Authors: Ting Yang, Author; Zhonghua Hong, Author; Yun Zhang, Author
Year of publication: 2020
Article pages: pp 105 - 133
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction automatique
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image Landsat-OLI
[Termes IGN] littoral
[Termes IGN] segmentation d'image
[Termes IGN] segmentation sémantique
[Termes IGN] trait de côte
Abstract: (author) Automated coastline extraction from optical satellites is fundamental to coastal mapping, and sea-land segmentation is the core technology of coastline extraction. Deep convolutional neural networks (DCNNs) have performed well in semantic segmentation in recent years. However, sea-land segmentation using deep learning techniques remains a challenging task, due to the lack of a benchmark dataset and the difficulty of deciding which semantic segmentation model to use. We present a comparative framework for sea-land segmentation of Landsat-8 OLI imagery via deep-learning semantic segmentation. Three issues are investigated: (1) constructing a sea-land benchmark dataset using Landsat-8 Operational Land Imager (OLI) imagery covering 18,000 km2 of coastline around China; (2) evaluating the feasibility and performance of sea-land segmentation by comparing the accuracy, time complexity, spatial complexity, and stability of state-of-the-art DCNN methods; (3) choosing the most suitable semantic segmentation model for sea-land segmentation according to Akaike information criterion (AIC) and Bayesian information criterion (BIC) model selection. Results show that the average test accuracy is over 99%, and the mean Intersection over Union (mean IoU) is above 92%. These findings demonstrate that the Fully Convolutional DenseNet (FC-DenseNet) performs better than other state-of-the-art methods in sea-land segmentation, based on both AIC and BIC. Considering training-time efficiency, DeeplabV3+ performs better for sea-land segmentation. The sea-land segmentation benchmark dataset is available at: https://pan.baidu.com/s/1BlnHiltOLbLKe4TG8lZ5xg.
Record number: A2020-220
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1080/01490419.2020.1713266
Online publication date: 20/01/2020
Online: https://doi.org/10.1080/01490419.2020.1713266
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94917
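The mean IoU reported in the abstract, and the AIC/BIC scores used for model selection, follow their usual definitions; a minimal sketch (function names are illustrative, and the log-likelihood values passed to `aic`/`bic` are whatever the model comparison supplies):

```python
import math
import numpy as np

def mean_iou(y_true, y_pred, n_classes=2):
    """Mean Intersection over Union between two integer label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

def aic(k, log_likelihood):
    """Akaike information criterion for a model with k parameters."""
    return 2 * k - 2 * log_likelihood

def bic(k, n, log_likelihood):
    """Bayesian information criterion for k parameters, n observations."""
    return k * math.log(n) - 2 * log_likelihood
```

Lower AIC/BIC indicates a better accuracy-versus-complexity trade-off, which is the basis the abstract gives for preferring FC-DenseNet.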
Simultaneous intensity bias estimation and stripe noise removal in infrared images using the global and local sparsity constraints / Li Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 3 (March 2020)
Title: Simultaneous intensity bias estimation and stripe noise removal in infrared images using the global and local sparsity constraints
Document type: Article/Communication
Authors: Li Liu, Author; Luping Xu, Author; Houzhang Fang, Author
Year of publication: 2020
Article pages: pp 1777 - 1789
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse bivariée
[Termes IGN] analyse comparative
[Termes IGN] filtrage du bruit
[Termes IGN] image infrarouge
[Termes IGN] intensité lumineuse
[Termes IGN] interpolation polynomiale
[Termes IGN] itération
[Termes IGN] optimisation (mathématiques)
[Termes IGN] programmation par contraintes
[Termes IGN] texture d'image
Abstract: (author) Infrared (IR) images are often contaminated by obvious intensity bias and stripes, which severely affect visual quality and subsequent applications. It is challenging to simultaneously eliminate the mixed nonuniformity noise without blurring the fine image details in low-textured IR images. In this article, we present a new model for simultaneous intensity bias correction and destriping by introducing two sparsity constraints. One is that the model fit to the intensity bias should be as accurate as possible: a bivariate polynomial model is built to characterize the global smoothness of the intensity bias. The other is that a unidirectional variational sparse model can concisely represent the directional characteristic of stripe noise. A computationally efficient numerical algorithm based on split Bregman iteration is used to solve the complex optimization problem. The proposed method is fundamentally different from existing denoising techniques and simultaneously estimates the sharp image, intensity bias, and stripe components. Significant improvement in image quality is achieved in both simulated and real studies. Both qualitative and quantitative comparisons with state-of-the-art correction methods demonstrate its superiority.
Record number: A2020-089
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2948601
Online publication date: 18/11/2019
Online: https://doi.org/10.1109/TGRS.2019.2948601
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94663
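The bivariate polynomial model of a globally smooth intensity bias can be sketched as an ordinary least-squares surface fit. This simplified example shows only the bias-modelling idea (names are illustrative; the article's full formulation couples this with the stripe term and a split-Bregman solver):

```python
import numpy as np

def fit_intensity_bias(image, degree=2):
    """Least-squares fit of a bivariate polynomial surface to an image,
    modelling a globally smooth intensity bias. Returns the fitted
    surface with the same shape as `image`."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel() / w  # normalize coordinates for better conditioning
    y = y.ravel() / h
    # Design matrix with all monomials x^i * y^j, i + j <= degree
    terms = [x**i * y**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)
```

Subtracting the fitted surface from the observed image leaves the stripe and detail components to be handled by the directional (unidirectional variational) term.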
Spectral–spatial–temporal MAP-based sub-pixel mapping for land-cover change detection / Da He in IEEE Transactions on geoscience and remote sensing, vol 58 n° 3 (March 2020)
Title: Spectral–spatial–temporal MAP-based sub-pixel mapping for land-cover change detection
Document type: Article/Communication
Authors: Da He, Author; Yanfei Zhong, Author; Liangpei Zhang, Author
Year of publication: 2020
Article pages: pp 1696 - 1717
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] changement d'occupation du sol
[Termes IGN] classification du maximum a posteriori
[Termes IGN] détection de changement
[Termes IGN] distribution spatiale
[Termes IGN] données spatiotemporelles
[Termes IGN] image Aqua-MODIS
[Termes IGN] image Landsat-8
[Termes IGN] image Landsat-TM
[Termes IGN] image multibande
[Termes IGN] image Quickbird
[Termes IGN] image Terra-MODIS
[Termes IGN] modèle dynamique
[Termes IGN] optimisation spatiale
[Termes IGN] précision infrapixellaire
[Termes IGN] série temporelle
[Termes IGN] urbanisation
[Termes IGN] Wuhan (Chine)
[Termes IGN] zone urbaine
Abstract: (author) The maximum a posteriori (MAP) estimation model-based sub-pixel mapping (SPM) method is an alternative way to solve the ill-posed SPM problem. The MAP estimation model has been proven to be an effective SPM approach and has been extensively developed over the past few years, as a result of the effective regularization capability of its spatial regularization model. However, the various spatial regularization models do not always truly reflect the detailed spatial distribution of a real situation, and the over-smoothing effect of the spatial regularization model tends to efface detailed structural information. In this article, under the scenario of time-series observation by remote sensing imagery, a joint spectral–spatial–temporal MAP-based (SST_MAP) model for SPM is proposed. In SST_MAP, a newly developed temporal regularization model is added to the MAP model, on the prerequisite that a temporally close fine image covering the same study region is available. This fine image can provide the specific spatial structures most closely conforming to the ground truth for a more precise constraint, thereby reducing the over-smoothing effect. Furthermore, the three dimensions are mutually balanced and mutually constrained, reaching an equilibrium point that achieves both restoration of smooth areas for the homogeneous land-cover classes and detailed structure for the heterogeneous land-cover classes. Four experiments were designed to validate the proposed SST_MAP: three synthetic-image experiments and one real-image experiment. The restoration results confirm the superiority of the proposed SST_MAP model. Notably, under time-series observation, SST_MAP provides an alternative way of land-cover change detection (LCCD), achieving both detailed spatial-scale and high-frequency temporal LCCD observation for a study case of urbanization analysis in the city of Wuhan, China.
Record number: A2020-088
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2947708
Online publication date: 18/12/2019
Online: https://doi.org/10.1109/TGRS.2019.2947708
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94662
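The balance of the three terms in a spectral–spatial–temporal objective can be illustrated with a toy energy function over a binary fine-resolution map. The weights and the exact regularizer forms below are illustrative assumptions, not the article's formulation:

```python
import numpy as np

def sst_energy(fine, coarse_frac, temporal_ref, scale, lam_s=1.0, lam_t=1.0):
    """Toy energy of a candidate binary fine-resolution map `fine` under a
    spectral + spatial + temporal objective. `fine` has dimensions that are
    `scale` times those of `coarse_frac`; `temporal_ref` is a co-registered
    fine map from a temporally close date."""
    h, w = fine.shape  # must be divisible by `scale`
    # Spectral term: block-averaging the fine map should reproduce the
    # observed coarse class fractions
    blocks = fine.reshape(h // scale, scale, w // scale, scale).mean((1, 3))
    spectral = ((blocks - coarse_frac) ** 2).sum()
    # Spatial term: penalize label disagreement between 4-neighbours
    spatial = (np.abs(np.diff(fine, axis=0)).sum()
               + np.abs(np.diff(fine, axis=1)).sum())
    # Temporal term: stay close to the temporally close fine reference map
    temporal = (fine != temporal_ref).sum()
    return spectral + lam_s * spatial + lam_t * temporal
```

A MAP-style solver would search for the fine map minimizing this energy; raising `lam_t` pulls the solution toward the reference structures, which is the mechanism the abstract credits with reducing over-smoothing.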
The application of bidirectional reflectance distribution function data to recognize the spatial heterogeneity of mixed pixels in vegetation remote sensing: a simulation study / Yanan Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020)
Unsupervised extraction of urban features from airborne lidar data by using self-organizing maps / Alper Sen in Survey review, vol 52 n° 371 (March 2020)
Automated extraction of lane markings from mobile LiDAR point clouds based on fuzzy inference / Heidar Rastiveis in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
Computer vision-based framework for extracting tectonic lineaments from optical remote sensing data / Ehsan Farahbakhsh in International Journal of Remote Sensing IJRS, vol 41 n° 5 (01 - 08 février 2020)
Multi-spectral image change detection based on single-band iterative weighting and fuzzy C-means clustering / Liyuan Ma in European journal of remote sensing, vol 53 n° 1 (2020)
A novel fire index-based burned area change detection approach using Landsat-8 OLI data / Sicong Liu in European journal of remote sensing, vol 53 n° 1 (2020)
Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering / Shangpeng Sun in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
Combining GF-2 and RapidEye satellite data for mapping mangrove species using ensemble machine-learning methods / Liheng Peng in International Journal of Remote Sensing IJRS, vol 41 n° 3 (15 - 22 janvier 2020)
Extracting soil salinization information with a fractional-order filtering algorithm and grid-search support vector machine (GS-SVM) model / Xiaoping Wang in International Journal of Remote Sensing IJRS, vol 41 n° 3 (15 - 22 janvier 2020)
Spatial visualization of quantitative landscape changes in an industrial region between 1827 and 1883. Case study Katowice, southern Poland / Paweł Cybulski in Journal of maps, vol 16 n° 1 ([02/01/2020])
10th Colour and Visual Computing Symposium 2020 (CVCS 2020), Gjøvik, Norway, and Virtual, September 16-17, 2020 / Jean-Baptiste Thomas (2020)
3D iterative spatiotemporal filtering for classification of multitemporal satellite data sets / Hessah Albanwan in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 1 (January 2020)
Advances in Intelligent Data Analysis XVIII : 18th International Symposium on Intelligent Data Analysis, IDA 2020, Konstanz, Germany, April 27–29 2020 / Michael R. Berthold (2020)
Analyse automatique du couvert végétal pour la gestion du risque végétation en milieu ferroviaire à partir d'imagerie aérienne / Hélène Rouillon (2020)
Analyse, structuration et sémantisation des images aériennes [diaporama] / Valérie Gouet-Brunet (2020)
Application of digital image processing in automated analysis of insect leaf mines / Yee Man Theodora Cho (2020)
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Autocovariance-based perceptual textural features corresponding to human visual perception / N. Abbadeni (2020)