Descriptor
Documents available in this category (959)
Automated tree-crown and height detection in a young forest plantation using mask region-based convolutional neural network (Mask R-CNN) / Zhenbang Hao in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Automated tree-crown and height detection in a young forest plantation using mask region-based convolutional neural network (Mask R-CNN)
Document type: Article/Communication
Authors: Zhenbang Hao, Author; Lili Lin, Author; Christopher J. Post, Author; et al., Author
Year of publication: 2021
Pages: pp 112-123
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] Abies (genus)
[IGN terms] Abies numidica
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] automatic detection
[IGN terms] tree height
[IGN terms] tree crown
[IGN terms] UAV-acquired imagery
[IGN terms] forest inventory (techniques and methods)
[IGN terms] forest plantation
Abstract: (author) Tree-crown and height are primary tree measurements in forest inventory. Convolutional neural networks (CNNs) are a class of neural networks which can be used in forest inventory; however, no prior studies have developed a CNN model to detect tree crown and height simultaneously. This study is the first of its kind to explore training a mask region-based convolutional neural network (Mask R-CNN) for automatically and concurrently detecting discontinuous tree crown and height of Chinese fir (Cunninghamia lanceolata (Lamb) Hook) in a plantation. A DJI Phantom4-Multispectral Unmanned Aerial Vehicle (UAV) was used to obtain high-resolution images of the study site, Shunchang County, China. Tree crown and height of Chinese fir were manually delineated and derived from this UAV imagery. A portion of the ground-truthed tree height values were used as a test set, and the remaining measurements were used as the model training data. Six different band combinations and derivations of the UAV imagery were used to detect tree crown and height, respectively (Multi band-DSM, RGB-DSM, NDVI-DSM, Multi band-CHM, RGB-CHM, and NDVI-CHM). The Mask R-CNN model with the NDVI-CHM combination achieved superior performance: the accuracy of individual tree-crown detection for Chinese fir was considerable (F1 score = 84.68%), the intersection over union (IoU) of tree-crown delineation was 91.27%, and tree height estimates were highly correlated with the height from UAV imagery (R2 = 0.97, RMSE = 0.11 m, rRMSE = 4.35%) and field measurement (R2 = 0.87, RMSE = 0.24 m, rRMSE = 9.67%). Results demonstrate that an input image with a CHM achieves higher accuracy of tree-crown delineation and tree height assessment than an image with a DSM. The accuracy and efficiency of Mask R-CNN show great potential for assisting the application of remote sensing in forests.
Record number: A2021-563
Authors' affiliation: non-IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.isprsjprs.2021.06.003
Online publication date: 18/06/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.06.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98128
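The F1 score and IoU quoted in the abstract are standard detection metrics; as a reminder of how they are computed (a minimal sketch, not code from the paper), from true/false positive counts and boolean crown masks:

```python
import numpy as np

def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mask_iou(pred, truth):
    """Intersection over Union of two boolean crown masks of equal shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union
```

For example, `f1_score(8, 2, 2)` returns 0.8; a predicted mask identical to the ground truth gives an IoU of 1.0.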
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021), pp 112-123
[article]
Copies (3)
Barcode | Call no. | Support | Location | Section | Availability
081-2021081 | SL | Journal | Centre de documentation | Journals room | Available
081-2021083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network / Fengpeng Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 8 (August 2021)
[article]
Title: Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network
Document type: Article/Communication
Authors: Fengpeng Li, Author; Jiabao Li, Author; Wei Han, Author; et al., Author
Year of publication: 2021
Pages: pp 577-591
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] neural network classification
[IGN terms] large scale
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] medium scale
[IGN terms] small scale
[IGN terms] linear regression
[IGN terms] convolutional neural network
Abstract: (author) Inspired by the outstanding achievements of deep learning, supervised deep-learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. An unsupervised deep-learning representation method for high-resolution remote sensing image scene classification is proposed in this work to address this issue. The proposed contrastive-learning method narrows the distance between positive views (color channels belonging to the same image) and widens the gaps between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to set the space of each category, and then makes predictions by linear regression in the testing procedure. Compared with existing unsupervised deep-learning representation methods for high-resolution remote sensing image scene classification, the contrastive-learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
Record number: A2021-670
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.8.577
Online publication date: 01/08/2021
Online: https://doi.org/10.14358/PERS.87.8.577
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98806
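The contrastive objective described in this abstract (pull channel views of the same image together, push views of different images apart) is commonly implemented as an NT-Xent-style loss. The following NumPy sketch illustrates that family of losses under that assumption; the temperature `tau` and the exact loss form are illustrative, not taken from the paper:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss: z1[i] and z2[i] are embeddings of two
    views of the same image (a positive pair); every other pair is negative."""
    z = np.concatenate([z1, z2])                        # (2N, d) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize -> cosine sim
    sim = z @ z.T / tau                                 # pairwise similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive index
    # -log softmax probability assigned to the positive pair, averaged over rows
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimizing this loss pulls matched views toward each other in embedding space: nearly identical view pairs yield a lower loss than mismatched ones.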
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 8 (August 2021), pp 577-591
[article]
Copies (1)
Barcode | Call no. | Support | Location | Section | Availability
105-2021081 | SL | Journal | Centre de documentation | Journals room | Available

Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area / Xungen Li in ISPRS International journal of geo-information, vol 10 n° 8 (August 2021)
[article]
Title: Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area
Document type: Article/Communication
Authors: Xungen Li, Author; Feifei Men, Author; Shuaishuai Lv, Author; et al., Author
Year of publication: 2021
Pages: n° 549
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] convolutional neural network classification
[IGN terms] target detection
[IGN terms] very-high-resolution image
[IGN terms] aerial image
[IGN terms] vehicle
Abstract: (author) Vehicle detection in aerial images is a challenging task. The complexity of background information and the redundancy of the detection area are the main obstacles limiting anchor-based vehicle detection in very-high-resolution (VHR) remote sensing images. In this paper, an anchor-free target detection method is proposed to solve these problems. First, a multi-attention feature pyramid network (MA-FPN) is designed to address the influence of noise and background information on vehicle detection by fusing attention information into the feature pyramid network (FPN) structure. Second, a more precise foveal area (MPFA) is proposed to provide better ground truth for the anchor-free method by determining a more accurate positive-sample selection area. The proposed anchor-free model with MA-FPN and MPFA predicts vehicles accurately and quickly in VHR remote sensing images through direct regression and prediction on the pixels of the feature map. A detailed evaluation on the remote sensing image (RSI) and vehicle detection in aerial imagery (VEDAI) data sets shows that the detection method performs well, the network is simple, and detection is fast.
Record number: A2021-589
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi10080549
Online publication date: 14/08/2021
Online: https://doi.org/10.3390/ijgi10080549
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98209
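The abstract does not give the MPFA formula; a common way to define a "foveal" positive-sample area in anchor-free detectors (FoveaBox-style) is to shrink each ground-truth box toward its centre and treat only feature-map pixels inside that region as positives. The sketch below illustrates that general idea only; the `shrink` factor and helper names are hypothetical, not the paper's definitions:

```python
import numpy as np

def foveal_area(box, shrink=0.4):
    """Shrink a ground-truth box (x1, y1, x2, y2) toward its centre to obtain
    a central 'foveal' region for positive-sample selection."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * shrink, (y2 - y1) * shrink
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def positive_mask(box, grid_h, grid_w, shrink=0.4):
    """Boolean (grid_h, grid_w) mask: True where a pixel centre falls inside
    the foveal region and counts as a positive training sample."""
    fx1, fy1, fx2, fy2 = foveal_area(box, shrink)
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    xs, ys = xs + 0.5, ys + 0.5          # pixel centres
    return (xs >= fx1) & (xs <= fx2) & (ys >= fy1) & (ys <= fy2)
```

Shrinking the box excludes ambiguous border pixels from the positive set, which is the motivation the abstract gives for a more precise foveal area.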
in ISPRS International journal of geo-information > vol 10 n° 8 (August 2021), n° 549
[article]
ComNet: combinational neural network for object detection in UAV-borne thermal images / Minglei Li in IEEE Transactions on geoscience and remote sensing, vol 59 n° 8 (August 2021)
[article]
Title: ComNet: combinational neural network for object detection in UAV-borne thermal images
Document type: Article/Communication
Authors: Minglei Li, Author; Xingke Zhao, Author; Jiasong Li, Author; et al., Author
Year of publication: 2021
Pages: pp 6662-6673
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] UAV-acquired imagery
[IGN terms] thermal image
[IGN terms] pedestrian
[IGN terms] saliency
[IGN terms] vehicle
Abstract: (author) We propose a deep-learning-based method for object detection in UAV-borne thermal images, which can observe scenes in both day and night. Compared with visible images, thermal images have lower requirements for illumination conditions, but they typically have blurred edges and low contrast. Using a boundary-aware salient object detection network, we extract the saliency maps of the thermal images to improve their distinguishability. Thermal images are augmented with the corresponding saliency maps through channel replacement and pixel-level weighted fusion. Considering the limited computing power of UAV platforms, a lightweight combinational neural network, ComNet, is used as the core object detection method. The YOLOv3 model trained on the original images is used as a benchmark and compared with the proposed method. In the experiments, we analyze the detection performance of ComNet models with different image fusion schemes. The experimental results show that the average precisions (APs) for pedestrian and vehicle detection are improved by 2%~5% compared with the benchmark without saliency-map fusion and MobileNetv2. The detection speed is increased by over 50%, while the model size is reduced by 58%. The results demonstrate that the proposed method provides a compromise model with application potential in UAV-borne detection tasks.
Record number: A2021-632
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3029945
Online publication date: 21/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3029945
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98288
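The two fusion schemes this abstract names, channel replacement and pixel-level weighted fusion, can be sketched as follows. The weight `alpha` and the choice of which channel to replace are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def weighted_fusion(thermal, saliency, alpha=0.7):
    """Pixel-level weighted fusion of a thermal image with its saliency map
    (both float arrays of the same shape, e.g. values in [0, 1])."""
    return alpha * thermal + (1 - alpha) * saliency

def channel_replacement(thermal_3ch, saliency):
    """Replace one channel of a 3-channel thermal image with the saliency map,
    so the detector sees saliency as an extra input cue."""
    out = thermal_3ch.copy()
    out[..., 2] = saliency   # which channel to replace is an assumption
    return out
```

Both operations keep the input shape unchanged, so a detector such as YOLOv3 can consume the fused image without architectural changes.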
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 8 (August 2021), pp 6662-6673
[article]
Detecting high-temperature anomalies from Sentinel-2 MSI images / Yongxue Liu in ISPRS Journal of photogrammetry and remote sensing, vol 177 (July 2021)
[article]
Title: Detecting high-temperature anomalies from Sentinel-2 MSI images
Document type: Article/Communication
Authors: Yongxue Liu, Author; Zhi Weifeng, Author; Bihua Xu, Author; et al., Author
Year of publication: 2021
Pages: pp 174-193
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] comparative analysis
[IGN terms] thermal anomaly
[IGN terms] volcanic eruption
[IGN terms] aerial image
[IGN terms] Landsat-OLI image
[IGN terms] near-infrared image
[IGN terms] Sentinel-MSI image
[IGN terms] thermal image
[IGN terms] fire
[IGN terms] spectral reflectance
[IGN terms] technological risk
[IGN terms] time series
[IGN terms] ground temperature
Abstract: (author) High-temperature anomalies (HTAs) of the earth's surface, such as fires, volcanic activity, and industrial heat sources, have a profound impact on the Earth system. The Sentinel-2 Multispectral Instrument (MSI) provides spatially specific information for precisely measuring the location and extent of HTAs at a fine scale. However, detecting HTAs from MSI images remains challenging because the emitted radiance of an HTA in the short-wave infrared (SWIR) bands can easily be mixed with the reflected solar radiance background in the daytime, and an increasing number of atypical cases in MSI images need to be treated at the enhanced spatial resolution. A generic HTA detection approach that handles both anthropogenic and natural HTAs would broaden the scope of MSI applications. In this study, (i) we highlight two spectral characteristics of HTAs in the far-SWIR, near-SWIR, and NIR bands (i.e., (ρfar-SWIR - ρnear-SWIR)/ρNIR ≥ 0.45 and (ρfar-SWIR - ρnear-SWIR) ≥ ρnear-SWIR - ρNIR) that can effectively separate HTAs from background geo-features, based on reflectance spectra in airborne imaging spectrometer data. (ii) We propose a tri-spectral thermal anomaly index (TAI) that jointly uses the two high-temperature-sensitive SWIR bands and the high-temperature-insensitive NIR band to enhance HTAs, based on the above characteristics and a comprehensive sampling of different types of HTAs from 1,974 MSI images. (iii) We develop a TAI-based approach for detecting HTAs in MSI images in general. The proposed approach was applied to different types of HTAs, including biomass burning, active volcanoes, and industrial HTAs, over a wide range of land-cover scenarios. Validations and comparisons demonstrate that the proposed approach is reliable and outperforms existing state-of-the-art HTA detection approaches. Evaluations on two types of small industrial HTAs, operating kilns and enclosed landfill gas flares, show that the HTA detection probabilities of the TAI-based approach from time-series MSI images are ~84.91% and ~88.23%, respectively. Further investigations show that the TAI-based approach also transfers well to multispectral images acquired by Landsat-family satellites.
Record number: A2021-372
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.05.008
Online publication date: 23/05/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.05.008
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97808
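The two spectral tests quoted in the abstract can be applied directly to co-registered reflectance arrays. This is a minimal sketch of those two tests only, not the full TAI definition from the paper; mapping the bands (e.g. Sentinel-2 B12/B11/B8A as far-SWIR/near-SWIR/NIR) is the reader's responsibility:

```python
import numpy as np

def hta_candidates(far_swir, near_swir, nir, thresh=0.45):
    """Boolean mask of high-temperature-anomaly candidates from the two
    spectral characteristics quoted in the abstract. Inputs are reflectance
    arrays of identical shape."""
    diff = far_swir - near_swir
    test1 = diff / nir >= thresh          # (ρfar-SWIR - ρnear-SWIR)/ρNIR ≥ 0.45
    test2 = diff >= near_swir - nir       # (ρfar-SWIR - ρnear-SWIR) ≥ ρnear-SWIR - ρNIR
    return test1 & test2
```

A hot pixel whose reflectance rises steeply from NIR through far-SWIR passes both tests, while typical land-cover backgrounds fail at least one of them.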
in ISPRS Journal of photogrammetry and remote sensing > vol 177 (July 2021), pp 174-193
[article]
Copies (3)
Barcode | Call no. | Support | Location | Section | Availability
081-2021071 | SL | Journal | Centre de documentation | Journals room | Available
081-2021073 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021072 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases / Chun Yang in ISPRS Journal of photogrammetry and remote sensing, vol 177 (July 2021)
Permalink
Multi-scale coal fire detection based on an improved active contour model from Landsat-8 satellite and UAV images / Yanyan Gao in ISPRS International journal of geo-information, vol 10 n° 7 (July 2021)
Permalink
Road-network-based fast geolocalization / Yongfei Li in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
Permalink
Unmanned aerial vehicles (UAV)-based canopy height modeling under leaf-on and leaf-off conditions for determining tree height and crown diameter (Case study: Hyrcanian mixed forest) / Vahid Nasiri in Canadian Journal of Forest Research, Vol 51 n° 7 (July 2021)
Permalink
Updating of forest stand data by using recent digital photogrammetry in combination with older airborne laser scanning data / Niels Lindgren in Scandinavian journal of forest research, vol 36 n° 5 ([01/07/2021])
Permalink
An incremental isomap method for hyperspectral dimensionality reduction and classification / Yi Ma in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)
Permalink
Domain adaptive transfer attack-based segmentation networks for building extraction from aerial images / Younghwan Na in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
Permalink
Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne / Valentin Desbiolles in XYZ, n° 167 (juin 2021)
Permalink
Digital terrain models generated with low-cost UAV photogrammetry: Methodology and accuracy / Sergio Jiménez-Jiménez in ISPRS International journal of geo-information, vol 10 n° 5 (May 2021)
Permalink
Integration of laser scanner and photogrammetry for heritage BIM enhancement / Yahya Alshawabkeh in ISPRS International journal of geo-information, vol 10 n° 5 (May 2021)
Permalink