ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980 -) . vol 160 | Published on: 01/02/2020
[issue]
This is an issue of ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980 -)
Copies (3)
Barcode | Call number | Format | Location | Section | Availability
---|---|---|---|---|---
081-2020021 | RAB | Journal | Documentation centre | In reserve L003 | Available
081-2020023 | DEP-RECP | Journal | LASTIG | Deposited in unit | Not for loan
081-2020022 | DEP-RECF | Journal | Nancy | Deposited in unit | Not for loan
Contents
A two-step approach for the correction of rolling shutter distortion in UAV photogrammetry / Yilin Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: A two-step approach for the correction of rolling shutter distortion in UAV photogrammetry
Document type: Article/Communication
Authors: Yilin Zhou; Mehdi Daakir; Ewelina Rupnik; Marc Pierrot-Deseilligny
Year of publication: 2020
Projects: 1-No project
Pages: pp 51 - 66
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN terms] airborne sensor
[IGN terms] bundle adjustment
[IGN terms] correction
[IGN terms] buffering
[IGN terms] image distortion
[IGN terms] in-flight calibration
[IGN terms] UAV imagery
[IGN terms] MicMac
[IGN terms] shutter
[IGN terms] aerial photogrammetry
Abstract: (Author) The use of consumer-grade unmanned aerial vehicles (UAVs) is becoming increasingly ubiquitous in photogrammetric applications. A large proportion of consumer-grade UAVs are equipped with a CMOS image sensor and a rolling shutter. When imaging with a rolling shutter camera, the image sensor is exposed line by line, which can introduce additional distortions in image space since the UAV moves at a relatively high speed during aerial acquisitions. In this paper, we propose (1) an approach to calibrate the readout time of a rolling shutter camera, and (2) a two-step method to correct the image distortion introduced by this effect. The two-step method assumes that, during exposure, the change of camera orientation is negligible with respect to the change of camera position, which is often the case when the camera is fixed on a stabilized mount. First, the camera velocity is estimated from the results of an initial bundle block adjustment; then, one camera pose per scan line of the image sensor is recovered and the image observations are corrected. To evaluate the performance of the proposed method, four datasets in block and corridor configurations were acquired with the DJI Mavic 2 Pro and its original Hasselblad L1D-20c camera. The proposed method is implemented in MicMac, a free, open-source photogrammetric software package; comparisons are carried out with two other mainstream software packages, Agisoft Metashape and Pix4D, which also offer rolling shutter effect correction. For the block configuration datasets, the three packages give comparable results. Agisoft Metashape and Pix4D are sensitive to the flight configuration and encounter difficulties when processing the corridor configuration datasets. The proposed method shows good robustness in both block and corridor configurations, and is the only method that works in the corridor configuration. After applying the rolling shutter effect correction, the 3D accuracy is improved by 30–60% in block configuration and 15–25% in corridor configuration. A further improvement can be expected if precise image timestamps are available or if the camera positions can be extracted directly from GNSS data.
Record number: A2020-018
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.11.020
Online publication date: 16/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.11.020
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94456
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 51 - 66
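The two-step correction described in the abstract — estimate the camera velocity from an initial bundle adjustment, then recover one camera pose per scan line and correct the image observations — can be illustrated with a minimal sketch. The snippet below is not the MicMac implementation; it assumes a pinhole camera with intrinsics `K`, pose `(R, C0)`, a constant velocity `v` during exposure, a known per-line readout time `t_line`, and negligible rotation change, and solves for the scan line at which a 3D point is actually imaged by fixed-point iteration.

```python
import numpy as np

def project(K, R, C, X):
    """Pinhole projection of a 3D point X for rotation R, centre C, intrinsics K."""
    x = K @ (R @ (X - C))
    return x[:2] / x[2]

def rolling_shutter_reproject(K, R, C0, v, X, t_line, n_iter=5):
    """One camera centre per scan line: the centre drifts by v * t while the
    sensor reads out, with time measured from the exposure of row 0."""
    row = project(K, R, C0, X)[1]            # initial guess: global-shutter row
    for _ in range(n_iter):                  # fixed-point iteration on the row
        C_row = C0 + v * (row * t_line)      # centre when that row was exposed
        row = project(K, R, C_row, X)[1]
    return project(K, R, C0 + v * (row * t_line), X)
```

The difference between this reprojection and the global-shutter one is the distortion that the correction removes from the tie-point observations.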
Transferring deep learning models for cloud detection between Landsat-8 and Proba-V / Gonzalo Mateo-García in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Transferring deep learning models for cloud detection between Landsat-8 and Proba-V
Document type: Article/Communication
Authors: Gonzalo Mateo-García; Valero Laparra; Dan López-Puigdollers; Luis Gómez-Chova
Year of publication: 2020
Pages: pp 1 - 17
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] transfer learning
[IGN terms] deep learning
[IGN terms] data conversion
[IGN terms] cloud detection
[IGN terms] data sampling
[IGN terms] Landsat-8 imagery
[IGN terms] multispectral imagery
[IGN terms] PROBA imagery
[IGN terms] dataset
[IGN terms] mask
[IGN terms] convolutional neural network
[IGN terms] point thresholding
Abstract: (Author) Accurate cloud detection algorithms are mandatory to analyze the large streams of data coming from the different optical Earth observation satellites. Deep learning (DL) based cloud detection schemes provide very accurate cloud detection models. However, training these models for a given sensor requires large datasets of manually labeled samples, which are very costly or even impossible to create when the satellite has not been launched yet. In this work, we present an approach that exploits manually labeled datasets from one satellite to train deep learning models for cloud detection that can be applied (or transferred) to other satellites. We take into account the physical properties of the acquired signals and propose a simple transfer learning approach using the Landsat-8 and Proba-V sensors, whose images have different but similar spatial and spectral characteristics. Three types of experiments are conducted to demonstrate that transfer learning can work in both directions: (a) from Landsat-8 to Proba-V, where we show that models trained only with Landsat-8 data produce cloud masks 5 points more accurate than the current operational Proba-V cloud masking method; (b) from Proba-V to Landsat-8, where models that use only Proba-V data for training have an accuracy similar to the operational FMask on the publicly available Biome dataset (87.79–89.77% vs 88.48%); and (c) jointly from Proba-V and Landsat-8 to Proba-V, where we demonstrate that using both data sources jointly increases accuracy by 1–10 points when few labeled Proba-V images are available. These results highlight that, by taking advantage of existing publicly available labeled cloud masking datasets, we can create accurate deep learning based cloud detection models for new satellites without the burden of collecting and labeling a large dataset of images.
Record number: A2020-043
Author affiliation: non-IGN
Theme: IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.11.024
Online publication date: 10/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.11.024
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94522
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 1 - 17
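The transfer described in the abstract relies on making the two sensors' inputs comparable before the CNN is applied. The sketch below is only an illustration of that idea, not the authors' pipeline: the Landsat-8 band indices, the aggregation factor and the `cnn_trained_on_landsat8` model are hypothetical placeholders.

```python
import numpy as np

# Proba-V provides four bands (BLUE, RED, NIR, SWIR); the Landsat-8 indices
# below assume a 0-based stack of bands 1-7 and are illustrative only.
L8_TO_PROBAV_BANDS = [1, 3, 4, 5]            # B2 blue, B4 red, B5 NIR, B6 SWIR1

def landsat8_to_probav_like(l8_stack, block=10):
    """Crude domain adaptation: pick the spectrally closest Landsat-8 bands and
    aggregate the 30 m pixels towards the coarser Proba-V resolution."""
    bands = l8_stack[L8_TO_PROBAV_BANDS]                          # (4, H, W)
    c, h, w = bands.shape
    h, w = h - h % block, w - w % block
    blocks = bands[:, :h, :w].reshape(c, h // block, block, w // block, block)
    return blocks.mean(axis=(2, 4))                               # (4, H/b, W/b)

# probav_like = landsat8_to_probav_like(l8_stack)
# cloud_probability = cnn_trained_on_landsat8(probav_like)        # hypothetical model
```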
Optimising drone flight planning for measuring horticultural tree crop structure / Yu-Hsuan Tu in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Optimising drone flight planning for measuring horticultural tree crop structure
Document type: Article/Communication
Authors: Yu-Hsuan Tu; Stuart Phinn; Kasper Johansen; et al.
Year of publication: 2020
Pages: pp 83 - 96
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN terms] image correction
[IGN terms] tree detection
[IGN terms] image distortion
[IGN terms] metric camera calibration
[IGN terms] horticulture
[IGN terms] UAV imagery
[IGN terms] MicMac
[IGN terms] shutter
[IGN terms] aerial photogrammetry
[IGN terms] flight plan
[IGN terms] ground control point
[IGN terms] image quality
[IGN terms] Queensland (Australia)
[IGN terms] point cloud
Abstract: (Author) In recent times, multi-spectral drone imagery has proved to be a useful tool for measuring tree crop canopy structure. In this context, establishing the most appropriate flight planning variable settings is an essential consideration because they control the quality of the imagery and of the derived maps of tree and crop biophysical properties. During flight planning, variables including flight altitude, image overlap, flying direction, flying speed and solar elevation require careful consideration in order to produce the most suitable drone imagery. Previous studies have assessed the influence of individual variables on image quality, but the interaction of multiple variables has yet to be examined. This study assesses the influence of several flight variables on measures of data quality in each processing step, i.e. photo alignment, point cloud densification, 3D model building, and ortho-mosaicking. The analysis produced a drone flight planning and image processing workflow that delivers accurate measurements of tree crops, assessed through tie point quality, densified point cloud density, and the measurement accuracy of height and plant projective cover derived for individual trees within a commercial avocado orchard. Results showed that flying along the hedgerow, at high solar elevation and with low image pitch angles improved the data quality. The flying speed needs to be set so that the required forward overlap is achieved. The impact of each image acquisition variable is discussed in detail and protocols for flight planning optimisation for three scenarios with different drone settings are suggested. Establishing protocols that deliver optimal image acquisitions for the collection of drone data over horticultural tree crops will create greater confidence in the accuracy of subsequent algorithms and the resultant maps of biophysical properties.
Record number: A2020-044
Author affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.006
Online publication date: 18/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.006
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94524
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 83 - 96
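One concrete point in the abstract is that the flying speed must be set so that the required forward overlap is still achieved. Under a simple pinhole footprint model, a maximum ground speed follows from the altitude, the camera geometry, the target overlap and the camera trigger interval; the sketch below uses illustrative parameter values, not the settings from the study.

```python
def max_flying_speed(altitude_m, focal_mm, sensor_h_mm, overlap, trigger_s):
    """Maximum ground speed that still reaches the target forward overlap."""
    footprint_along_track = altitude_m * sensor_h_mm / focal_mm   # metres on the ground
    advance_per_image = footprint_along_track * (1.0 - overlap)   # allowed baseline
    return advance_per_image / trigger_s                          # m/s

# e.g. 80 m AGL, 10.3 mm focal length, 8.8 mm sensor height, 80 % overlap, 2 s trigger:
print(max_flying_speed(80, 10.3, 8.8, 0.8, 2.0))                  # ~6.8 m/s
```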
A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery
Document type: Article/Communication
Authors: Lucas Prado Osco; Mauro Dos Santos de Arruda; José Marcato Junior; et al.
Year of publication: 2020
Pages: pp 97 - 106
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] Brazil
[IGN terms] confidence map
[IGN terms] Citrus (genus)
[IGN terms] tree detection
[IGN terms] geolocation
[IGN terms] UAV imagery
[IGN terms] multispectral imagery
[IGN terms] vegetation inventory
[IGN terms] convolutional neural network
[IGN terms] orchard
Abstract: (Author) Visual inspection has been a common practice to determine the number of plants in orchards, which is a labor-intensive and time-consuming task. Deep learning algorithms have demonstrated great potential for counting plants on unmanned aerial vehicle (UAV)-borne sensor imagery. This paper presents a convolutional neural network (CNN) approach to address the challenge of estimating the number of citrus trees in highly dense orchards from UAV multispectral images. The method estimates a dense map with the confidence that a plant occurs in each pixel. A flight was conducted over an orchard of Valencia-orange trees planted in a linear fashion, using a multispectral camera with four bands in green, red, red-edge and near-infrared. The approach was assessed considering the individual bands and their combinations. A total of 37,353 trees, recorded as point features, were used to evaluate the method. A variation of σ (0.5, 1.0 and 1.5) was used to generate different ground truth confidence maps. Different numbers of stages (T) were also used to refine the predicted confidence map. To evaluate the robustness of our method, we compared it with two state-of-the-art object detection CNN methods (Faster R-CNN and RetinaNet). The results show better performance with the combination of green, red and near-infrared bands, achieving a Mean Absolute Error (MAE), Mean Square Error (MSE), R2 and Normalized Root-Mean-Squared Error (NRMSE) of 2.28, 9.82, 0.96 and 0.05, respectively. This band combination, when adopting σ = 1 and stage T = 8, resulted in an R2, MAE, Precision, Recall and F1 of 0.97, 2.05, 0.95, 0.96 and 0.95, respectively. Our method significantly outperforms the object detection methods for counting and geolocation. We conclude that our CNN approach, developed to estimate the number and geolocation of citrus trees in high-density orchards, is satisfactory and is an effective strategy to replace the traditional visual inspection method for determining the number of plants in orchards.
Record number: A2020-045
Author affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.010
Online publication date: 18/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.010
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94525
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 97 - 106
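The CNN in this paper outputs a per-pixel confidence map rather than bounding boxes, so counting and geolocating trees becomes a peak-detection problem on that map. The sketch below shows one generic way to post-process such a map (thresholded non-maximum suppression); it is not the authors' exact refinement procedure, and the threshold and minimum distance are illustrative.

```python
import numpy as np
from scipy import ndimage

def count_trees(confidence_map, threshold=0.5, min_dist=5):
    """Return the tree count and (row, col) peak positions of a confidence map."""
    local_max = ndimage.maximum_filter(confidence_map, size=min_dist)
    peaks = (confidence_map == local_max) & (confidence_map > threshold)
    rows, cols = np.nonzero(peaks)
    return len(rows), list(zip(rows.tolist(), cols.tolist()))
```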
Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods / Elisa Kamir in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods
Document type: Article/Communication
Authors: Elisa Kamir; François Waldner; Zvi Hochman
Year of publication: 2020
Pages: pp 124 - 135
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] machine learning
[IGN terms] Australia
[IGN terms] wheat (cereal)
[IGN terms] agricultural map
[IGN terms] climate
[IGN terms] accuracy estimation
[IGN terms] radial basis function
[IGN terms] satellite imagery
[IGN terms] plant growth model
[IGN terms] nonlinear model
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] regression
[IGN terms] crop yield
[IGN terms] time series
[IGN terms] seasonal variation
Abstract: (Author) Closing the yield gap between actual and potential wheat yields in Australia is important to meet the growing global demand for food. The identification of hotspots of the yield gap, where the potential for improvement is the greatest, is a necessary step towards this goal. While crop growth models are well suited to quantify potential yields, they lack the ability to provide accurate large-scale estimates of actual yields, owing to the sheer quantity of data they require for parameterisation. In this context, we sought to provide accurate estimates of actual wheat yields across the Australian wheat belt based on machine-learning regression methods, climate records and satellite image time series. Out of nine base learners and two ensembles, support vector regression with a radial basis function kernel emerged as the single best learner (root mean square error of 0.55 t ha⁻¹ and R2 of 0.77 at the pixel level). At the national scale, this model explained 73% of the yield variability observed across statistical units. Benchmark approaches based on peak Normalised Difference Vegetation Index (NDVI) and on a harvest index were largely outperformed by the machine-learning regression models (R2
Record number: A2020-046
Author affiliation: non-IGN
Theme: IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.11.008
Online publication date: 20/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.11.008
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94556
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 124 - 135
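The best single learner reported here is support vector regression with a radial basis function kernel, fed with climate records and satellite time-series features. A minimal scikit-learn sketch of that setup is shown below with synthetic stand-in data; the feature construction, hyper-parameters and cross-validation scheme of the paper are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X would hold per-pixel NDVI time-series and climate features, y the yield in t/ha.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                      # synthetic stand-in features
y = rng.normal(loc=2.0, scale=0.8, size=500)        # synthetic stand-in yields

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("cross-validated RMSE (t/ha):", -scores.mean())
```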
Automated extraction of lane markings from mobile LiDAR point clouds based on fuzzy inference / Heidar Rastiveis in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Automated extraction of lane markings from mobile LiDAR point clouds based on fuzzy inference
Document type: Article/Communication
Authors: Heidar Rastiveis; Alireza Shams; Wayne A. Sarasua; Jonathan Li
Year of publication: 2020
Pages: pp 149 - 166
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] motorway
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] automatic extraction
[IGN terms] point extraction
[IGN terms] road network extraction
[IGN terms] fuzzy inference
[IGN terms] mobile lidar
[IGN terms] 3D modelling
[IGN terms] point cloud
[IGN terms] Hough transform
Abstract: (Author) Mobile LiDAR systems (MLS) are rapid and accurate technologies for acquiring three-dimensional (3D) point clouds that can be used to generate 3D models of road environments. Because manual extraction of features such as road traffic signs, trees, and pavement markings from these point clouds is tedious and time-consuming, automatic extraction of these objects is desirable. This paper proposes a novel automatic method, based on fuzzy inference, to extract pavement lane markings (LMs) using point attributes of the MLS point cloud. The proposed method begins by dividing the MLS point cloud into a number of small sections (e.g. tiles) along the route. After initial filtering of non-ground points, each section is vertically aligned. Next, a number of candidate LM areas are detected using a Hough Transform (HT) algorithm and considering a buffer area around each line. The points inside each area are divided into "probable-LM" and "non-LM" clusters. After extracting geometric and radiometric descriptors for the "probable-LM" clusters and analyzing them in a fuzzy inference system, true-LM clusters are eventually detected. Finally, the extracted points are enhanced and transformed back to their original position. The efficiency of the method was tested on two different point cloud datasets along 15.6 km and 9.5 km roadway corridors. Comparing the LMs extracted by the algorithm with the manually extracted LMs, 88% of the LM lines were successfully extracted in both datasets.
Record number: A2020-047
Author affiliation: non-IGN
Theme: IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.009
Online publication date: 20/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.009
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94558
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 149 - 166
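Two steps of the pipeline lend themselves to a short sketch: candidate lane-marking lines are found with a Hough transform on a rasterized ground section, and each candidate cluster is then scored from its descriptors. The code below is a toy version of both steps, not the authors' fuzzy inference system; the intensity threshold, the expected marking width and the membership functions are assumptions.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def candidate_lane_lines(intensity_raster, num_peaks=10):
    """Candidate lines in a top-down raster of ground points (cell value =
    max lidar intensity), detected with a straight-line Hough transform."""
    binary = intensity_raster > np.percentile(intensity_raster, 95)   # bright paint
    h, theta, d = hough_line(binary)
    _, angles, dists = hough_line_peaks(h, theta, d, num_peaks=num_peaks)
    return list(zip(angles, dists))            # (angle, distance) per candidate

def fuzzy_lm_score(mean_intensity, width_m):
    """Toy fuzzy rule: bright AND roughly 0.15 m wide looks like a lane marking."""
    mu_bright = np.clip((mean_intensity - 0.3) / 0.4, 0.0, 1.0)
    mu_width = np.clip(1.0 - abs(width_m - 0.15) / 0.15, 0.0, 1.0)
    return min(mu_bright, mu_width)            # AND = minimum of memberships
```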
Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering / Shangpeng Sun in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering
Document type: Article/Communication
Authors: Shangpeng Sun; Changying Li; Peng Wah Chee; et al.
Year of publication: 2020
Pages: pp 195 - 207
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] 3D mapping
[IGN terms] region-based classification
[IGN terms] spatial distribution
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] vegetation extraction
[IGN terms] production management
[IGN terms] Gossypium (genus)
[IGN terms] phenology
[IGN terms] crop yield
[IGN terms] image segmentation
[IGN terms] point cloud
[IGN terms] structure-from-motion
[IGN terms] vegetation monitoring
Abstract: (Author) Three-dimensional high-throughput plant phenotyping techniques provide an opportunity to measure plant organ-level traits that can be highly useful to plant breeders. The number and locations of cotton bolls, which are the fruit of cotton plants and an important component of fiber yield, are arguably among the most important phenotypic traits, but they are complex to quantify manually. Hence, there is a need for effective and efficient cotton boll phenotyping solutions to support breeding research and monitor crop yield, leading to better production management systems. We developed a novel methodology for 3D cotton boll mapping within a plot in situ. Point clouds were reconstructed from multi-view images using the structure-from-motion algorithm. The method used a region-based classification algorithm that successfully accounted for noise due to sunlight. The developed density-based clustering method could estimate boll counts even when bolls were in direct contact with other bolls. By applying the method to point clouds from 30 plots of cotton plants, boll counts, boll volumes and positions were derived. The average accuracy of boll counting was up to 90%, and the R2 values between fiber yield and boll number, and between fiber yield and boll volume, were 0.87 and 0.66, respectively. The 3D spatial distribution of bolls could also be analyzed with this method. This method, which is low-cost and provides improved site-specific data on cotton bolls, can also be applied to other plant/fruit mapping analyses after some modification.
Record number: A2020-048
Author affiliation: non-IGN
Theme: IMAGERY
Type: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.011
Online publication date: 25/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.011
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94561
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 195 - 207
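Once boll points have been separated from the rest of the plant by the region-based classification, counting comes down to clustering the boll points in 3D. The sketch below uses DBSCAN from scikit-learn as a stand-in for the paper's density-based clustering and an axis-aligned bounding box as a crude volume proxy; `eps` and `min_samples` are illustrative values, not the published parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_bolls(boll_points, eps=0.02, min_samples=20):
    """Cluster 'boll' points (N x 3 array, metres) and report count, centroid
    and a bounding-box volume proxy per detected boll."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(boll_points)
    bolls = []
    for k in sorted(set(labels) - {-1}):                 # label -1 is noise
        pts = boll_points[labels == k]
        extent = pts.max(axis=0) - pts.min(axis=0)
        bolls.append({"centroid": pts.mean(axis=0),
                      "bbox_volume": float(np.prod(extent))})
    return len(bolls), bolls
```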