Descriptor
IGN terms > 1- Tools - instruments and methods > instrument > vehicle > unmanned vehicle > drone
Documents available in this category (218)
A deep learning approach to DTM extraction from imagery using rule-based training labels / Caroline M. Gevaert in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
[article]
Title: A deep learning approach to DTM extraction from imagery using rule-based training labels
Document type: Article/Communication
Authors: Caroline M. Gevaert, Author; Claudio Persello, Author; M. George Vosselman, Author
Year of publication: 2018
Pages: pp 106 - 123
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] rule base
[IGN terms] spatial benchmark
[IGN terms] Dar es Salaam (Tanzania)
[IGN terms] drone
[IGN terms] image sampling
[IGN terms] automatic extraction
[IGN terms] Kigali (Rwanda)
[IGN terms] Lombardy
[IGN terms] digital terrain model
[IGN terms] aerial photogrammetry
[IGN terms] convolutional neural network
Abstract: (author) Existing algorithms for Digital Terrain Model (DTM) extraction still face difficulties due to data outliers and geometric ambiguities in the scene, such as contiguous off-ground areas or sloped environments. We postulate that in such challenging cases the radiometric information contained in aerial imagery may be leveraged to distinguish between ground and off-ground objects. We propose a method for DTM extraction from imagery which first applies morphological filters to the Digital Surface Model to obtain candidate ground and off-ground training samples. These samples are used to train a Fully Convolutional Network (FCN) in the second step, which can then be used to identify ground samples for the entire dataset. The proposed method harnesses the power of state-of-the-art deep learning methods, while showing how they can be adapted to the application of DTM extraction by (i) automatically selecting and labelling dataset-specific samples which can be used to train the network, and (ii) adapting the network architecture to consider a larger surface area without unnecessarily increasing the computational burden. The method is successfully tested on four datasets, indicating that the automatic labelling strategy can achieve an accuracy comparable to the use of manually labelled training samples. Furthermore, we demonstrate that the proposed method outperforms two reference DTM extraction algorithms in challenging areas.
Record number: A2018-298
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.06.001
Online publication date: 15/06/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.06.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90410
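The first step of the two-step pipeline described in the abstract (morphological filtering of the DSM to generate rule-based ground/off-ground training labels, which then feed an FCN) can be illustrated as follows. This is a minimal sketch on a synthetic DSM, not the paper's implementation; the window size and the 1 m height threshold are illustrative choices.

```python
import numpy as np
from scipy.ndimage import grey_opening

# Synthetic 1 m resolution DSM: a gentle slope plus one "building" block.
x, _ = np.meshgrid(np.arange(100), np.arange(100))
dsm = 0.02 * x                      # sloping bare terrain
dsm[40:60, 40:60] += 10.0           # 20 x 20 px off-ground object

# Morphological opening with a window larger than the object footprint
# approximates the terrain surface underneath it.
terrain_estimate = grey_opening(dsm, size=(31, 31))

# Rule-based labels: pixels well above the opened surface become off-ground
# training candidates, the rest ground candidates.
height_above = dsm - terrain_estimate
labels = np.where(height_above > 1.0, 1, 0)   # 1 = off-ground, 0 = ground

print(labels[50, 50], labels[10, 10])  # prints: 1 0
```

In the paper these automatically labelled pixels then serve as training samples for the FCN; here they are simply printed.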
in ISPRS Journal of photogrammetry and remote sensing > vol 142 (August 2018) . - pp 106 - 123 [article]

Copies (3):
- 081-2018081 | RAB | Journal | Centre de documentation | In reserve L003 | Available
- 081-2018083 | DEP-EXM | Journal | LASTIG | Deposited in unit | Not for loan
- 081-2018082 | DEP-EAF | Journal | Nancy | Deposited in unit | Not for loan

Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques / Chinsu Lin in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)
[article]
Title: Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques
Document type: Article/Communication
Authors: Chinsu Lin, Author; Shih-Yu Chen, Author; Chia-Chun Chen, Author; Chia-Huei Tai, Author
Year of publication: 2018
Pages: pp 174 - 189
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN terms] object-based image analysis
[IGN terms] climate change
[IGN terms] tree growth
[IGN terms] drone
[IGN terms] leaf (vegetation)
[IGN terms] aerial image
[IGN terms] hyperspectral image
[IGN terms] RGB image
[IGN terms] vegetation index
[IGN terms] Cohen's kappa
[IGN terms] Taiwan
[IGN subject headings] Vegetation and climate change
Abstract: (author) Phenological events of tree leaves, from initiation to senescence, are generally influenced by temperature and water availability. Detection of newly grown leaves (NGL) is useful in diagnosing tree growth, tree stress and even climatic change. Utilizing very high resolution UAV images, this paper examines the feasibility of NGL detection using hyperspectral detection algorithms and anomaly detectors. The issues of pixel resolution and hard-decision thresholding in deriving accurate NGL maps are also explored. Results showed that the blind-detection RXD algorithms are not suitable for NGL detection due to the spectral similarity between NGL and both mature leaves and grass, while brighter pixels, such as those produced by soil and concrete materials, are more easily recognized as anomalies in contrast to forest. Matched filter (MF) based detectors are, however, able to accurately detect NGL over forest stands, and are even more effective in the sense of achieving satisfactory true positives and true negatives while producing minimal false alarms. Of the tested partial-knowledge MF algorithms, the covariance matched filter based distance (KMFD) detector performed very well, with an overall accuracy (OA) of 0.97 and a kappa coefficient (κ) of 0.60 on an image at the natural resolution of 6.75 cm. When a variety of mature-leaf nonobjective targets are included in the detection, the orthogonal subspace projector (OSP) tends to suppress NGL pixels as an unwanted signature, and this leads to poor detection. Conversely, the target constrained interference minimized filter (TCIMF) detector is still able to detect NGL effectively, with a satisfactory OA, through effective matched filtering of the target signature when the hard-decision threshold is set at a false-alarm probability of 5% or 1%.
From decimeter-resolution satellite images, the KMFD and TCIMF detectors are capable of achieving OA = 0.94 and κ = 0.56, or OA = 0.87 and κ = 0.48, for images with a resolution of 33.75 cm or 67.50 cm respectively. This indicates that hyperspectral target detection techniques have great potential for NGL detection via high spatial resolution satellite multispectral images.
Record number: A2018-294
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.05.022
Online publication date: 15/06/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.05.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90412
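A matched filter of the kind this abstract evaluates scores each pixel by projecting it onto the background-whitened target signature. The sketch below uses synthetic spectra, not the study's data; the signature shape, noise levels and the 0.5 detection threshold are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hyperspectral" pixels: 500 background and 20 target pixels in 50 bands.
bands = 50
background = rng.normal(0.0, 1.0, (500, bands))
target_sig = np.linspace(0.5, 1.5, bands)              # assumed target spectrum
targets = target_sig + rng.normal(0.0, 0.3, (20, bands))
pixels = np.vstack([background, targets])

# Classic matched filter: whiten with the scene covariance, project onto the
# target signature, and normalise so a pure target pixel scores 1.0.
mu = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)
w = np.linalg.solve(cov, target_sig - mu)
scores = (pixels - mu) @ w / ((target_sig - mu) @ w)

detected = scores > 0.5                                # illustrative threshold
print(f"target hit rate: {detected[500:].mean():.2f}, "
      f"false alarm rate: {detected[:500].mean():.2f}")
```

The constrained variants the paper compares (OSP, TCIMF) differ in how additional unwanted signatures are nulled out, but the whiten-and-project core is the same.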
in ISPRS Journal of photogrammetry and remote sensing > vol 142 (August 2018) . - pp 174 - 189 [article]

Copies (3)

Drones et SIG / Anonyme in Géomatique expert, n° 122 (mai-juin 2018)
[article]
Title: Drones et SIG
Document type: Article/Communication
Authors: Anonymous, Author
Year of publication: 2018
Pages: pp 48 - 55
Languages: French (fre)
Descriptor: [IGN subject headings] Image and data acquisition
[IGN terms] digital camera
[IGN terms] drone
[IGN terms] drone-captured image
Abstract: (publisher) The early days of drones are well and truly over. After a proliferation of players promising wonders with often poorly mastered technologies and means, the time has come for consolidation. The industry has split into three distinct segments. On one side, the hardware manufacturers, themselves divided into two branches: makers of "vectors", the flying platforms, and makers of optimized sensors; on the other, the software publishers, whether for mission planning, remote piloting of the drone in flight, or collecting and processing the data. In these areas, we set out to establish the state of the art and the trends for the coming months.
Record number: A2018-266
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90324
in Géomatique expert > n° 122 (mai-juin 2018) . - pp 48 - 55 [article]

Copies (1):
- IFN-001-P002060 | PER | Journal | Nogent-sur-Vernisson | Salle périodiques | Not for loan

Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography / Lukas Roth in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Title: Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography
Document type: Article/Communication
Authors: Lukas Roth, Author; Helge Aasen, Author; Achim Walter, Author; Frank Liebisch, Author
Year of publication: 2018
Pages: pp 161 - 175
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] crops
[IGN terms] drone
[IGN terms] Glycine max
[IGN terms] aerial image
[IGN terms] RGB image
[IGN terms] leaf area index
[IGN terms] Leaf Area Index
[IGN terms] geometric modelling of image acquisition
[IGN terms] georeferenced orthoimage
[IGN terms] image segmentation
[IGN terms] 3D simulation
[IGN terms] Switzerland
Abstract: (publisher) Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI based on RGB images taken by an unmanned aerial system (UAS) is introduced. Soybean was taken as the model crop of investigation. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped develop and verify the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The measured LAI prediction accuracy was comparable with that of a gap fraction-based handheld device (…, RMSE of … m² m⁻²) and correlated well with destructive LAI measurements (…, RMSE of … m² m⁻²). These results indicate that, within the range (LAI …) for which the method was tested, extracting LAI from UAS-derived RGB images using viewing geometry information represents a valid alternative to destructive and optical handheld-device LAI measurements in soybean. Thereby, we open the door to automated, high-throughput assessment of LAI in plant and crop science.
Record number: A2018-287
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.012
Online publication date: 07/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90402
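The gap fraction theory the abstract refers to inverts the Beer-Lambert attenuation law P(θ) = exp(−G · LAI / cos θ) for LAI. A minimal sketch of that inversion, with an illustrative gap fraction and the common spherical-leaf-angle value G = 0.5 (not values from the study):

```python
import math

# Beer-Lambert gap-fraction model: P(theta) = exp(-G * LAI / cos(theta)).
# G = 0.5 corresponds to the common spherical leaf angle distribution.
def lai_from_gap_fraction(gap_fraction: float, view_zenith_deg: float,
                          extinction_g: float = 0.5) -> float:
    """Solve the gap-fraction model above for LAI."""
    cos_t = math.cos(math.radians(view_zenith_deg))
    return -math.log(gap_fraction) * cos_t / extinction_g

# A canopy letting 20% of nadir rays reach the ground:
print(round(lai_from_gap_fraction(0.2, 0.0), 2))  # prints: 3.22
```

The paper's contribution is to estimate the gap fraction from many oblique UAS views rather than from a single nadir measurement; the inversion step itself stays as above.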
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018) . - pp 161 - 175 [article]

Copies (3):
- 081-2018071 | RAB | Journal | Centre de documentation | In reserve L003 | Available
- 081-2018073 | DEP-EXM | Journal | LASTIG | Deposited in unit | Not for loan
- 081-2018072 | DEP-EAF | Journal | Nancy | Deposited in unit | Not for loan

Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification / Tao Liu in ISPRS Journal of photogrammetry and remote sensing, vol 139 (May 2018)
[article]
Title: Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification
Document type: Article/Communication
Authors: Tao Liu, Author; Amr Abd-Elrahman, Author
Year of publication: 2018
Pages: pp 154 - 170
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] random forest classification
[IGN terms] support vector machine classification
[IGN terms] drone
[IGN terms] orthoimage
[IGN terms] convolutional neural network
[IGN terms] wetland
Abstract: (author) A deep convolutional neural network (DCNN) requires massive training datasets to trigger its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of an Unmanned Aerial Systems (UAS) orthoimage, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross-validation results show the mean overall classification accuracy increasing substantially, from 65.32% when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of the support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the accuracy advantage of the DCNN over the traditional classifiers is more obvious under the proposed multi-view OBIA framework than under the traditional OBIA framework.
Record number: A2018-114
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.03.006
Online: https://doi.org/10.1016/j.isprsjprs.2018.03.006
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89550
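The voting procedure MODe uses to merge per-view classifications can be sketched as a simple majority vote per image object. The class names below are made up for illustration, not the study's legend:

```python
from collections import Counter

# Majority vote over the per-view predictions for one image object.
def vote(per_view_predictions):
    return Counter(per_view_predictions).most_common(1)[0][0]

# Hypothetical class labels from five views of the same object:
views = ["wetland", "wetland", "upland", "wetland", "water"]
print(vote(views))  # prints: wetland
```

Because each view sees the object under a different geometry, disagreeing views act as independent weak classifiers, which is why the vote can lift accuracy over a single-orthoimage classification.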
in ISPRS Journal of photogrammetry and remote sensing > vol 139 (May 2018) . - pp 154 - 170 [article]

Copies (1):
- 081-2018051 | RAB | Journal | Centre de documentation | In reserve L003 | Available

Other documents in this category:
- A novel orthoimage mosaic method using a weighted A∗ algorithm : Implementation and evaluation / Maoteng Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
- Analyse du risque végétation dans les emprises ferroviaires à partir de données LiDAR acquises par drones / Luc Perrin in XYZ, n° 154 (mars - mai 2018)
- Image classification-based ground filtering of point clouds extracted from UAV-based aerial photos / Volkan Yilmaz in Geocarto international, vol 33 n° 3 (March 2018)
- Littoral, "Ricochet" ausculte / Marielle Mayo in Géomètre, n° 2155 (février 2018)
- Sentinel-2 data analysis and comparison with UAV multispectral images for precision viticulture / Frederica Nonni in GI Forum, vol 2018 n° 1 ([01/01/2018])
- TERRISCOPE, une nouvelle plateforme mutualisée de recherche en télédétection optique à partir d’avions et de drones / Yannick Boucher (2018)