Descriptor
IGN terms > natural sciences > physics > image processing > photogrammetry > digital photogrammetry > orthoimage > colour orthoimage
colour orthoimage
Documents available in this category (24)



Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
[article]
Title: Transfer learning from citizen science photographs enables plant species identification in UAV imagery
Document type: Article/Communication
Authors: Salim Soltani ; Hannes Feilhauer ; Robbert Duker ; et al.
Publication year: 2022
Pagination: n° 100016
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] naturalist database
[IGN terms] convolutional neural network classification
[IGN terms] spatial distribution
[IGN terms] volunteered geographic information
[IGN terms] plant species
[IGN terms] vegetation filtering
[IGN terms] plant identification
[IGN terms] UAV imagery
[IGN terms] colour orthoimage
[IGN terms] citizen science
[IGN terms] semantic segmentation
Abstract: (author) Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular with data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is given by using knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could reveal great potential for application to bird’s eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. Therefore, we investigate firstly, if we can use crowd-sourced plant photographs for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Secondly, we test if the predictive performance can be increased by a priori selecting photographs that share a more similar perspective to the remote sensing data. We used two case studies to test our proposed approach with multiple RGB orthoimages acquired from UAV with the target plant species Fallopia japonica and Portulacaria afra respectively. Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively anticipate a common bottleneck for vegetation assessments and provides an example on how we can effectively harness the ever-increasing availability of crowd-sourced and big data for remote sensing applications.
Record number: A2022-488
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.ophoto.2022.100016
Online publication date: 23/05/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100956
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 5 (August 2022) . - n° 100016
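The workflow sketched in the abstract above (fine-tune an ImageNet-pretrained CNN on crowd-sourced photographs, then classify UAV orthoimage tiles) can be outlined as follows. This is a minimal sketch, not the authors' code: the folder layout, the ResNet-50 backbone, and all hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for the crowd-sourced photographs.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical layout: one subfolder per class (target species vs. background).
photos = datasets.ImageFolder("citizen_science_photos/", transform=tfm)
loader = DataLoader(photos, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(photos.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for x, y in loader:  # a single epoch, for brevity
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
# Mapping would then tile the UAV orthoimage and classify each tile,
# yielding a coarse presence map for the target species.
```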
Adaptive transfer of color from images to maps and visualizations / Mingguang Wu in Cartography and Geographic Information Science, Vol 49 n° 4 (July 2022)
[article]
Title: Adaptive transfer of color from images to maps and visualizations
Document type: Article/Communication
Authors: Mingguang Wu ; Yanjie Sun ; Yaqian Li
Publication year: 2022
Pagination: pp 289 - 312
General note: bibliography
Languages: English (eng)
Descriptors: [IGN terms] colour enhancement
[IGN terms] colour (cartographic design)
[IGN terms] vector data
[IGN terms] map aesthetics
[IGN terms] colour orthoimage
[IGN terms] semantic relation
[IGN terms] salience
[IGN terms] cartographic visualisation
[IGN subject headings] Geovisualisation
Abstract: (author) Because crafting attractive and effective colors from scratch is a high-effort and time-consuming process in map and visualization design, transferring color from an inspiration source to maps and visualizations is a promising technique for both novices and experts. To date, existing image-to-image color transfer methods suffer from ambiguities and inconsistencies; no computational approach is available to transfer color from arbitrary images to vector maps. To fill this gap, we propose a computational method that transfers color from arbitrary images to a vector map. First, we classify reference images into regions with measures of saliency. Second, we quantify the communicative quality and esthetics of colors in maps; we then transform the problem of color transfer into a dual-objective, multiple-constraint optimization problem. We also present a solution method that can create a series of optimal color suggestions and generate a communicative quality-esthetic compromise solution. We compare our method with an image-to-image method based on two sample maps and six reference images. The results indicate that our method is adaptive to mapping scales, themes, and regions. The evaluation also provides preliminary evidence that our method can achieve better communicative quality and harmony.
Record number: A2022-478
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/15230406.2021.1982009
Online publication date: 10/11/2021
Online: https://doi.org/10.1080/15230406.2021.1982009
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100826
in Cartography and Geographic Information Science > Vol 49 n° 4 (July 2022) . - pp 289 - 312
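As a toy illustration of the kind of first step such a pipeline needs (deriving candidate colours from a reference image before any optimisation), the sketch below clusters pixel colours with k-means. The file name and cluster count are placeholders; the authors' saliency weighting and dual-objective optimiser are not reproduced here.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Hypothetical reference image; any RGB file would do.
img = np.asarray(Image.open("reference.jpg").convert("RGB"))
pixels = img.reshape(-1, 3).astype(float)

# Cluster pixel colours; the centroids act as a candidate palette.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_.round().astype(int)
shares = np.bincount(kmeans.labels_) / kmeans.labels_.size
for colour, share in sorted(zip(palette.tolist(), shares), key=lambda t: -t[1]):
    print(f"RGB {colour} covers {share:.1%} of the reference image")
```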
Learning from the past: crowd-driven active transfer learning for semantic segmentation of multi-temporal 3D point clouds / Michael Kölle in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
[article]
Title: Learning from the past: crowd-driven active transfer learning for semantic segmentation of multi-temporal 3D point clouds
Document type: Article/Communication
Authors: Michael Kölle ; Volker Walter ; Uwe Soergel
Publication year: 2022
Pagination: pp 259 - 266
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] machine learning
[IGN terms] random forest classification
[IGN terms] labelled training data
[IGN terms] lidar data
[IGN terms] 3D geographic data
[IGN terms] multi-temporal data
[IGN terms] colour orthoimage
[IGN terms] crowdsourcing
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] geographic data processing
Abstract: (author) The main bottleneck of machine learning systems, such as convolutional neural networks, is the availability of labeled training data. Hence, much effort (and thus cost) is caused by setting up proper training data sets. However, models trained on specific data sets often perform unsatisfactorily when used to derive predictions for another (yet related) data set. We aim to overcome this problem by employing active learning to iteratively adapt an existing classifier to another domain. Precisely, we are concerned with semantic segmentation of 3D point clouds of multiple epochs. We first establish a Random Forest classifier for the first epoch of our data set and adapt it for successful prediction to two more temporally disjoint point clouds of the same but extended area. The point clouds, which are part of the newly introduced Hessigheim 3D benchmark data set, incorporate different characteristics with respect to the acquisition date and sensor configuration. We demonstrate that our workflow for domain adaptation is designed in such a way that it i) offers the possibility to greatly reduce labeling effort compared to a passive learning baseline or to an active learning baseline trained from scratch, if the domain gap is small enough and ii) at least does not cause more expenses (compared to a newly initialized active learning loop), if the domain gap is severe. The latter is especially beneficial in scenarios where the similarity of two different domains is hard to assess.
Record number: A2022-435
Author affiliation: non IGN
Theme: GEOMATIQUE/IMAGERIE
Nature: Article
DOI: 10.5194/isprs-annals-V-2-2022-259-2022
Online publication date: 17/05/2022
Online: https://doi.org/10.5194/isprs-annals-V-2-2022-259-2022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100743
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2022 (2022 edition) . - pp 259 - 266
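A minimal sketch of an uncertainty-driven loop in the spirit of this workflow, using scikit-learn's RandomForestClassifier with margin sampling. The per-point feature matrix, the labelling oracle standing in for the crowd, and the retraining strategy are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def adapt_by_active_learning(clf, X_new, oracle, rounds=5, batch=100):
    """Adapt a classifier trained on an earlier epoch to new-epoch points
    by querying labels for the least confident predictions (margin sampling)."""
    pool = np.arange(len(X_new))
    X_lab, y_lab = [], []
    for _ in range(rounds):
        proba = np.sort(clf.predict_proba(X_new[pool]), axis=1)
        margin = proba[:, -1] - proba[:, -2]      # small margin = uncertain
        ask = pool[np.argsort(margin)[:batch]]    # points sent to the crowd
        X_lab.append(X_new[ask])
        y_lab.append(oracle(ask))                 # stand-in for crowd labels
        pool = np.setdiff1d(pool, ask)
        # Retrain on the newly labelled points; mixing in source-epoch
        # training data would be an equally plausible variant.
        clf = RandomForestClassifier(n_estimators=200).fit(
            np.vstack(X_lab), np.concatenate(y_lab))
    return clf
```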
Building detection with convolutional networks trained with transfer learning / Simon Šanca in Geodetski vestnik, vol 65 n° 4 (December 2021 - February 2022)
[article]
Title: Building detection with convolutional networks trained with transfer learning
Document type: Article/Communication
Authors: Simon Šanca ; Krištof Oštir ; Alen Mangafić
Publication year: 2021
Pagination: pp 559 - 576
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] automatic object classification
[IGN terms] convolutional neural network classification
[IGN terms] building detection
[IGN terms] cadastral data
[IGN terms] aerial image
[IGN terms] colour-infrared image
[IGN terms] near-infrared image
[IGN terms] RGB image
[IGN terms] colour orthoimage
[IGN terms] image segmentation
[IGN terms] Slovenia
Abstract: (author) Building footprint detection based on orthophotos can be used to update the building cadastre. In recent years deep learning methods using convolutional neural networks have been increasingly used around the world. We present an example of automatic building classification using our datasets made of colour near-infrared orthophotos (NIR-R-G) and colour orthophotos (R-G-B). Building detection using pretrained weights from two large scale datasets Microsoft Common Objects in Context (MS COCO) and ImageNet was performed and tested. We applied the Mask Region Convolutional Neural Network (Mask R-CNN) to detect the building footprints. The purpose of our research is to identify the applicability of pre-trained neural networks on the data of another colour space to build a classification model without re-learning.
Record number: A2021-930
Author affiliation: non IGN
Theme: IMAGERIE/URBANISME
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.15292/geodetski-vestnik.2021.04.559-593
Online publication date: 03/11/2021
Online: https://doi.org/10.15292/geodetski-vestnik.2021.04.559-593
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99409
in Geodetski vestnik > vol 65 n° 4 (December 2021 - February 2022) . - pp 559 - 576
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
139-2021041 | RAB | Journal | Documentation centre | Storage L003 | Available
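Running a COCO-pretrained Mask R-CNN over an orthophoto tile takes only a few lines with torchvision, as sketched below; the tile path is a placeholder, and the fine-tuning for a dedicated building class that the study performs is omitted.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.COCO_V1
model = maskrcnn_resnet50_fpn(weights=weights).eval()

tile = read_image("orthophoto_tile.png")   # hypothetical RGB tile
batch = [weights.transforms()(tile)]       # preset preprocessing for this model
with torch.no_grad():
    pred = model(batch)[0]                 # dict: boxes, labels, scores, masks
keep = pred["scores"] > 0.5
print(f"{int(keep.sum())} detections above 0.5 confidence")
```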
Detection of aspen in conifer-dominated boreal forests with seasonal multispectral drone image point clouds / Alwin A. Hardenbol in Silva fennica, vol 55 n° 4 (September 2021)
[article]
Title: Detection of aspen in conifer-dominated boreal forests with seasonal multispectral drone image point clouds
Document type: Article/Communication
Authors: Alwin A. Hardenbol ; Anton Kuzmin ; Lauri Korhonen ; Pasi Korpelainen ; Timo Kumpula ; Matti Maltamo ; Jari Kouki
Publication year: 2021
Pagination: n° 10515
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] protected area
[IGN terms] discriminant analysis
[IGN terms] Betula (genus)
[IGN terms] tree detection
[IGN terms] boreal forest
[IGN terms] UAV imagery
[IGN terms] multispectral image
[IGN terms] colour orthoimage
[IGN terms] mixed stand
[IGN terms] Picea abies
[IGN terms] Pinus sylvestris
[IGN terms] Populus tremula
[IGN terms] point cloud
[IGN terms] seasonal variation
Abstract: (author) Current remote sensing methods can provide detailed tree species classification in boreal forests. However, classification studies have so far focused on the dominant tree species, with few studies on less frequent but ecologically important species. We aimed to separate European aspen (Populus tremula L.), a biodiversity-supporting tree species, from the more common species in European boreal forests (Pinus sylvestris L., Picea abies [L.] Karst., Betula spp.). Using multispectral drone images collected on five dates throughout one thermal growing season (May–September), we tested the optimal season for the acquisition of mono-temporal data. These images were collected from a mature, unmanaged forest. After conversion into photogrammetric point clouds, we segmented crowns manually and automatically and classified the species by linear discriminant analysis. The highest overall classification accuracy (95%) for the four species as well as the highest classification accuracy for aspen specifically (user’s accuracy of 97% and a producer’s accuracy of 96%) were obtained at the beginning of the thermal growing season (13 May) by manual segmentation. On 13 May, aspen had no leaves yet, unlike birches. In contrast, the lowest classification accuracy was achieved on 27 September during the autumn senescence period. This is potentially caused by high intraspecific variation in aspen autumn coloration but may also be related to our date of acquisition. Our findings indicate that multispectral drone images collected in spring can be used to locate and classify less frequent tree species highly accurately. The temporal variation in leaf and canopy appearance can alter the detection accuracy considerably.
Record number: A2021-735
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.14214/sf.10515
Online publication date: 14/07/2021
Online: https://doi.org/10.14214/sf.10515
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98691
in Silva fennica > vol 55 n° 4 (September 2021) . - n° 10515
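The classification step, linear discriminant analysis on per-crown features, is easy to outline with scikit-learn; the feature table below is random stand-in data, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # stand-in: mean crown reflectance in 5 bands
y = rng.integers(0, 4, size=200)   # stand-in labels: aspen, pine, spruce, birch

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```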
Production et mise à jour d’un produit BD Forêt V3 par apprentissage profond / Sébastien Giordano (2021)
Color and texture interpolation between orthoimagery and vector data / Charlotte Hoarau in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol II-3 W5 (October 2015)
Orthophotographie nocturne à haute résolution : la nuit, vue du ciel / Eva Frangiamone in Géomatique suisse, vol 112 n° 12 (décembre 2014)
Mapping fuels at the wildland-urban interface using colour ortho-images and Lidar data / Melissa F. Rosa in Geocarto international, vol 29 n° 5 - 6 (August - October 2014)
Segmentation d'images aériennes par coopération LPE-régions et LPE-contours, application à la caractérisation de toitures / Youssef El Merabet in Revue Française de Photogrammétrie et de Télédétection, n° 206 (Avril 2014)
Comment naviguer entre photoréalisme et abstraction topographique en co-visualisant des cartes et des photos ? [diaporama] / Charlotte Hoarau (2014)
From LiDAR data to forest representation on multi-scale / Freiderike Schwarzbach in Cartographic journal (the), vol 50 n° 1 (February 2013)
Analyse et traitement des données laser et images acquises sur le site de Saint-Siméon le stylite / Mariam Samaan (2012)
Détection et identification de zones de végétation arborée: utilisation conjointe d'images satellite RapidEye et de données BDOrtho / François Tassin (2012)
Étude préalable aux relevés architecturaux par photogrammétrie de l’Alexandrie du XIXe et XXe [19e et 20e] siècle / Mehdi Daakir (2012)