Documents available in this category (1415)
Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
[article]
Title: Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning
Document type: Article/Communication
Authors: Benjamin Kellenberger; Diego Marcos; Sylvain Lobry; Devis Tuia
Publication year: 2019
Pages: pp 9524 - 9533
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage profond
[Termes IGN] classification orientée objet
[Termes IGN] classification par réseau neuronal
[Termes IGN] détection d'objet
[Termes IGN] données localisées
[Termes IGN] échantillonnage de données
[Termes IGN] faune locale
[Termes IGN] image captée par drone
[Termes IGN] Namibie
[Termes IGN] objet mobile
[Termes IGN] réalité de terrain
[Termes IGN] recensement
Abstract: (author) We present an Active Learning (AL) strategy for reusing a deep Convolutional Neural Network (CNN)-based object detector on a new data set. This is of particular interest for wildlife conservation: given a set of images acquired with an Unmanned Aerial Vehicle (UAV) and manually labeled ground truth, our goal is to train an animal detector that can be reused for repeated acquisitions, e.g., in follow-up years. Domain shifts between data sets typically prevent such a direct model application. We thus propose to bridge this gap using AL and introduce a new criterion called Transfer Sampling (TS). TS uses Optimal Transport (OT) to find corresponding regions between the source and the target data sets in the space of CNN activations. The CNN scores in the source data set are used to rank the samples according to their likelihood of being animals, and this ranking is transferred to the target data set. Unlike conventional AL criteria that exploit model uncertainty, TS focuses on very confident samples, thus allowing quick retrieval of true positives in the target data set, where positives are typically extremely rare and difficult to find by visual inspection. We extend TS with a new window cropping strategy that further accelerates sample retrieval. Our experiments show that with both strategies combined, less than half a percent of oracle-provided labels are enough to find almost 80% of the animals in challenging sets of UAV images, beating all baselines by a margin.
Record number: A2019-598
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2927393
Online publication date: 20/08/2019
Online: http://doi.org/10.1109/TGRS.2019.2927393
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94592
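The Transfer Sampling idea summarized in the abstract, which couples source and target CNN activations with optimal transport and then moves the source confidence ranking onto the target set, can be sketched as follows. This is a simplified numpy-only illustration with synthetic features, a basic Sinkhorn solver, and invented array names and sizes; it is not the authors' implementation.

```python
import numpy as np

def sinkhorn_plan(C, eps=0.1, n_iter=500):
    """Entropic-regularized optimal transport plan between uniform marginals."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
# Hypothetical CNN activation vectors for source and target image patches
src_feat = rng.normal(size=(20, 8))
tgt_feat = rng.normal(size=(30, 8))
# Detector confidence of "animal", known only on the labeled source set
src_scores = rng.uniform(size=20)

# Pairwise squared-Euclidean cost in activation space, scaled to [0, 1]
C = ((src_feat[:, None, :] - tgt_feat[None, :, :]) ** 2).sum(-1)
C /= C.max()
P = sinkhorn_plan(C)

# Transfer the confidence ranking: each target patch inherits score mass
# from the source patches it is coupled with; query top-ranked patches first
tgt_scores = P.T @ src_scores
query_order = np.argsort(-tgt_scores)
```

In a real setting the features would come from the detector's activation space and the solver would typically be a dedicated OT library rather than this hand-rolled Sinkhorn loop.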
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 12 (December 2019) . - pp 9524 - 9533
[article]

Innovative techniques of photogrammetry for 3D modeling / Vicenzo Barrile in Applied geomatics, Vol 11 n° 4 (December 2019)
[article]
Title: Innovative techniques of photogrammetry for 3D modeling
Document type: Article/Communication
Authors: Vicenzo Barrile; Alice Pozzoli; Giuliana Bilotta; Antonino Fotia
Publication year: 2019
Pages: pp 353 - 369
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Photogrammétrie analytique
[Termes IGN] Italie
[Termes IGN] modèle non linéaire
[Termes IGN] modélisation 3D
[Termes IGN] orientation absolue
[Termes IGN] orientation automatique
[Termes IGN] orientation relative
[Termes IGN] Raspberry Pi
[Termes IGN] reconstruction d'image
[Termes IGN] structure-from-motion
[Termes IGN] vision par ordinateur
Abstract: (author) This note presents experimental results from the application of two innovative photogrammetric techniques (with particular reference to non-conventional photogrammetric applications) for producing time-space 3D models of the marine surface. The first method, Automatic Three Images Processing (ATIP), proposes simple procedures to solve typical non-linear problems of analytical photogrammetry. In particular, after validating the two-image orientation technique (a two-step procedure comprising relative orientation and absolute orientation, both characterized by non-linear functions), we propose a procedure for the automatic orientation of three images (introducing a third image removes the need for human judgment in selecting the final solution). The second method, Computer Vision Raspberry Pi (CVR), applies the computer-vision structure-from-motion technique with five appropriately synchronized cameras that acquire the various frames simultaneously, using an acquisition system built on Raspberry Pi. The experimentation was conducted both in the laboratory (on a model of a phenomenon typical of the Alpine Valtellina region in northern Italy) and directly at sea (on a portion of the marine surface near the Reggio Calabria seafront). The results show substantial agreement both between the two methods and with the actual data measured at sea using dedicated instrumentation.
Record number: A2019-533
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s12518-019-00264-9
Online publication date: 22/05/2019
Online: https://doi.org/10.1007/s12518-019-00264-9
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94126
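The two-step orientation the abstract refers to includes an absolute-orientation phase. As a hedged illustration of that sub-problem only (not the authors' ATIP procedure, which handles the non-linear three-image case), the classical closed-form similarity-transform estimate in the style of Horn and Umeyama can be sketched in a few lines; the point coordinates below are synthetic.

```python
import numpy as np

def absolute_orientation(src, dst):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    minimizing ||dst - (s * src @ R.T + t)||^2 over corresponding 3D points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    H = xd.T @ xs / len(src)                   # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform from five model points
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Y = 2.0 * X @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = absolute_orientation(X, Y)
```

With noise-free correspondences the known scale, rotation, and translation are recovered exactly; in practice this step would follow a relative-orientation solution and be refined non-linearly.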
in Applied geomatics > Vol 11 n° 4 (December 2019) . - pp 353 - 369
[article]

Matching of TerraSAR-X derived ground control points to optical image patches using deep learning / Tatjana Bürgmann in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Matching of TerraSAR-X derived ground control points to optical image patches using deep learning
Document type: Article/Communication
Authors: Tatjana Bürgmann; Wolfgang Koppe; Michael Schmitt
Publication year: 2019
Pages: pp 241 - 248
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] appariement d'images
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] géolocalisation
[Termes IGN] image multicapteur
[Termes IGN] image optique
[Termes IGN] image Pléiades
[Termes IGN] image radar moirée
[Termes IGN] image Sentinel-MSI
[Termes IGN] image Sentinel-SAR
[Termes IGN] image TerraSAR-X
[Termes IGN] point d'appui
Abstract: (author) High resolution synthetic aperture radar (SAR) satellites like TerraSAR-X are capable of acquiring images exhibiting an absolute geolocation accuracy within a few centimeters, mainly because of the availability of precise orbit information and by compensating range delay errors due to atmospheric conditions. In contrast, satellite images from optical missions generally exhibit comparably low geolocation accuracies because of the propagation of errors in angular measurements over large distances. However, a variety of remote sensing applications, such as change detection, surface movement monitoring or ice flow measurements, require precisely geo-referenced and co-registered satellite images. By using Ground Control Points (GCPs) derived from TerraSAR-X, the absolute geolocation accuracy of optical satellite images can be improved. For this purpose, the corresponding matching points in the optical images need to be localized. In this paper, a deep learning based approach is investigated for an automated matching of SAR-derived GCPs to optical image elements. To this end, a convolutional neural network is pretrained with medium resolution Sentinel-1 and Sentinel-2 imagery and fine-tuned on precisely co-registered TerraSAR-X and Pléiades training image pairs to learn a common descriptor representation. By using these descriptors, the similarity of SAR and optical image patches can be calculated. This similarity metric is then used in a sliding window approach to identify the matching points in the optical reference image. Subsequently, the derived points can be utilized for co-registration of the underlying images. The network is evaluated over nine study areas showing airports and their rural surroundings from several different countries around the world. The results show that, based on TerraSAR-X-derived GCPs, corresponding points in the optical image can be identified automatically and reliably with pixel-level localization accuracy.
Record number: A2019-548
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.010
Online publication date: 05/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.010
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94194
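The sliding-window matching step described in the abstract, scoring descriptor similarity between a SAR-derived patch and every candidate window in the optical reference image, can be sketched as follows. The descriptor here is a trivial unit-norm stand-in for the learned Siamese CNN descriptor, and the images are synthetic; only the search logic reflects the paper's approach.

```python
import numpy as np

def descriptor(patch):
    """Stand-in for the learned CNN descriptor: a unit-norm flattened patch."""
    v = patch.ravel().astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def match_point(sar_patch, optical, step=1):
    """Slide a window over the optical image; return the top-left corner of
    the window whose descriptor is most similar (cosine) to the SAR patch."""
    h, w = sar_patch.shape
    q = descriptor(sar_patch)
    best, best_rc = -np.inf, (0, 0)
    H, W = optical.shape
    for r in range(0, H - h + 1, step):
        for c in range(0, W - w + 1, step):
            s = float(q @ descriptor(optical[r:r + h, c:c + w]))
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best

# Synthetic check: plant a distinctive patch and search for it
rng = np.random.default_rng(2)
optical = rng.uniform(size=(40, 40))
template = rng.uniform(size=(8, 8)) + 5.0   # bright, distinctive pattern
optical[12:20, 25:33] = template
(r, c), score = match_point(template, optical)
```

A real pipeline would replace `descriptor` with the fine-tuned network's embedding and restrict the search to a window around the predicted GCP location rather than scanning the whole image.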
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019) . - pp 241 - 248
[article]
Copies (3):
Barcode | Call number | Format | Location | Section | Availability
081-2019121 | RAB | Revue | Centre de documentation | En réserve L003 | Available
081-2019123 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Not for loan
081-2019122 | DEP-RECF | Revue | Nancy | Dépôt en unité | Not for loan

Un modèle de transcription pour identifier et analyser les objets de référence et les relations spatiales utilisées pour se localiser en montagne / Mattia Bunel in Cartes & Géomatique, n° 241-242 (décembre 2019)
[article]
Title: Un modèle de transcription pour identifier et analyser les objets de référence et les relations spatiales utilisées pour se localiser en montagne
Document type: Article/Communication
Authors: Mattia Bunel; Cécile Duchêne; Ana-Maria Olteanu-Raimond; Marlène Villanova-Oliver; Grégoire Bonhoure; Tiphaine Jouan
Publication year: 2019
Projects: CHOUCAS / Olteanu-Raimond, Ana-Maria
Conference: ICC 2019, 29th International Cartographic Conference, ICA, Mapping everything for everyone, 15/07/2019 - 20/07/2019, Tokyo, Japan, Open Access Proceedings of the ICA
Pages: pp 107 - 115
General note: bibliography
Language: French (fre)
Descriptors: [Vedettes matières IGN] Géomatique
[Termes IGN] croquis topographique
[Termes IGN] géoréférencement indirect
[Termes IGN] montagne
[Termes IGN] objet géographique
[Termes IGN] relation spatiale
[Termes IGN] représentation des connaissances
[Termes IGN] secours d'urgence
[Termes IGN] transcription
Abstract: (author) The CHOUCAS project aims to help mountain rescue teams locate victims who describe their position using spatial relations and geographic objects. In this context, the study presented in this article seeks to better understand the reference objects and spatial relations used to describe a position in a mountain setting, with the goal of designing tools to assist rescuers. Recordings of emergency calls were used as source material. The core of the work consists in designing a model for transcribing the location information contained in these calls while structuring it. A first analysis of the transcribed calls shows that static projective or directional spatial relations are the most frequently used, and that a finer classification of reference objects and spatial relations is needed. To present the location information contained in a call in a synthetic way, an additional representation by means of a sketch map with dedicated symbolization is proposed.
Record number: A2019-654
Author affiliation: LASTIG COGIT+Ext (2012-2019)
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueNat
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97839
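The abstract describes structuring each emergency call as a set of location clues built from reference objects and spatial relations. A minimal illustrative schema for such a transcription might look like the sketch below; all class names, field names, and category labels are hypothetical and do not reproduce the CHOUCAS model.

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceObject:
    """A named geographic feature the victim refers to (hypothetical classes)."""
    name: str
    category: str            # e.g. "lake", "peak", "trail"

@dataclass
class SpatialRelation:
    """One location clue: a relation linking the victim to a reference object."""
    relation_type: str       # e.g. "projective", "directional", "metric"
    expression: str          # wording as heard in the call
    target: ReferenceObject

@dataclass
class CallTranscription:
    call_id: str
    clues: list = field(default_factory=list)

    def add_clue(self, clue: SpatialRelation):
        self.clues.append(clue)

    def count_by_type(self):
        """Tally clues per relation type, as in the article's first analysis."""
        counts = {}
        for c in self.clues:
            counts[c.relation_type] = counts.get(c.relation_type, 0) + 1
        return counts

# Example call: "I am north of the lake, below the ridge"
call = CallTranscription("demo-001")
call.add_clue(SpatialRelation("projective", "north of",
                              ReferenceObject("Lac Blanc", "lake")))
call.add_clue(SpatialRelation("projective", "below",
                              ReferenceObject("ridge", "relief")))
```

Counting clues by relation type over many transcribed calls is what lets the study observe that static projective or directional relations dominate.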
in Cartes & Géomatique > n° 241-242 (décembre 2019) . - pp 107 - 115
[article]
Copies (2):
Barcode | Call number | Format | Location | Section | Availability
021-2019022 | SL | Revue | Centre de documentation | Revues en salle | Available
021-2019021 | SL | Revue | Centre de documentation | Revues en salle | Available

Comparison between convolutional neural networks and random forest for local climate zone classification in mega urban areas using Landsat images / Cheolhee Yoo in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
[article]
Title: Comparison between convolutional neural networks and random forest for local climate zone classification in mega urban areas using Landsat images
Document type: Article/Communication
Authors: Cheolhee Yoo; Daehyeon Han; Jungho Im; Benjamin Bechtel
Publication year: 2019
Pages: pp 155 - 170
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] apprentissage automatique
[Termes IGN] apprentissage profond
[Termes IGN] Chicago (Illinois)
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] climat urbain
[Termes IGN] Hong-Kong
[Termes IGN] ilot thermique urbain
[Termes IGN] image Landsat-8
[Termes IGN] Madrid (Espagne)
[Termes IGN] Rome
[Termes IGN] World Urban Database and Access Portal Tools
[Termes IGN] zone urbaine dense
Abstract: (author) The Local Climate Zone (LCZ) scheme is a classification system providing a standardization framework to present the characteristics of urban forms and functions, especially for urban heat island (UHI) research. Landsat-based 100 m resolution LCZ maps have been classified by the World Urban Database and Access Portal Tools (WUDAPT) method using a random forest (RF) machine learning classifier. Some studies have proposed modified RF and convolutional neural network (CNN) approaches. This study aims to compare CNN with an RF classifier for LCZ mapping in great detail. We designed five schemes (three RF-based schemes (S1-S3) and two CNN-based ones (S4-S5)), which consist of various combinations of input features from bitemporal Landsat 8 data over four global mega cities: Rome, Hong Kong, Madrid, and Chicago. Among the five schemes, the CNN-based one incorporating larger neighborhood information showed the best classification performance. When compared to the WUDAPT workflow, the overall accuracies for all land cover classes (OA) and for urban LCZ types (i.e., LCZ1-10; OAurb) increased by about 6-8% and 10-13%, respectively, for the four cities. The transferability of LCZ models across the four cities was evaluated, showing that CNN consistently resulted in higher accuracy (increased by about 7-18% and 18-29% for OA and OAurb, respectively) than RF. This study revealed that the CNN classifier performed particularly well for LCZ classes in which buildings were mixed with trees, or in which buildings or plants were sparsely distributed. The research findings can provide a basis for guiding future LCZ classification using deep learning.
Record number: A2019-495
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.009
Online publication date: 19/09/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.009
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93728
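The key input difference between the RF-based and CNN-based schemes compared in the abstract, per-pixel spectral features versus a neighborhood window of context, can be illustrated with a short sketch. Band count and window size here are hypothetical, and the arrays are synthetic; the point is only the contrast between the two input shapes.

```python
import numpy as np

def per_pixel_features(img, r, c):
    """RF-style input: the spectral vector of a single pixel."""
    return img[r, c]

def neighborhood_patch(img, r, c, half=2):
    """CNN-style input: a (2*half+1)-pixel square of context around the
    target pixel, with reflection padding so edge pixels are still valid."""
    padded = np.pad(img, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[r:r + 2 * half + 1, c:c + 2 * half + 1, :]

rng = np.random.default_rng(3)
img = rng.uniform(size=(10, 10, 6))   # e.g. six Landsat 8 bands

px = per_pixel_features(img, 4, 7)       # shape (6,)
patch = neighborhood_patch(img, 4, 7)    # shape (5, 5, 6)
```

The abstract's finding that the CNN scheme with a larger neighborhood performed best corresponds to increasing `half` here: the classifier then sees the mixed building/tree context around a pixel rather than its spectrum alone.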
in ISPRS Journal of photogrammetry and remote sensing > vol 157 (November 2019) . - pp 155 - 170
[article]
Copies (3):
Barcode | Call number | Format | Location | Section | Availability
081-2019111 | RAB | Revue | Centre de documentation | En réserve L003 | Available
081-2019113 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Not for loan
081-2019112 | DEP-RECF | Revue | Nancy | Dépôt en unité | Not for loan

Context pyramidal network for stereo matching regularized by disparity gradients / Junhua Kang in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery / Yuri Shendryk in ISPRS Journal of photogrammetry and remote sensing, vol 157 (November 2019)
A double-strategy-check active learning algorithm for hyperspectral image classification / Ying Cui in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 11 (November 2019)
Immigration and future housing needs in Switzerland: Agent-based modelling of agglomeration Lausanne / Marcello Marini in Computers, Environment and Urban Systems, vol 78 (November 2019)
Impact of network constraining on the terrestrial reference frame realization based on SLR observations to LAGEOS / Radoslaw Zajdel in Journal of geodesy, vol 93 n°11 (November 2019)
Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery / Ruchan Dong in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
Combining machine learning and compact polarimetry for estimating soil moisture from C-Band SAR data / Emanuele Santi in Remote sensing, Vol 11 n° 20 (October-2 2019)
Comparative analysis of the accuracy of surface soil moisture estimation from the C- and L-bands / Mohammad El Hajj in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
A machine learning approach to detect crude oil contamination in a real scenario using hyperspectral remote sensing / Ran Pelta in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
Mapping dead forest cover using a deep convolutional neural network and digital aerial photography / Jean-Daniel Sylvain in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)