Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > analyse d'image orientée objet
analyse d'image orientée objet
Documents available in this category (214)
Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning / Clément Dechesne in Remote sensing, Vol 11 n° 24 (December-2 2019)
[article]
Title: Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning
Document type: Article/Communication
Authors: Clément Dechesne, Author; Sébastien Lefèvre, Author; Rodolphe Vadaine, Author; Guillaume Hajduch, Author; Ronan Fablet, Author
Publication year: 2019
Projects: SESAME / Fablet, Ronan
Pagination: n° 2997
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] détection de cible
[Termes IGN] image Sentinel-SAR
[Termes IGN] navire
[Termes IGN] objet mobile
Abstract: (author) The monitoring and surveillance of maritime activities are critical issues in both military and civilian fields, including among others fisheries’ monitoring, maritime traffic surveillance, coastal and at-sea safety operations, and tactical situations. In operational contexts, ship detection and identification are traditionally performed by a human observer who identifies all kinds of ships from a visual analysis of remotely sensed images. Such a task is very time-consuming and cannot be conducted at a very large scale, while Sentinel-1 SAR data now provide a regular and worldwide coverage. Meanwhile, with the emergence of GPUs, deep learning methods are now established as state-of-the-art solutions for computer vision, replacing human intervention in many contexts. They have been shown to be adapted for ship detection, most often with very high resolution SAR or optical imagery. In this paper, we go one step further and investigate a deep neural network for the joint classification and characterization of ships from SAR Sentinel-1 data. We benefit from the synergies between AIS (Automatic Identification System) and Sentinel-1 data to build significant training datasets. We design a multi-task neural network architecture composed of one joint convolutional network connected to three task-specific networks, namely for ship detection, classification, and length estimation. The experimental assessment shows that our network provides promising results, with accurate classification and length performance (classification overall accuracy: 97.25%, mean length error: 4.65 m ± 8.55 m).
Record number: A2019-632
Authors' affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/rs11242997
Online publication date: 13/12/2019
Online: https://doi.org/10.3390/rs11242997
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95325
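The multi-task design described in the abstract, one shared convolutional trunk feeding three task-specific heads (detection, classification, length estimation), can be sketched in miniature. This is an illustrative NumPy toy, not the authors' architecture: the layer sizes, the `shared_features` helper, and the five-class head are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the shared convolutional trunk: one dense layer + ReLU.
W_shared = rng.normal(size=(64, 32))

def shared_features(x):
    """Map a flattened image patch (64,) to a shared representation (32,)."""
    return np.maximum(x @ W_shared, 0.0)

# Three task-specific heads operating on the same shared representation.
W_detect = rng.normal(size=(32, 1))   # ship / no-ship
W_class = rng.normal(size=(32, 5))    # 5 hypothetical ship categories
W_length = rng.normal(size=(32, 1))   # length in metres (regression)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_task_forward(x):
    h = shared_features(x)
    detection = 1.0 / (1.0 + np.exp(-(h @ W_detect)[0]))  # sigmoid score
    classes = softmax(h @ W_class)                        # class posterior
    length = (h @ W_length)[0]                            # unbounded scalar
    return detection, classes, length

det, cls, length = multi_task_forward(rng.normal(size=64))
```

Because the heads share the trunk, gradients from all three losses would update the same early features during training, which is the synergy the multi-task setup exploits.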
in Remote sensing > Vol 11 n° 24 (December-2 2019) . - n° 2997 [article]

Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
[article]
Title: Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning
Document type: Article/Communication
Authors: Benjamin Kellenberger, Author; Diego Marcos, Author; Sylvain Lobry, Author; Devis Tuia, Author
Publication year: 2019
Pagination: pp 9524 - 9533
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage profond
[Termes IGN] classification orientée objet
[Termes IGN] classification par réseau neuronal
[Termes IGN] détection d'objet
[Termes IGN] données localisées
[Termes IGN] échantillonnage de données
[Termes IGN] faune locale
[Termes IGN] image captée par drone
[Termes IGN] Namibie
[Termes IGN] objet mobile
[Termes IGN] réalité de terrain
[Termes IGN] recensement
Abstract: (author) We present an Active Learning (AL) strategy for reusing a deep Convolutional Neural Network (CNN)-based object detector on a new data set. This is of particular interest for wildlife conservation: given a set of images acquired with an Unmanned Aerial Vehicle (UAV) and manually labeled ground truth, our goal is to train an animal detector that can be reused for repeated acquisitions, e.g., in follow-up years. Domain shifts between data sets typically prevent such a direct model application. We thus propose to bridge this gap using AL and introduce a new criterion called Transfer Sampling (TS). TS uses Optimal Transport (OT) to find corresponding regions between the source and the target data sets in the space of CNN activations. The CNN scores in the source data set are used to rank the samples according to their likelihood of being animals, and this ranking is transferred to the target data set. Unlike conventional AL criteria that exploit model uncertainty, TS focuses on very confident samples, thus allowing quick retrieval of true positives in the target data set, where positives are typically extremely rare and difficult to find by visual inspection. We extend TS with a new window cropping strategy that further accelerates sample retrieval. Our experiments show that with both strategies combined, less than half a percent of oracle-provided labels are enough to find almost 80% of the animals in challenging sets of UAV images, beating all baselines by a clear margin.
Record number: A2019-598
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2927393
Online publication date: 20/08/2019
Online: http://doi.org/10.1109/TGRS.2019.2927393
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94592
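The core of Transfer Sampling is to rank target-domain samples by confidence inherited from the source domain. A minimal sketch follows; as a stated simplification it replaces the paper's optimal-transport coupling with k-nearest-neighbour matching in feature space, and the `transfer_sampling` function and all sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def transfer_sampling(src_feats, src_scores, tgt_feats, k=3):
    """Rank target samples by the detector confidence of their nearest
    source samples in feature space. Nearest-neighbour matching stands in
    for the optimal-transport coupling used in the paper."""
    # Pairwise squared distances between target and source features.
    d2 = ((tgt_feats[:, None, :] - src_feats[None, :, :]) ** 2).sum(-1)
    # Each target sample inherits the mean score of its k closest sources.
    nearest = np.argsort(d2, axis=1)[:, :k]
    inherited = src_scores[nearest].mean(axis=1)
    # Most confident first: these are shown to the labeling oracle first.
    return np.argsort(-inherited)

# Source set: detector scores correlate with the first feature dimension.
src = rng.normal(size=(100, 8))
scores = 1.0 / (1.0 + np.exp(-src[:, 0]))
tgt = rng.normal(size=(20, 8))
order = transfer_sampling(src, scores, tgt)
```

Screening targets in `order` front-loads likely true positives, which matters when, as the abstract notes, positives are extremely rare in the target set.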
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 12 (December 2019) . - pp 9524 - 9533 [article]

A two-scale approach for estimating forest aboveground biomass with optical remote sensing images in a subtropical forest of Nepal / Upama A. Koju in Journal of Forestry Research, vol 30 n° 6 (December 2019)
[article]
Title: A two-scale approach for estimating forest aboveground biomass with optical remote sensing images in a subtropical forest of Nepal
Document type: Article/Communication
Authors: Upama A. Koju, Author; Jiahua Zhang, Author; Shashish Maharjan, Author; Sha Zhang, Author; Yun Bai, Author; Dinesh Babu Irulappa-Pillai-Vijayakumar, Author; Fengmei Yao, Author
Publication year: 2019
Projects: 3-projet - voir note / Fablet, Ronan
Pagination: pp 2119 - 2136
General note: bibliography
The work was supported by the CAS Strategic Priority Research Program (No. XDA19030402), the National Key Research and Development Program of China (No. 2016YFD0300101), the Natural Science Foundation of China (Nos. 31571565, 31671585), the Key Basic Research Project of the Shandong Natural Science Foundation of China (No. ZR2017ZB0422), and Research Funding of Qingdao University (No. 41117010153).
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] analyse d'image orientée objet
[Termes IGN] analyse multiéchelle
[Termes IGN] biomasse aérienne
[Termes IGN] biomasse forestière
[Termes IGN] Google Earth
[Termes IGN] image Geoeye
[Termes IGN] image Landsat
[Termes IGN] image optique
[Termes IGN] image Quickbird
[Termes IGN] Népal
Abstract: (author) Forests account for 80% of the total carbon exchange between the atmosphere and terrestrial ecosystems. Thus, to better manage our responses to global warming, it is important to monitor and assess forest aboveground carbon and forest aboveground biomass (FAGB). Different levels of detail are needed to estimate FAGB at local, regional and national scales. Multi-scale remote sensing analysis from high, medium and coarse spatial resolution data, along with field sampling, is one approach often used. However, the methods developed are still time consuming, expensive, and inconvenient for systematic monitoring, especially for developing countries, as they require vast numbers of field samples for upscaling. Here, we recommend a convenient two-scale approach to estimate FAGB that was tested in our study sites. The study was conducted in the Chitwan district of Nepal using GeoEye-1 (0.5 m), Landsat (30 m) and Google Earth very high resolution (GEVHR) Quickbird (0.65 m) images. For the local scale (Kayerkhola watershed), tree crowns of the area were delineated by the object-based image analysis technique on GeoEye images. An overall accuracy of 83% was obtained in the delineation of tree canopy cover (TCC) per plot. A TCC vs. FAGB model was developed based on the TCC estimations from GeoEye and FAGB measurements from field sample plots. A coefficient of determination (R2) of 0.76 was obtained in the modelling, and a value of 0.83 was obtained in the validation of the model. To upscale FAGB to the entire district, open source GEVHR images were used as virtual field plots. We delineated their TCC values and then calculated FAGB based on a TCC versus FAGB model. Using the multivariate adaptive regression splines machine learning algorithm, we developed a model from the relationship between the FAGB of GEVHR virtual plots with predictor parameters from Landsat 8 bands and vegetation indices. The model was then used to extrapolate FAGB to the entire district. This approach considerably reduced the need for field data and commercial very high resolution imagery while achieving two-scale forest information and FAGB estimates at high resolution (30 m) and accuracy (R2 = 0.76 and 0.7) with minimal error (RMSE = 64 and 38 tons ha−1) at local and regional scales. This methodology is a promising technique for cost-effective FAGB and carbon estimations and can be replicated with limited resources and time. The method is especially applicable for developing countries that have low budgets for carbon estimations, and it is also applicable to the Reducing Emissions from Deforestation and Forest Degradation (REDD+) monitoring, reporting and verification processes.
Record number: A2019-664
Authors' affiliation: LIF+Ext (2012-2019)
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11676-018-0743-1
Online publication date: 09/07/2018
Online: https://doi.org/10.1007/s11676-018-0743-1
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99699
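The two-scale idea, fit a TCC-vs-FAGB model on field plots, then use it to label cheap "virtual plots" delineated on free imagery, can be sketched with synthetic numbers. This is a toy: the data are made up, and ordinary least squares stands in for the paper's fitted relationship (their second stage uses multivariate adaptive regression splines).

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1: field plots give (tree canopy cover %, measured biomass t/ha).
tcc_field = rng.uniform(10, 90, size=40)
fagb_field = 1.8 * tcc_field + 12.0 + rng.normal(0, 5.0, size=40)

# Fit FAGB = a * TCC + b by least squares; the linear form is a
# simplification of the paper's TCC-vs-FAGB model.
A = np.column_stack([tcc_field, np.ones_like(tcc_field)])
(a, b), *_ = np.linalg.lstsq(A, fagb_field, rcond=None)

# Stage 2: canopy cover delineated on free VHR imagery ("virtual plots")
# is converted to FAGB with the stage-1 model, giving training targets
# for a coarser-resolution (e.g. Landsat-based) regressor.
tcc_virtual = rng.uniform(10, 90, size=200)
fagb_virtual = a * tcc_virtual + b
```

The virtual plots replace most field sampling: only stage 1 needs ground measurements, while stage 2 scales the estimate to the whole district.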
in Journal of Forestry Research > vol 30 n° 6 (December 2019) . - pp 2119 - 2136 [article]

Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery / Ruchan Dong in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
[article]
Title: Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery
Document type: Article/Communication
Authors: Ruchan Dong, Author; Dazhuan Xu, Author; Jin Zhao, Author; Licheng Jiao, Author; et al., Author
Publication year: 2019
Pagination: pp 8534 - 8545
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal
[Termes IGN] détection d'objet
[Termes IGN] détection de cible
[Termes IGN] image à très haute résolution
[Termes IGN] régression
[Termes IGN] zone d'intérêt
Abstract: (author) Small target detection is a challenging task in very-high-resolution (VHR) optical remote sensing imagery, because small targets occupy a minuscule number of pixels and are easily disturbed by backgrounds or occluded by other objects. Although current convolutional neural network (CNN)-based approaches perform well when detecting normal objects, they are barely suitable for detecting small ones. Two practical problems stand in their way. First, current CNN-based approaches are not specifically designed for the minuscule size of small targets (~15 or ~10 pixels in extent). Second, no well-established data sets include labeled small targets, and establishing one from scratch is labor-intensive and time-consuming. To address these two issues, we propose an approach that combines Sig-NMS-based Faster R-CNN with transfer learning. Sig-NMS replaces traditional non-maximum suppression (NMS) in the stage of region proposal network and decreases the possibility of missing small targets. Transfer learning can effectively label remote sensing images by automatically annotating both object classes and object locations. We conduct an experiment on three data sets of VHR optical remote sensing images, RSOD, LEVIR, and NWPU VHR-10, to validate our approach. The results demonstrate that the proposed approach can effectively detect small targets in the VHR optical remote sensing images of about 10 × 10 pixels and automatically label small targets as well. In addition, our method presents better mean average precisions than other state-of-the-art methods: 1.5% higher when performing on the RSOD data set, 17.8% higher on the LEVIR data set, and 3.8% higher on NWPU VHR-10.
Record number: A2019-595
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2921396
Online publication date: 15/07/2019
Online: https://doi.org/10.1109/TGRS.2019.2921396
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94587
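Traditional NMS hard-deletes any box whose overlap with a higher-scoring box exceeds a threshold, which can erase small targets outright. The sketch below shows soft suppression in the spirit of Sig-NMS: overlapping boxes have their scores decayed by a sigmoid of the IoU instead of being removed. The decay constants and the exact decay form are assumptions for illustration; the paper's formulation may differ.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def sig_nms(boxes, scores, sigma=10.0, thresh=0.5, keep_score=0.05):
    """Soft suppression: decay overlapping boxes' scores with a sigmoid of
    the IoU rather than deleting them outright."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    keep, idx = [], list(range(len(boxes)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        for i in idx:
            overlap = iou(boxes[best], boxes[i])
            scores[i] *= 1.0 / (1.0 + np.exp(sigma * (overlap - thresh)))
        idx = [i for i in idx if scores[i] > keep_score]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
kept = sig_nms(boxes, [0.9, 0.8, 0.7])  # → [0, 2, 1]
```

Note that the heavily overlapping second box survives with a reduced score instead of vanishing, which is exactly the behaviour that protects small, densely packed targets.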
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 11 (November 2019) . - pp 8534 - 8545 [article]

Optimal segmentation of high spatial resolution images for the classification of buildings using random forests / James Bialas in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
[article]
Title: Optimal segmentation of high spatial resolution images for the classification of buildings using random forests
Document type: Article/Communication
Authors: James Bialas, Author; Thomas Oommen, Author; Timothy C. Havens, Author
Publication year: 2019
Pagination: pp
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage automatique
[Termes IGN] bâtiment
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] dommage matériel
[Termes IGN] image à haute résolution
[Termes IGN] image aérienne
[Termes IGN] Nouvelle-Zélande
[Termes IGN] précision de la classification
[Termes IGN] qualité du processus
[Termes IGN] segmentation d'image
[Termes IGN] séisme
[Termes IGN] zone urbaine
Abstract: (author) In the application of machine learning to geographic object-based image analysis, several parameters influence overall classifier performance. One of the first parameters is segmentation size: for example, how many pixels should be grouped together to form an image object. Often, trial and error methods are used to obtain segmentation parameters that best delineate the borders of real world objects. Several attempts at automated methods have produced promising results, but manual intervention is still necessary. Meanwhile, numerous measures of segmentation quality have been defined, but their relationship to classifier performance is not directly established. For example, as measures of segmentation quality improve, do classification results improve as well? Our work considers the problem of building classification in high resolution aerial imagery of urban areas. Based on user defined training polygons generated with or without a reference segmentation, we have found several measures of segmentation quality and feature performance that can help users narrow the range of appropriate segmentations. Furthermore, our work finds that given this range, performance of machine learning algorithms remains relatively constant for any given segmentation as long as features used for classification are chosen correctly. We find that the range of scale parameters capable of producing an accurate classification is much broader than typically assumed and trial and error methods for finding this parameter may be an acceptable approach.
Record number: A2019-472
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.jag.2019.06.005
Online publication date: 08/06/2019
Online: https://doi.org/10.1016/j.jag.2019.06.005
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93632
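The trial-and-error scale search the abstract discusses can be illustrated on a one-dimensional toy: sweep a scale parameter over a signal with three homogeneous regions and watch how the segment count responds. The `segment_1d` function, the signal, and the swept values are invented for this sketch and are far simpler than real OBIA segmentation.

```python
import numpy as np

def segment_1d(signal, scale):
    """Toy region growing: start a new segment whenever the jump between
    neighbouring values exceeds `scale` (a stand-in for the scale
    parameter swept in object-based image analysis)."""
    labels = np.zeros(len(signal), dtype=int)
    for i in range(1, len(signal)):
        labels[i] = labels[i - 1] + (abs(signal[i] - signal[i - 1]) > scale)
    return labels

# A step "image row": three homogeneous regions with small in-region noise.
row = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 5.1, 9.0, 9.1, 8.9])

# Sweep the scale parameter, as a trial-and-error search would.
n_segments = {s: segment_1d(row, s).max() + 1 for s in (0.05, 0.5, 10.0)}
# Too small a scale over-segments (9 pieces), too large under-segments
# (1 piece); an intermediate scale recovers the 3 true regions.
```

The paper's point is that the usable intermediate range is wide, so coarse sweeps like this one are often good enough when downstream classification features are chosen well.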
in International journal of applied Earth observation and geoinformation > vol 82 (October 2019) . - pp [article]

Further records in this category:
Saliency-guided deep neural networks for SAR image change detection / Jie Geng in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
Scene context-driven vehicle detection in high-resolution aerial images / Chao Tao in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
Mapping of forest tree distribution and estimation of forest biodiversity using Sentinel-2 imagery in the University Research Forest Taxiarchis in Chalkidiki, Greece / Maria Kampouri in Geocarto international, vol 34 n° 12 ([15/09/2019])
Partial linear NMF-based unmixing methods for detection and area estimation of photovoltaic panels in urban hyperspectral remote sensing data / Moussa Sofiane Karoui in Remote sensing, vol 11 n° 18 (September 2019)
Delineation of vacant building land using orthophoto and lidar data object classification / Dejan Jenko in Geodetski vestnik, vol 63 n° 3 (September - November 2019)
Detecting and mapping traffic signs from Google Street View images using deep learning and GIS / Andrew Campbell in Computers, Environment and Urban Systems, vol 77 (September 2019)
Development and evaluation of a deep learning model for real-time ground vehicle semantic segmentation from UAV-based thermal infrared imagery / Mehdi Khoshboresh Masouleh in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
Exploring the synergy between Landsat and ASAR towards improving thematic mapping accuracy of optical EO data / Alexander Cass in Applied geomatics, vol 11 n° 3 (September 2019)
Integration of LiDAR and multispectral images for rapid exposure and earthquake vulnerability estimation. Application in Lorca, Spain / Yolanda Torres in International journal of applied Earth observation and geoinformation, vol 81 (September 2019)
Validating the use of object-based image analysis to map commonly recognized landform features in the United States / Samantha T. Arundel in Cartography and Geographic Information Science, Vol 46 n° 5 (September 2019)