Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > analyse d'image orientée objet
Documents disponibles dans cette catégorie (445)
Titre : Applications of remote sensing in coastal areas Type de document : Monographie Auteurs : Konstantinos Topouzelis, Éditeur scientifique ; Apostolos Papakonstantinou, Éditeur scientifique ; Siman Singha, Éditeur scientifique ; et al., Auteur Editeur : Bâle [Suisse] : Multidisciplinary Digital Publishing Institute MDPI Année de publication : 2020 Importance : 288 p. Format : 16 x 23 cm ISBN/ISSN/EAN : 978-3-03928-659-1 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] analyse d'image orientée objet
[Termes IGN] classification orientée objet
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] classification pixellaire
[Termes IGN] détection de contours
[Termes IGN] données lidar
[Termes IGN] érosion côtière
[Termes IGN] falaise
[Termes IGN] habitat (nature)
[Termes IGN] herbier marin
[Termes IGN] image PlanetScope
[Termes IGN] modèle numérique de surface
[Termes IGN] surveillance du littoral
Résumé : (éditeur) Coastal areas are remarkable regions with high spatiotemporal variability. A large population is affected by their physical and biological processes, with effects ranging from tourism to biodiversity and productivity. Coastal ecosystems perform several critical ecosystem services and functions, such as water oxygenation and nutrient provision, seafloor and beach stabilization (sediment being controlled and trapped within the rhizomes of seagrass meadows), carbon burial, and service as nursery areas and refuges for several commercial and endemic species. Knowledge of the spatial distribution of marine habitats is prerequisite information for the conservation and sustainable use of marine resources. Remote sensing, from UAVs to spaceborne sensors, offers a unique opportunity to measure, analyze, quantify, map, and explore processes in coastal areas at high temporal frequencies. This Special Issue on "Applications of Remote Sensing in Coastal Areas" specifically addresses successful applications, from local to regional scale, in coastal environments related to ecosystem productivity, biodiversity, and sea level rise. Note de contenu : 1- Monitoring cliff erosion with LiDAR surveys and Bayesian network-based data analysis
2- Cubesats allow high spatiotemporal estimates of satellite-derived bathymetry
3- Comparison of pixel- and object-based classification methods of unmanned aerial vehicle data applied to coastal dune vegetation communities: Casal Borsetti case study
4- Capturing coastal dune natural vegetation types using a phenology-based mapping approach: The potential of Sentinel-2
5- Sub-pixel waterline extraction: Characterising accuracy and sensitivity to indices and spectra
6- Satellite observations of wind wake and associated oceanic thermal responses: A case study of Hainan Island wind wake
7- Comparison of true-color and multispectral unmanned aerial systems imagery for marine habitat mapping using object-based image analysis
8- Spatial and temporal variability of open-ocean barrier islands along the Indus Delta region
9- Characterizing and monitoring ground settlement of marine reclamation land of Xiamen New Airport, China with Sentinel-1 SAR datasets
10- Deriving high spatial-resolution coastal topography from sub-meter satellite stereo imagery
11- Photon-counting Lidar: An adaptive signal detection method for different land cover types in coastal area
12- Automatic semi-global artificial shoreline subpixel localization algorithm for Landsat imagery
13- Analysis of ship detection performance with full-, compact- and dual-polarimetric SAR
14- Sea ice extent detection in the Bohai Sea using Sentinel-3 OLCI data
Numéro de notice : 28689 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Recueil / ouvrage collectif DOI : 10.3390/books978-3-03928-659-1 En ligne : https://doi.org/10.3390/books978-3-03928-659-1 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100128
Cattle detection and counting in UAV images based on convolutional neural networks / Wen Shao in International Journal of Remote Sensing IJRS, vol 41 n° 1 (01 - 08 janvier 2020)
[article]
Titre : Cattle detection and counting in UAV images based on convolutional neural networks Type de document : Article/Communication Auteurs : Wen Shao, Auteur ; Rei Kawakami, Auteur ; Ryota Yoshihashi, Auteur ; et al., Auteur Année de publication : 2020 Article en page(s) : pp 31 - 52 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] bovin
[Termes IGN] chevauchement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] comptage
[Termes IGN] détection d'objet
[Termes IGN] image captée par drone
[Termes IGN] modélisation 3D
Résumé : (auteur) For assistance with grazing cattle management, we propose a cattle detection and counting system based on Convolutional Neural Networks (CNNs) using aerial images taken by an Unmanned Aerial Vehicle (UAV). To improve detection performance, we take advantage of the fact that, with UAV images, the approximate size of the objects can be predicted when the UAV's height above the ground can be assumed to be roughly constant. We resize the image fed into the CNN to an optimum resolution determined by the object size and the down-sampling rate of the network, both in training and testing. To avoid repeated counting in images that have large overlaps with adjacent ones, and to obtain an accurate count of cattle over an entire area, we utilize a three-dimensional model reconstructed from the UAV images to merge detections of the same target. Experiments show that detection performance is greatly improved when using the optimum input resolution, with an F-measure of 0.952, and that counting results are close to the ground truth when the cattle are approximately stationary relative to the UAV's movement. Numéro de notice : A2020-209 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/01431161.2019.1624858 Date de publication en ligne : 11/06/2019 En ligne : https://doi.org/10.1080/01431161.2019.1624858 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=94891
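The optimum-resolution idea summarized above can be sketched numerically: with a roughly constant flight height, the ground sampling distance fixes how many pixels a cow spans, and the image can be rescaled so that objects match the size the detector was trained for. A minimal sketch follows; the camera parameters, the 64 px target size, and the helper names are hypothetical illustrations, not values from the article.

```python
def ground_sampling_distance(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground sampling distance (m/pixel) for a nadir-pointing camera."""
    return (altitude_m * sensor_width_mm) / (focal_mm * image_width_px)

def optimal_resize_factor(object_size_m, gsd_m, target_object_px):
    """Scale factor so that objects span roughly target_object_px pixels,
    matching the down-sampling rate of the detection network."""
    current_px = object_size_m / gsd_m   # object size in pixels at native resolution
    return target_object_px / current_px

# Hypothetical UAV survey: 50 m altitude, small-sensor camera, ~2.5 m cattle.
gsd = ground_sampling_distance(altitude_m=50.0, focal_mm=8.8,
                               sensor_width_mm=13.2, image_width_px=5472)
scale = optimal_resize_factor(object_size_m=2.5, gsd_m=gsd, target_object_px=64)
```

The same factor would be applied when resizing both training and test images, mirroring the paper's train/test consistency requirement.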
in International Journal of Remote Sensing IJRS > vol 41 n° 1 (01 - 08 janvier 2020) . - pp 31 - 52 [article]
Classification of poplar trees with object-based ensemble learning algorithms using Sentinel-2A imagery / H. Tombul in Journal of geodetic science, vol 10 n° 1 (January 2020)
[article]
Titre : Classification of poplar trees with object-based ensemble learning algorithms using Sentinel-2A imagery Type de document : Article/Communication Auteurs : H. Tombul, Auteur ; Ismail Colkesen, Auteur ; Taskin Kavzoglu, Auteur Année de publication : 2020 Article en page(s) : pp 14 - 22 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme d'apprentissage
[Termes IGN] analyse canonique
[Termes IGN] analyse comparative
[Termes IGN] bande spectrale
[Termes IGN] boosting adapté
[Termes IGN] carte de la végétation
[Termes IGN] carte thématique
[Termes IGN] classification orientée objet
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] image Sentinel-MSI
[Termes IGN] jeu de données
[Termes IGN] Populus (genre)
[Termes IGN] précision de la classification
[Termes IGN] Rotation Forest classification
[Termes IGN] segmentation multi-échelle
[Termes IGN] Turquie
Résumé : (auteur) Poplar species in forest ecosystems are among the most valuable and beneficial species for society and the environment. Conventional methods require high cost, time and labor, and the results obtained vary and are insufficient in terms of achieved accuracy. Determination of poplar cultivated fields and mapping of their spatial sites play a vital role for decision-makers and planners in enhancing the economic and ecological value of poplar trees. The study aims to map poplar (P. deltoides) cultivated areas in the Akyazi district of Sakarya province, Turkey, using various combinations of Sentinel-2A image bands. For this purpose, object-based classification based on a multi-resolution segmentation algorithm was utilized to produce image objects, and ensemble learning algorithms, namely AdaBoost (AdaB), Random Forest (RF), Rotation Forest (RotFor) and Canonical Correlation Forest (CCF), were applied to produce thematic maps. In order to analyze the effects of the spectral bands of the Sentinel-2A image on object-based classification performance, three datasets consisting of different spectral band combinations (i.e. four 10 m bands, six 20 m bands and ten 10 m pan-sharpened bands) were used. The results showed that the RotFor and CCF classifiers produced superior classification performance compared to the AdaB and RF classifiers for the band combinations considered in this study. Moreover, the poplar tree class-level accuracy reached ~94% in terms of F-score. It was also observed that the inclusion of the six spectral bands at 20 m resolution resulted in a noteworthy increase in classification accuracy (up to 6%) compared to the 10 m band combination alone.
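The object-based ensemble workflow described in this abstract can be sketched with scikit-learn, assuming per-segment spectral statistics have already been produced by a multi-resolution segmentation step. Rotation Forest and CCF have no scikit-learn implementation, so only RF and AdaBoost stand-ins are shown, on synthetic data; all names and numbers below are illustrative, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic per-segment features: mean reflectance in 6 spectral bands,
# a stand-in for object statistics from multi-resolution segmentation.
poplar = rng.normal(0.6, 0.05, size=(200, 6))   # hypothetical "poplar" segments
other = rng.normal(0.3, 0.05, size=(200, 6))    # hypothetical "other" segments
X = np.vstack([poplar, other])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Two of the four ensembles compared in the paper (AdaB and RF).
for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("AdaB", AdaBoostClassifier(n_estimators=50, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```

Repeating the fit for each band combination (10 m, 20 m, pan-sharpened) would reproduce the dataset comparison described above.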
Numéro de notice : A2020-420 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1515/jogs-2020-0003 Date de publication en ligne : 04/05/2020 En ligne : https://doi.org/10.1515/jogs-2020-0003 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=95477
in Journal of geodetic science > vol 10 n° 1 (January 2020) . - pp 14 - 22 [article]
Context-aware convolutional neural network for object detection in VHR remote sensing imagery / Yiping Gong in IEEE Transactions on geoscience and remote sensing, vol 58 n° 1 (January 2020)
[article]
Titre : Context-aware convolutional neural network for object detection in VHR remote sensing imagery Type de document : Article/Communication Auteurs : Yiping Gong, Auteur ; Zhifeng Xiao, Auteur ; Xiaowei Tan, Auteur Année de publication : 2020 Article en page(s) : pp 34 - 44 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] description multiniveau
[Termes IGN] détection d'objet
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à très haute résolution
[Termes IGN] prise en compte du contexte
[Termes IGN] vision par ordinateur
[Termes IGN] zone d'intérêt
Résumé : (auteur) Object detection in very-high-resolution (VHR) remote sensing imagery remains a challenge. Environmental factors, such as illumination intensity and weather, reduce image quality, resulting in poor feature representation and limited detection accuracy. To enrich the feature representation and mine the underlying context information among objects, this article proposes a context-aware convolutional neural network (CA-CNN) model for object detection that includes proposal generation, context feature extraction, feature fusion, and classification. During feature extraction, we propose integrating a context-regions-of-interest (Context-RoIs) mining layer into the CNN model and extracting context features by mapping Context-RoIs mined from the foreground proposals to multilevel feature maps. Finally, the context features extracted from multilevel layers are fused into a single layer, and the proposals represented by the fused features are classified by a softmax classifier. In this article, through numerous experiments, we thoroughly explore the influence of key factors, such as Context-RoIs, different feature scales, and different spatial context window sizes. Because of the end-to-end network design approach, our proposed model simultaneously maintains high efficiency and effectiveness. We conducted all model testing on the public NWPU VHR-10 data set. The experimental results demonstrate that our proposed CA-CNN model achieves significantly improved model performance and better detection results compared with the state-of-the-art methods.
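The Context-RoI idea above can be illustrated geometrically: each foreground proposal is enlarged around its centre to capture surrounding context before features are pooled from it. The scale factor and function name below are hypothetical sketches; the article's actual mining layer operates on multilevel feature maps, not raw image coordinates.

```python
def context_roi(box, scale, img_w, img_h):
    """Expand an RoI (x1, y1, x2, y2) by `scale` around its centre and
    clip to the image bounds, yielding a context region for feature pooling."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0          # proposal centre
    w, h = (x2 - x1) * scale, (y2 - y1) * scale        # enlarged extent
    return (max(0.0, cx - w / 2), max(0.0, cy - h / 2),
            min(float(img_w), cx + w / 2), min(float(img_h), cy + h / 2))

# A 20x20 proposal doubled in size inside a 100x100 image:
print(context_roi((40, 40, 60, 60), scale=2.0, img_w=100, img_h=100))
```

Varying `scale` corresponds to the different spatial context window sizes the article investigates.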
Numéro de notice : A2020-038 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2019.2930246 Date de publication en ligne : 23/09/2019 En ligne : http://doi.org/10.1109/TGRS.2019.2930246 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=94492
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 1 (January 2020) . - pp 34 - 44 [article]
Titre : Convolutional Neural Networks for embedded vision Titre original : Réseaux de neurones CNN pour la vision embarquée Type de document : Thèse/HDR Auteurs : Lucas Fernandez Brillet, Auteur ; Stéphane Mancini, Directeur de thèse Editeur : Grenoble [France] : Université Grenoble Alpes Année de publication : 2020 Importance : 164 p. Format : 21 x 30 cm Note générale : bibliographie
Thèse pour obtenir le grade de Docteur de l'Université Grenoble Alpes, Spécialité : Mathématiques, sciences et technologies de l'information, informatique
Langues : Français (fre) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse en composantes principales
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] compression d'image
[Termes IGN] détection d'objet
[Termes IGN] image à haute résolution
[Termes IGN] instrument embarqué
[Termes IGN] vision par ordinateur
[Termes IGN] zone d'intérêt
Index. décimale : THESE Thèses et HDR Résumé : (auteur) Recently, Convolutional Neural Networks have become the state-of-the-art (SOA) solution to most computer vision problems. In order to achieve high accuracy rates, CNNs require a high parameter count as well as a high number of operations. This greatly complicates the deployment of such solutions in embedded systems, which strive to reduce memory size. Indeed, while most embedded systems typically offer only a few KBytes of memory, CNN models from the SOA usually account for multiple MBytes, or even GBytes, in model size. Throughout this thesis, multiple novel ideas easing this issue are proposed. This requires jointly designing the solution across three main axes: Application, Algorithm and Hardware. In this manuscript, the main levers for tailoring the computational complexity of a generic CNN-based object detector are identified and studied. Since object detection requires scanning every possible location and scale across an image through a fixed-input CNN classifier, the number of operations quickly grows for high-resolution images. In order to perform object detection efficiently, the detection process is divided into two stages. The first stage involves a region proposal network which allows trading off recall against the number of operations required to perform the search, as well as the number of regions passed on to the next stage. Techniques such as bounding box regression also greatly help reduce the dimension of the search space. This in turn simplifies the second stage, since it reduces the task's complexity to the set of possible proposals. Therefore, parameter counts can be greatly reduced. Furthermore, CNNs also exhibit properties that confirm their over-dimensioning. This over-dimensioning is one of the key success factors of CNNs in practice, since it eases the optimization process by allowing a large set of equivalent solutions.
However, this also greatly increases computational complexity, and therefore complicates deploying the inference stage of these algorithms on embedded systems. In order to ease this problem, we propose a CNN compression method based on Principal Component Analysis (PCA). PCA allows finding, for each layer of the network independently, a new representation of the set of learned filters by expressing them in a more appropriate PCA basis. This PCA basis is hierarchical, meaning that basis terms are ordered by importance, and by removing the least important basis terms, it is possible to optimally trade off approximation error against parameter count. Through this method, it is possible to compress, for example, a ResNet-32 network by a factor of ×2 both in the number of parameters and operations with a loss of accuracy Note de contenu : Introduction
1- Deep learning overview
2- Methodology to adapt the computational complexity of CNN-based object detection for efficient inference in an applicative use-case
3- CNN compression
4- Cascaded and compressed CNNs for fast and lightweight face detection
5- Hardware evaluation on embedded multiprocessor
Thesis Conclusion & Perspectives
Numéro de notice : 28392 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Thèse française Note de thèse : Thèse de Doctorat : Mathématiques, sciences et technologies de l'information, informatique : Grenoble : 2020 DOI : sans En ligne : https://tel.archives-ouvertes.fr/tel-03101523/document Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98739
Détection et vectorisation automatique d'objets linéaires dans des nuages de points de voirie / Etienne Barçon (2020)
Identification of alpine glaciers in the central Himalayas using fully polarimetric L-Band SAR data / Guo-Hui Yao in IEEE Transactions on geoscience and remote sensing, vol 58 n° 1 (January 2020)
Image processing applications in object detection and graph matching: from Matlab development to GPU framework / Beibei Cui (2020)
Recherche multimodale d'images aériennes multi-date à l'aide d'un réseau siamois / Margarita Khokhlova (2020)
Reconnaissance automatique d'objets pour le jumeau numérique ferroviaire à partir d'imagerie aérienne / Valentin Desbiolles (2020)
Satellite image time series classification with pixel-set encoders and temporal self-attention / Vivien Sainte Fare Garnot (2020)
Simulation d'éclairements des surfaces ombrées en zone urbaine par transfert radiatif 3D (modèle DART) / Yulu Xi (2020)
SUMAC'20 : Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia heritAge Contents / Valérie Gouet-Brunet (2020)
Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning / Clément Dechesne in Remote sensing, vol 11 n° 24 (December-2 2019)
Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
A two-scale approach for estimating forest aboveground biomass with optical remote sensing images in a subtropical forest of Nepal / Upama A. Koju in Journal of Forestry Research, vol 30 n° 6 (December 2019)
Sig-NMS-based faster R-CNN combining transfer learning for small target detection in VHR optical remote sensing imagery / Ruchan Dong in IEEE Transactions on geoscience and remote sensing, vol 57 n° 11 (November 2019)
Optimal segmentation of high spatial resolution images for the classification of buildings using random forests / James Bialas in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
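The PCA-based filter compression proposed in the thesis notice above (notice 28392) can be sketched with NumPy: each layer's filters are re-expressed in a truncated principal-component basis, trading approximation error for parameter count. The layer shape and the choice of k below are hypothetical, not the thesis's ResNet-32 setting.

```python
import numpy as np

def pca_compress_filters(W, k):
    """Approximate a layer's flattened filters W (n_filters, filter_dim)
    with the top-k principal components: W ≈ mean + coeffs @ basis."""
    mean = W.mean(axis=0)
    Wc = W - mean
    # SVD of the centered filter matrix; rows of Vt are the PCA basis,
    # ordered by importance (singular value).
    U, S, Vt = np.linalg.svd(Wc, full_matrices=False)
    basis = Vt[:k]            # (k, filter_dim): basis shared by the layer
    coeffs = Wc @ basis.T     # (n_filters, k): per-filter codes
    W_hat = mean + coeffs @ basis
    return W_hat, coeffs, basis

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 3 * 3 * 32))   # hypothetical 3x3x32 conv layer, 64 filters
W_hat, coeffs, basis = pca_compress_filters(W, k=32)
# Stored parameters drop from 64*288 to 64*32 + 32*288 + 288.
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Because the basis is hierarchical, keeping more terms monotonically lowers the reconstruction error, which is the error/parameter-count trade-off the abstract describes.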