Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > extraction de traits caractéristiques
Synonyme(s) : extraction des caractéristiques ; extraction de primitive
Documents disponibles dans cette catégorie (833)
Attributs de texture extraits d'images multispectrales acquises en conditions d'éclairage non contrôlées : application à l'agriculture de précision / Anis Amziane (2022)
Titre : Attributs de texture extraits d'images multispectrales acquises en conditions d'éclairage non contrôlées : application à l'agriculture de précision Type de document : Thèse/HDR Auteurs : Anis Amziane, Auteur ; Ludovic Macaire, Directeur de thèse Editeur : Lille : Université de Lille Année de publication : 2022 Importance : 214 p. Format : 21 x 30 cm Note générale : Bibliographie
Thèse pour obtenir le grade de Docteur de l'Université de Lille, spécialité Automatique, Génie Informatique, Traitement du Signal et des Images
Langues : Français (fre) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] agriculture de précision
[Termes IGN] bande spectrale
[Termes IGN] classification dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection automatique
[Termes IGN] éclairage
[Termes IGN] exitance spectrale
[Termes IGN] extraction de la végétation
[Termes IGN] rayonnement proche infrarouge
[Termes IGN] reconnaissance d'objets
[Termes IGN] réflectance végétale
[Termes IGN] signature spectrale
Index. décimale : THESE Thèses et HDR Résumé : (auteur) The main objective of this work is to develop an automatic recognition system for crop and weed plants in field conditions. In Chapter 2 we describe the formation of multispectral radiance images under the Lambertian surface assumption and the different devices that can be used to acquire such images. We then provide a detailed description of the multispectral camera used in this study. Because radiance multispectral images are acquired under varying illumination, we propose an original multispectral image formation model that takes the variation of illumination conditions into account. In Chapter 3, we estimate the reflectance as an illumination-invariant spectral signature. First, we present state-of-the-art methods that can be used to estimate the reflectance from multispectral images. We then introduce the reference state-of-the-art method for reflectance estimation and describe our proposed method for reflectance estimation under varying illumination. Chapter 4 focuses on the assessment of the estimated reflectance. The quality of the reflectance estimated by our method is evaluated against state-of-the-art methods, and its contribution to supervised crop/weed recognition is demonstrated. Chapter 5 addresses the dimension reduction issue. The acquired multispectral images are composed of a high number of spectral channels, whose analysis is memory- and time-consuming. Moreover, the spectral bands associated with these channels may be redundant or contain highly correlated spectral information. Therefore, we select the best spectral bands for crop/weed classification and use them to specify a camera suited for crop/weed recognition. Chapter 6 deals with the problem of spatio-spectral feature extraction from multispectral images. We propose an approach that extracts both spatial and spectral information at reduced computation cost based on a CNN.
Its contribution to crop/weed recognition is demonstrated. Note de contenu : 1- Introduction
2- Multispectral imaging
3- Reflectance estimation
4- Reflectance estimation assessment
5- Dimension reduction
6- Raw texture features for crop/weed recognition
Conclusion
Numéro de notice : 24102 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse française Organisme de stage : Laboratoire Cristal (Lille) DOI : sans En ligne : https://www.theses.fr/2022ULILB020 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102577
Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)
Titre : Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery Type de document : Thèse/HDR Auteurs : Calimanut-Ionut Cira, Auteur Editeur : Madrid [Espagne] : Universidad politécnica de Madrid Année de publication : 2022 Importance : 227 p. Format : 21 x 30 cm Note générale : bibliographie
Thèse de Doctorat en Topographie, Géodésie et Cartographie, Universidad politécnica de Madrid
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction du réseau routier
[Termes IGN] image aérienne
[Termes IGN] orthoimage
[Termes IGN] réseau antagoniste génératif
[Termes IGN] réseau neuronal artificiel
[Termes IGN] route
[Termes IGN] segmentation sémantique
Index. décimale : THESE Thèses et HDR Résumé : (auteur) Remote sensing imagery combined with deep learning strategies is often regarded as an ideal solution for interpreting scenes and monitoring infrastructures with remarkable performance levels. Remote sensing experts have been actively using deep neural networks to solve object extraction tasks in high-resolution aerial imagery by means of supervised operations. However, the extraction operation is imperfect, due to the nature of remotely sensed images (noise, obstructions, etc.), the limitations of sensing resolution, or the occlusions often present in the scenes. The road network plays an important part in transportation and, nowadays, one of the main related challenges is keeping the existing cartographic support up to date. This task can be considered very challenging due to the complex nature of the geospatial object (continuous, with irregular geometry, and significant differences in width). We also need to take into account that secondary roads represent the largest part of the road transport network, but due to the absence of clearly defined edges and the different spectral signatures of the materials used for pavement, monitoring and mapping them represents a great effort for public administrations, and their extraction is often omitted altogether. We believe that recent advancements in machine vision can enable a successful extraction of the road structures from high-resolution, remotely sensed imagery and a greater automation of the road mapping operation. In this PhD thesis, we leverage recent computer vision advances and propose a deep learning-based end-to-end solution, capable of efficiently extracting the surface area of roads at a large scale.
The novel approach is based on a disjoint execution of three different image processing operations (recognition, semantic segmentation, and post-processing with conditional generative learning) within a common framework. We focused on improving the state-of-the-art results for each of the mentioned components before incorporating the resulting models into the proposed solution architecture. For the recognition operation, we proposed two framework candidates based on convolutional neural networks to classify roads in openly available aerial orthoimages divided into tiles of 256×256 pixels, with a spatial resolution of 0.5 m. The frameworks are based on ensemble learning and transfer learning and combine weak classifiers to leverage the strengths of different state-of-the-art models that we heavily modified for computational efficiency. We evaluated their performance on unseen test data and compared the results with those obtained by state-of-the-art convolutional neural networks trained for the same task, observing improvements in performance metrics of 2-3%. Secondly, we implemented hybrid semantic segmentation models (where the default backbones are replaced by neural networks specialised in image segmentation) and trained them with high-resolution remote sensing imagery and the corresponding ground-truth masks. Our models achieved mean increases in performance metrics of 2.7-3.5% when compared to the original state-of-the-art semantic segmentation architectures trained from scratch for the same task. The best-performing model was integrated into a web platform that handles the evaluation of large areas, the association of the semantic predictions with geographical coordinates, the conversion of the tiles’ format, and the generation of GeoTIFF results (compatible with geospatial databases).
Thirdly, the road surface area extraction task is generally carried out via semantic segmentation over remotely sensed imagery. However, this supervised learning task can be considered very costly because it requires remote sensing images labelled at pixel level, and the results are not always satisfactory (presence of discontinuities, overlooked connection points, or isolated road segments). We consider that unsupervised learning (not requiring labelled data) can be employed for post-processing the geometries of geospatial objects extracted via semantic segmentation. For this reason, we also approached the post-processing of the road surface areas obtained with the best-performing segmentation model to improve the initial segmentation predictions. Along these lines, we proposed two post-processing operations based on conditional generative learning for deep inpainting and image-to-image translation, and trained the networks to learn the distribution of the road network present in official cartography, using a novel dataset covering representative areas of Spain. The first proposed conditional Generative Adversarial Network (cGAN) model was trained for the deep inpainting operation and obtained improvements in performance metrics of up to 1.3%. The second cGAN model, trained for image-to-image translation, is based on a popular model heavily modified for computational efficiency (a 92.4% decrease in the number of parameters in the generator network and a 61.3% decrease in the discriminator network), and achieved a maximum increase of 11.6% in performance metrics. We also conducted a qualitative comparison to visually assess the effectiveness of the generative operations and observed great improvements with respect to the initial semantic segmentation predictions.
Lastly, we proposed an end-to-end processing strategy that combines image classification, semantic segmentation, and post-processing operations to extract the road surface area from high-resolution aerial orthophotography. The training of the model components was carried out on a large-scale dataset containing more than 537,500 tiles, covering approximately 20,800 km² of the Spanish territory, manually tagged at pixel level. The consecutive execution of the resulting deep learning models delivered higher-quality results when compared to state-of-the-art implementations trained for the same task. The versatility and flexibility given by the disjoint execution of the three separate sub-operations proved the solution's effectiveness and economic efficiency, and enabled the integration of a web application that alleviates the manipulation of geospatial data while allowing for an easy integration of future models and algorithms. In summary, applying the models resulting from this PhD thesis translates to operations that check whether the latest available aerial orthoimages contain the studied continuous geospatial element, obtain an approximation of its surface area using supervised learning, and improve the initial segmentation results with post-processing methods based on conditional generative learning. The results obtained with the proposed end-to-end solution presented in this PhD thesis improve the state of the art in the field of road extraction with deep learning techniques and prove the appropriateness of the proposed extraction workflow for a more robust and more efficient extraction of the road transport network.
We strongly believe that the processing strategy can be applied to enhance other similar extraction tasks of continuous geospatial elements (such as the mapping of riverbeds, or railroads), or serve as a base for developing additional extraction workflows of geospatial objects from remote sensing images. Note de contenu : 1- Introduction
2- Methodology
3- Theoretical framework
4- Literature review
5- Road recognition: A framework based on ensembles of convolutional neural networks and transfer learning to recognise road elements
6- Road segmentation: An approach based on hybrid semantic segmentation models to extract the surface area of road elements from aerial orthoimagery
7- Post-processing of semantic segmentation predictions I: A conditional generative adversarial network to improve the extraction of road surface areas via deep inpainting operations
8- Post-processing of semantic segmentation predictions II: A lightweight conditional generative adversarial network to improve the extraction of road surface areas via image-to-image translation
9- An end-to-end road extraction solution based on recognition, segmentation, and post-processing operations for a large-scale mapping of the road transport network from aerial orthophotography
10- Conclusions
Numéro de notice : 24069 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse étrangère Note de thèse : Thèse de Doctorat : Topographie, Géodésie et cartographie : Universidad politécnica de Madrid : 2022 DOI : 10.20868/UPM.thesis.70152 En ligne : https://doi.org/10.20868/UPM.thesis.70152 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102113
Deep image translation with an affinity-based change prior for unsupervised multimodal change detection / Luigi Tommaso Luppino in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
[article]
Titre : Deep image translation with an affinity-based change prior for unsupervised multimodal change detection Type de document : Article/Communication Auteurs : Luigi Tommaso Luppino, Auteur ; Michael Kampffmeyer, Auteur ; Filippo Maria Bianchi, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 4700422 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] analyse comparative
[Termes IGN] architecture de réseau
[Termes IGN] classification non dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de changement
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] réseau antagoniste génératif
Résumé : (auteur) Image translation with convolutional neural networks has recently been used as an approach to multimodal change detection. Existing approaches train the networks by exploiting supervised information of the change areas, which, however, is not always available. A main challenge in the unsupervised problem setting is to prevent change pixels from affecting the learning of the translation function. We propose two new network architectures trained with loss functions weighted by priors that reduce the impact of change pixels on the learning objective. The change prior is derived in an unsupervised fashion from relational pixel information captured by domain-specific affinity matrices. Specifically, we use the vertex degrees associated with an absolute affinity difference matrix and demonstrate their utility in combination with cycle consistency and adversarial training. The proposed neural networks are compared with state-of-the-art algorithms. Experiments conducted on three real data sets show the effectiveness of our methodology. Numéro de notice : A2022-027 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2021.3056196 Date de publication en ligne : 17/02/2021 En ligne : https://doi.org/10.1109/TGRS.2021.3056196 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99263
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 1 (January 2022) . - n° 4700422 [article]
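The unsupervised change prior described in the abstract above (vertex degrees of an absolute affinity difference matrix) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the Gaussian kernel, its bandwidth, and the min-max normalisation are assumptions made here for the example.

```python
import numpy as np

def change_prior(x, y, kernel_width=1.0):
    """Per-pixel change prior from two modalities of the same scene.

    x, y: (N, Dx) and (N, Dy) feature vectors for the same N pixel
    locations in the two modalities. Builds one pixel-affinity matrix
    per modality, takes the absolute difference, and uses its vertex
    degrees (row sums) as a prior in [0, 1]; high values flag pixels
    whose relational structure differs between modalities.
    """
    def affinity(z):
        # Gaussian affinity over pairwise squared distances (assumed kernel)
        d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * kernel_width ** 2))

    diff = np.abs(affinity(x) - affinity(y))  # absolute affinity difference
    deg = diff.sum(axis=1)                    # vertex degrees
    return (deg - deg.min()) / (deg.max() - deg.min() + 1e-12)
```

On a toy scene where one pixel changes between modalities, its degree dominates the row sums and the prior peaks at that pixel, which is exactly the weighting the loss functions in the article exploit.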
Titre : Deep learning architectures for onboard satellite image analysis Type de document : Thèse/HDR Auteurs : Gaétan Bahl, Auteur ; Florent Lafarge, Directeur de thèse Editeur : Nice : Université Côte d'Azur Année de publication : 2022 Importance : 120 p. Format : 21 x 30 cm Note générale : Bibliographie
Thèse de Doctorat de l'Université Côte d'Azur, Spécialité Informatique
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage profond
[Termes IGN] contour
[Termes IGN] détection d'objet
[Termes IGN] extraction du réseau routier
[Termes IGN] forêt
[Termes IGN] image satellite
[Termes IGN] nuage
[Termes IGN] régression
[Termes IGN] réseau neuronal convolutif
[Termes IGN] réseau neuronal de graphes
[Termes IGN] réseau neuronal récurrent
[Termes IGN] segmentation sémantique
Index. décimale : THESE Thèses et HDR Résumé : (auteur) Advances in high-resolution Earth-observation satellites and the shorter revisit times brought by satellite constellations have led to the daily production of large volumes of imagery (hundreds of terabytes per day). At the same time, the popularisation of deep learning techniques has enabled the development of architectures capable of extracting the semantic content of images. Although these algorithms usually require powerful hardware, low-power AI inference accelerators have recently been developed and could be used in the next generations of satellites, opening up the possibility of on-board analysis of satellite imagery. By extracting the relevant information from satellite images directly on board, bandwidth, storage and memory usage can be reduced considerably. Current and future applications, such as disaster response, precision agriculture and climate monitoring, would benefit from lower processing latency, or even real-time alerts. The goal of this thesis is twofold: on the one hand, we design efficient deep learning architectures able to run on low-power devices such as satellites or drones while retaining sufficient accuracy; on the other, we design our algorithms keeping in mind the importance of a compact output that can be efficiently computed, stored, and transmitted to the ground or to other satellites in a constellation.
First, using depthwise separable convolutions and convolutional recurrent neural networks, we design efficient semantic segmentation networks with a low number of parameters and low memory usage. We apply these architectures to cloud and forest segmentation in satellite images. We also design a specific architecture for cloud segmentation on the FPGA of OPS-SAT, a satellite launched by ESA in 2019, and carry out remote on-board experiments. Second, we develop an instance segmentation architecture for the regression of smooth contours based on a Fourier-coefficient representation, which allows the shapes of detected objects to be stored and transmitted efficiently. We evaluate the performance of our method on a variety of low-power computing devices. Finally, we propose a road graph extraction architecture based on a combination of fully convolutional networks and graph neural networks. We show that our method is significantly faster than competing methods while retaining good accuracy. Note de contenu : 1. Introduction
1.1 Context and motivation
1.2 Methods and Challenges
1.3 Contributions and outline
2. On-board image segmentation with compact networks
2.1 Introduction
2.2 Related works
2.3 Proposed architectures
2.4 Experiments on cloud segmentation
2.5 Experiments on forest segmentation
2.6 Conclusion
3. Recurrent convolutional networks for semantic segmentation
3.1 Introduction
3.2 Method
3.3 Experiments
3.4 Conclusion and future works
4. Regression of compact object contours
4.1 Introduction
4.2 Related Work
4.3 Method
4.4 Experiments
4.5 Conclusion
5. Road graph extraction
5.1 Introduction
5.2 Related Works
5.3 Method
5.4 Experiments
5.5 Limitations
5.6 Other uses of our method
5.7 Conclusion
6. Conclusion and Perspectives
6.1 Summary
6.2 Limitations and perspectives
6.3 Publications
6.4 Carbon Impact Statement
Numéro de notice : 26912 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse française Note de thèse : Thèse de Doctorat : Informatique : Côte d'Azur : 2022 Organisme de stage : Inria Sophia-Antipolis Méditerranée nature-HAL : Thèse DOI : sans Date de publication en ligne : 27/09/2022 En ligne : https://tel.hal.science/tel-03789667v2 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101955
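The compact Fourier-coefficient contour representation mentioned in the Bahl thesis abstract can be illustrated with a minimal encoder/decoder: a closed boundary is treated as a complex signal z = x + iy, compressed to its lowest-frequency DFT coefficients, and decoded by inverse DFT. The truncation scheme below is an assumption for illustration; the parameterisation actually used in the thesis may differ.

```python
import numpy as np

def encode_contour(points, k):
    """Compress a closed contour of shape (N, 2) to 2k+1 complex
    Fourier coefficients (the mean plus the k lowest positive and
    negative frequencies)."""
    z = points[:, 0] + 1j * points[:, 1]
    c = np.fft.fft(z) / len(z)
    return np.concatenate([c[:k + 1], c[-k:]])

def decode_contour(coeffs, n_points):
    """Reconstruct a smooth closed contour from the compact
    coefficients by placing them back in a zero spectrum and
    applying the inverse DFT."""
    k = (len(coeffs) - 1) // 2
    spec = np.zeros(n_points, dtype=complex)
    spec[:k + 1] = coeffs[:k + 1]
    spec[-k:] = coeffs[-k:]
    z = np.fft.ifft(spec) * n_points
    return np.stack([z.real, z.imag], axis=1)
```

For shapes dominated by low frequencies (a circle is the extreme case), a handful of coefficients reproduces the boundary almost exactly, which is what makes the representation cheap to store and transmit from orbit.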
Titre : Deep learning based vehicle detection in aerial imagery Type de document : Monographie Auteurs : Lars Wilko Sommer, Éditeur scientifique Editeur : Karlsruhe [Allemagne] : KIT Scientific Publishing Année de publication : 2022 Importance : 276 p. Format : 15 x 21 cm ISBN/ISSN/EAN : 978-3-7315-1113-7 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] ancre
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] filtre
[Termes IGN] image aérienne
[Termes IGN] véhicule
Résumé : (éditeur) This book proposes a novel deep learning-based detection method, focusing on vehicle detection in aerial imagery recorded in top view. The base detection framework is extended by two novel components to improve the detection accuracy by enhancing the contextual and semantic content of the employed feature representation. To reduce the inference time, a lightweight CNN architecture is proposed as the base architecture and a novel module that restricts the search area is introduced. Note de contenu : 1- Introduction
2- Related work
3- Concept
4- Experimental setup
5- Base framework
6- Integration of contextual knowledge
7- Runtime optimization
8- Evaluation
9- Conclusions and outlook
Numéro de notice : 28685 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Recueil / ouvrage collectif DOI : 10.5445/KSP/1000135415 En ligne : https://doi.org/10.5445/KSP/1000135415 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100015
Autres documents dans cette catégorie :
- Development of object detectors for satellite images by deep learning / Alissa Kouraeva (2022)
- Effective triplet mining improves training of multi-scale pooled CNN for image retrieval / Federico Vaccaro in Machine Vision and Applications, vol 33 n° 1 (January 2022)
- Histograms of oriented mosaic gradients for snapshot spectral image description / Lulu Chen in ISPRS Journal of photogrammetry and remote sensing, vol 183 (January 2022)
- Multi-view urban scene classification with a complementary-information learning model / Wanxuan Geng in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 1 (January 2022)
- Efficient occluded road extraction from high-resolution remote sensing imagery / Dejun Feng in Remote sensing, vol 13 n° 24 (December-2 2021)
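The anchor-based detection framework summarised in the Sommer monograph above matches candidate boxes (anchors) to ground-truth vehicles by intersection-over-union before training the classifier. A generic sketch of that matching step, not the book's implementation; the 0.5 threshold is a common but assumed value.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    return inter / (area(a) + area(b) - inter)

def match_anchors(anchors, gt_boxes, pos_thr=0.5):
    """Label each anchor 1 (positive) if it overlaps some ground-truth
    box with IoU >= pos_thr, else 0. Restricting which anchors are
    evaluated at all is the role of the search-area module the book
    introduces for runtime optimisation."""
    labels = np.zeros(len(anchors), dtype=int)
    for i, a in enumerate(anchors):
        if any(iou(a, g) >= pos_thr for g in gt_boxes):
            labels[i] = 1
    return labels
```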