Descriptor
Documents available in this category (16)
Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images / Guillemette Fonteix in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
[article]
Title: Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images
Document type: Article/Communication
Authors: Guillemette Fonteix, author; M. Swaine, author; M. Leras, author; et al.
Year of publication: 2021
Pages: pp 101 - 107
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] carte de confiance
[Termes IGN] chaîne de traitement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] fusion d'images
[Termes IGN] image optique
[Termes IGN] image Sentinel-MSI
[Termes IGN] segmentation sémantique
[Termes IGN] série temporelle
Abstract: (author) The understanding of the Earth through global land monitoring from satellite images paves the way towards many applications, including flight simulations, urban management and telecommunications. The twin satellites of the Sentinel-2 mission developed by the European Space Agency (ESA) provide 13 spectral bands with a high observation frequency worldwide. In this paper, we present a novel multi-temporal approach for land-cover classification of Sentinel-2 images, whereby a time series of images is classified using fully convolutional U-Net models and the resulting predictions are then combined by a purpose-built probabilistic algorithm. The proposed pipeline further includes an automatic quality-control and correction step, whereby an external source can be introduced in order to validate and correct the deep learning classification. The final step adjusts the combined predictions to the cloud-free mosaic built from Sentinel-2 L2A images so that the classification more closely matches the reference mosaic image.
Record number: A2021-492
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.5194/isprs-annals-V-3-2021-101-2021
Online publication date: 17/06/2021
Online: http://dx.doi.org/10.5194/isprs-annals-V-3-2021-101-2021
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97957
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2021 (July 2021). - pp 101 - 107 [article]
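The abstract above describes per-date classification of a Sentinel-2 time series with U-Net models, followed by a probabilistic combination of the predictions. The paper's own algorithm is not reproduced in this notice; the sketch below is only a minimal illustration of how per-date class-probability maps could be fused into one label map plus a confidence map, assuming NumPy arrays and hypothetical names (`prob_stack`, `cloud_masks`).

```python
import numpy as np

def fuse_temporal_predictions(prob_stack, cloud_masks=None):
    """Combine per-date class-probability maps into one label map.

    prob_stack  : float array (T, C, H, W) of per-date softmax outputs
                  (one network inference per acquisition date).
    cloud_masks : optional bool array (T, H, W); True where the date is
                  usable (cloud-free). Masked pixels contribute nothing.
    Returns (labels, confidence): the argmax label map (H, W) and the
    fused probability of that label, usable as a confidence map.
    """
    prob_stack = np.asarray(prob_stack, dtype=np.float64)
    T, C, H, W = prob_stack.shape
    if cloud_masks is None:
        weights = np.ones((T, 1, H, W))
    else:
        weights = np.asarray(cloud_masks, dtype=np.float64)[:, None, :, :]

    # Weighted average of the per-date class probabilities.
    fused = (prob_stack * weights).sum(axis=0) / np.clip(weights.sum(axis=0), 1e-9, None)

    labels = fused.argmax(axis=0)                                   # (H, W)
    confidence = np.take_along_axis(fused, labels[None], axis=0)[0]  # (H, W)
    return labels, confidence

# Illustrative usage with random data standing in for network outputs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(5), size=(4, 64, 64)).transpose(0, 3, 1, 2)
    labels, conf = fuse_temporal_predictions(probs)
    print(labels.shape, conf.mean())
```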
Title: Deep-learning for 3D reconstruction
Document type: Thèse/HDR
Authors: Fabio Tosi, author
Publisher: Bologne [Italie] : Université de Bologne
Year of publication: 2021
Format: 21 x 30 cm
General note: bibliography; PhD Thesis in Computer Science and Engineering
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] apprentissage profond
[Termes IGN] carte de confiance
[Termes IGN] compréhension de l'image
[Termes IGN] profondeur
[Termes IGN] reconstruction 3D
[Termes IGN] réseau antagoniste génératif
[Termes IGN] vision stéréoscopique
Abstract: (author) Depth perception is paramount for many computer vision applications such as autonomous driving and augmented reality. Although active sensors (e.g., LiDAR, Time-of-Flight, structured light) are quite widespread, they have severe shortcomings that could potentially be addressed by image-based sensors. Concerning this latter category, deep learning has enabled ground-breaking results in tackling well-known issues affecting the accuracy of systems that infer depth from a single image or from multiple images in specific circumstances (e.g., low-textured regions, depth discontinuities, etc.), but it has also introduced additional concerns about the domain shift occurring between training and target environments and the need for proper ground-truth depth labels to be used as training signals. Moreover, despite the copious literature on confidence estimation for depth from a stereo setup, inferring depth uncertainty with deep networks is still a major challenge and an almost unexplored research area, especially in a monocular setup. Finally, computational complexity is another crucial aspect for most practical applications: it is desirable not only to infer reliable depth data but to do so in real time and with low power requirements, even on standard embedded devices or smartphones. Therefore, focusing on stereo and monocular setups, this thesis tackles the major issues affecting methodologies that infer depth from images and aims at developing accurate and efficient frameworks for 3D reconstruction in challenging environments.
Contents: Introduction
1- Related work
2- Datasets
3- Evaluation protocols
4- Confidence measures in a machine learning world
5- Efficient confidence measures for embedded stereo
6- Even more confident predictions with deep machine-learning
7- Beyond local reasoning for stereo confidence estimation with deep learning
8- Good cues to learn from scratch a confidence measure for passive depth sensors
9- Confidence estimation for ToF and stereo sensors and its application to depth data fusion
10- Learning confidence measures in the wild
11- Self-adapting confidence estimation for stereo
12- Leveraging confident points for accurate depth refinement on embedded systems
13- SMD-Nets: Stereo Mixture Density Networks
14- Real-time self-adaptive deep stereo
15- Guided stereo matching
16- Reversing the cycle: self-supervised deep stereo through enhanced monocular distillation
17- Learning end-to-end scene flow by distilling single tasks knowledge
18- Learning monocular depth estimation with unsupervised trinocular assumptions
19- Geometry meets semantics for semi-supervised monocular depth estimation
20- Generative Adversarial Networks for unsupervised monocular depth prediction
21- Learning monocular depth estimation infusing traditional stereo knowledge
22- Towards real-time unsupervised monocular depth estimation on CPU
23- Enabling energy-efficient unsupervised monocular depth estimation on ARMv7-based platforms
24- Distilled semantics for comprehensive scene understanding from videos
25- On the uncertainty of self-supervised monocular depth estimation
Conclusion
Record number: 28596
Author affiliation: non IGN
Theme: IMAGERY
Nature: Foreign thesis
Thesis note: Doctoral thesis: Computer Science and Engineering: Bologne: 2021
DOI: 10.48676/unibo/amsdottorato/9816
Online: http://amsdottorato.unibo.it/9816/
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99325
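Much of the thesis summarized above concerns confidence estimation for depth from stereo and monocular setups. As a point of reference only (this is not taken from the thesis), the sketch below shows the classic left-right consistency check, a common baseline confidence measure for stereo disparity maps; all names are illustrative assumptions.

```python
import numpy as np

def left_right_consistency_confidence(disp_left, disp_right, tau=1.0):
    """Binary confidence map for a left disparity estimate.

    A pixel of the left disparity map is marked confident when the
    disparity found at the corresponding location in the right map
    agrees within `tau` pixels (left-right consistency check).
    """
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Column where each left pixel lands in the right image.
    xr = np.clip((xs - np.round(disp_left)).astype(int), 0, w - 1)
    disp_right_warped = disp_right[np.arange(h)[:, None], xr]
    return (np.abs(disp_left - disp_right_warped) <= tau).astype(np.float32)
```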
Region level SAR image classification using deep features and spatial constraints / Anjun Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 163 (May 2020)
[article]
Title: Region level SAR image classification using deep features and spatial constraints
Document type: Article/Communication
Authors: Anjun Zhang, author; Xuezhi Yang, author; Shuai Fang, author; et al.
Year of publication: 2020
Pages: pp 36-48
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] carte de confiance
[Termes IGN] champ aléatoire de Markov
[Termes IGN] chatoiement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image radar moirée
[Termes IGN] lissage de données
[Termes IGN] modélisation spatiale
[Termes IGN] précision de la classification
[Termes IGN] superpixel
Abstract: (author) Region-level SAR image classification algorithms that combine CNNs (Convolutional Neural Networks) with super-pixels have been proposed to enhance classification accuracy compared with pixel-level algorithms. However, the spatial constraints between super-pixel regions are not considered, which may limit the performance of these algorithms. To address this problem, this paper proposes an RCC-MRF (RCC: Region Category Confidence-degree) and CNN based region-level SAR image classification algorithm that exploits both the deep features extracted by the CNN and the spatial constraints between super-pixel regions. The initial labels of the super-pixel regions are obtained using a voting strategy based on the labels predicted by the CNN. The unary energy function of the RCC-MRF is designed to find the category that a region most probably belongs to, using the RCC term constructed from the probability distributions over all categories predicted by the CNN for each pixel. The binary energy function of the RCC-MRF exploits the spatial constraints between adjacent super-pixel regions. In the proposed algorithm, pixel-level misclassifications are reduced by smoothing within regions, and region-level misclassifications are rectified by minimizing the RCC-MRF energy function. Experiments have been carried out on simulated and real SAR images to evaluate the performance of the proposed algorithm. The experimental results demonstrate that it notably outperforms other CNN-based region-level SAR image classification algorithms.
Record number: A2020-136
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.03.001
Online publication date: 07/03/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.03.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94752
in ISPRS Journal of photogrammetry and remote sensing > vol 163 (May 2020). - pp 36-48 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020051 | RAB | Journal | Documentation centre | Reserve L003 | Available
081-2020053 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2020052 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
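The Zhang et al. abstract above describes obtaining initial super-pixel labels by voting over CNN predictions and building a Region Category Confidence term from per-pixel class probabilities. A minimal sketch of that aggregation step is given below, assuming NumPy inputs and hypothetical names; the MRF energy minimization itself is not reproduced.

```python
import numpy as np

def region_category_confidence(pixel_probs, superpixels):
    """Aggregate CNN pixel probabilities into per-region confidences.

    pixel_probs : (H, W, C) per-pixel class probabilities from the CNN.
    superpixels : (H, W) integer label map of super-pixel regions.
    Returns {region_id: (initial_label, confidence_vector)} where
    initial_label follows the voting idea described in the abstract and
    confidence_vector is the mean class distribution over the region
    (the kind of term that would feed the unary energy of an MRF).
    """
    H, W, C = pixel_probs.shape
    out = {}
    for rid in np.unique(superpixels):
        mask = superpixels == rid
        # Majority vote over the per-pixel argmax labels of the region.
        votes = np.bincount(pixel_probs[mask].argmax(axis=1), minlength=C)
        # Mean class distribution over the region's pixels.
        conf = pixel_probs[mask].mean(axis=0)
        out[rid] = (int(votes.argmax()), conf)
    return out
```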
A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery
Document type: Article/Communication
Authors: Lucas Prado Osco, author; Mauro Dos Santos de Arruda, author; José Marcato Junior, author; et al.
Year of publication: 2020
Pages: pp 97 - 106
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] Brésil
[Termes IGN] carte de confiance
[Termes IGN] Citrus (genre)
[Termes IGN] détection d'arbres
[Termes IGN] géolocalisation
[Termes IGN] image captée par drone
[Termes IGN] image multibande
[Termes IGN] inventaire de la végétation
[Termes IGN] réseau neuronal convolutif
[Termes IGN] verger
Abstract: (author) Visual inspection has been a common practice to determine the number of plants in orchards, which is a labor-intensive and time-consuming task. Deep learning algorithms have demonstrated great potential for counting plants in unmanned aerial vehicle (UAV)-borne sensor imagery. This paper presents a convolutional neural network (CNN) approach to address the challenge of estimating the number of citrus trees in highly dense orchards from UAV multispectral images. The method estimates a dense map with the confidence that a plant occurs in each pixel. A flight was conducted over an orchard of Valencia-orange trees planted in a linear fashion, using a multispectral camera with four bands in green, red, red-edge and near-infrared. The approach was assessed considering the individual bands and their combinations. A total of 37,353 trees, recorded as point features, were used to evaluate the method. A variation of σ (0.5, 1.0 and 1.5) was used to generate different ground-truth confidence maps. Different stages (T) were also used to refine the predicted confidence map. To evaluate the robustness of our method, we compared it with two state-of-the-art object-detection CNN methods (Faster R-CNN and RetinaNet). The results show better performance with the combination of green, red and near-infrared bands, achieving a Mean Absolute Error (MAE), Mean Square Error (MSE), R2 and Normalized Root-Mean-Squared Error (NRMSE) of 2.28, 9.82, 0.96 and 0.05, respectively. This band combination, when adopting σ = 1 and stage T = 8, resulted in an R2, MAE, Precision, Recall and F1 of 0.97, 2.05, 0.95, 0.96 and 0.95, respectively. Our method significantly outperforms object-detection methods for counting and geolocation. We conclude that our CNN approach for estimating the number and geolocation of citrus trees in high-density orchards is satisfactory and is an effective strategy to replace the traditional visual inspection method for determining the number of plants in orchards.
Record number: A2020-045
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.010
Online publication date: 18/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94525
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020). - pp 97 - 106 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020021 | RAB | Journal | Documentation centre | Reserve L003 | Available
081-2020023 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2020022 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
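The Osco et al. abstract above mentions ground-truth confidence maps generated with a width σ around each annotated tree point and counts derived from the predicted map. The sketch below is a rough assumption about how such point annotations are typically densified and counted (not the paper's exact procedure or its refinement stages T); it uses SciPy filters and hypothetical function names.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def confidence_map_from_points(points, shape, sigma=1.0):
    """Dense ground-truth confidence map from tree point annotations.

    points : iterable of (row, col) tree locations.
    shape  : (H, W) of the image.
    A delta image is smoothed with a Gaussian of width `sigma` and
    rescaled to [0, 1], mirroring the sigma variation (0.5/1.0/1.5)
    mentioned in the abstract.
    """
    target = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        target[int(r), int(c)] = 1.0
    target = gaussian_filter(target, sigma=sigma)
    return target / target.max() if target.max() > 0 else target

def count_trees(confidence, threshold=0.3, window=5):
    """Count local maxima of a predicted confidence map above a threshold."""
    peaks = (confidence == maximum_filter(confidence, size=window)) & (confidence > threshold)
    return int(peaks.sum())
```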
Multiview marker-free registration of forest terrestrial laser scanner data with embedded confidence metrics / David Kelbe in IEEE Transactions on geoscience and remote sensing, vol 55 n° 2 (February 2017)
[article]
Title: Multiview marker-free registration of forest terrestrial laser scanner data with embedded confidence metrics
Document type: Article/Communication
Authors: David Kelbe, author; Jan Van Aardt, author; Paul Romanczyk, author; et al.
Year of publication: 2017
Pages: pp 729 - 741
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] acquisition de données
[Termes IGN] carte de confiance
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] forêt
[Termes IGN] mesure géométrique
[Termes IGN] New York (Etats-Unis ; état)
[Termes IGN] numérisation
[Termes IGN] semis de points
[Termes IGN] structure d'un peuplement forestier
[Termes IGN] superposition
Abstract: (author) Terrestrial laser scanning has demonstrated increasing potential for rapid, comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested on eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. This paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.
Record number: A2017-142
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2016.2614251
Online: https://doi.org/10.1109/TGRS.2016.2614251
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=84630
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 2 (February 2017). - pp 729 - 741 [article]

Permalinks:
- Pré-segmentation pour la classification faiblement supervisée de scènes urbaines à partir de nuages de points 3D LIDAR / Stéphane Guinard (2017)
- Weakly supervised segmentation-aided classification of urban scenes from 3D LIDAR point clouds / Stéphane Guinard (2017)
- Systematic effects in laser scanning and visualization by confidence regions / Karl Rudolf Koch in Journal of applied geodesy, vol 10 n° 4 (December 2016)
- A Random Forest class memberships based wrapper band selection criterion: application to hyperspectral / Arnaud Le Bris (2015)
- Assessing reference dataset representativeness through confidence metrics based on information density / Giorgos Mountrakis in ISPRS Journal of photogrammetry and remote sensing, vol 78 (April 2013)
- Semisupervised classification of remote sensing images with active queries / Jordi Munoz-Mari in IEEE Transactions on geoscience and remote sensing, vol 50 n° 10 Tome 1 (October 2012)
- Classifications hiérarchiques orientées objet / Olivier de Joinville (2009)
- Evaluation de la qualité d'une cartographie urbaine à l'aide d'images aériennes à haute résolution / Olivier de Joinville (2001)
- Production de modèles numériques de terrain par interférométrie et radargrammétrie / Stéphane Dupont in Bulletin [Société Française de Photogrammétrie et Télédétection], n° 148 (Octobre 1997)
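Returning to the Kelbe et al. registration record above: its abstract describes a graph-theoretical framework in which pairwise pose estimates are weighted by their estimated error and scans are connected to a reference node. The sketch below illustrates one simple interpretation of that idea, chaining pairwise rigid transforms along minimum-error paths to a reference scan; the data layout and function name are assumptions, not the authors' implementation.

```python
import heapq
import numpy as np

def global_poses_from_pairwise(edges, reference=0):
    """Chain pairwise scan-to-scan transforms into scan-to-reference poses.

    edges : dict {(i, j): (T_ij, err)} where T_ij is a 4x4 rigid transform
            mapping coordinates of scan j into scan i, and err is the
            estimated error of that pairwise registration.
    Each scan is connected to the reference along the path of minimum
    accumulated error (Dijkstra over the pose graph), so poorly
    constrained pairwise estimates are bypassed when better chains exist.
    Returns {scan_id: 4x4 transform into the reference frame}.
    """
    graph = {}
    for (i, j), (T, err) in edges.items():
        T = np.asarray(T, dtype=float)
        graph.setdefault(i, []).append((j, T, err))                 # maps j into i
        graph.setdefault(j, []).append((i, np.linalg.inv(T), err))  # maps i into j

    poses = {reference: np.eye(4)}
    best = {reference: 0.0}
    heap = [(0.0, reference)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, T_node_from_nbr, err in graph.get(node, []):
            new_cost = cost + err
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                # nbr coordinates -> node frame -> reference frame
                poses[nbr] = poses[node] @ T_node_from_nbr
                heapq.heappush(heap, (new_cost, nbr))
    return poses
```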