Publisher details
Computer Vision Foundation (CVF)
Documents available from this publisher (7)
Title: DeepSim-Nets: Deep Similarity Networks for stereo image matching
Document type: Article/Communication
Authors: Mohamed Ali Chebbi, Author; Ewelina Rupnik, Author; Marc Pierrot-Deseilligny, Author; Paul Lopes, Author
Publisher: Computer Vision Foundation (CVF)
Year of publication: 2023
Conference: CVPR 2023, IEEE Conference on Computer Vision and Pattern Recognition, 18/06/2023 to 22/06/2023, Vancouver, British Columbia, Canada (OA proceedings)
Pages: pp. 2096-2104
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] processing pipeline
[IGN terms] image geometry
[IGN terms] epipolar geometry
[IGN terms] deep neural network
Decimal index: 35.20 Image processing
Abstract: (authors) We present three multi-scale similarity learning architectures, or DeepSim networks. These models learn pixel-level matching with a contrastive loss and are agnostic to the geometry of the considered scene. We establish a middle ground between hybrid and end-to-end approaches by learning to densely allocate all corresponding pixels of an epipolar pair at once. Our features are learnt on large image tiles to be expressive and to capture the scene's wider context. We also demonstrate that curated sample mining can enhance the overall robustness of the predicted similarities and improve performance on radiometrically homogeneous areas. We run experiments on aerial and satellite datasets. Our DeepSim-Nets outperform the baseline hybrid approaches and generalize better to unseen scene geometries than end-to-end methods. Our flexible architecture can be readily adopted in standard multi-resolution image matching pipelines. The code is available at https://github.com/DaliCHEBBI/DeepSimNets.
Record number: C2023-007
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY/COMPUTING
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: none
Online: https://openaccess.thecvf.com/content/CVPR2023W/EarthVision/html/Chebbi_DeepSim- [...]
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103281
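The abstract above describes learning pixel-level matching with a contrastive loss over feature embeddings of an epipolar pair. As a rough illustration of that idea only (this is not the authors' DeepSim-Nets code; the function names and the margin value are assumptions), a hinge-style contrastive loss that pulls matching pixel features together and pushes non-matching ones apart can be sketched as:

```python
import numpy as np

def cosine_similarity(f1, f2):
    """Cosine similarity between two batches of feature vectors."""
    f1 = f1 / np.linalg.norm(f1, axis=-1, keepdims=True)
    f2 = f2 / np.linalg.norm(f2, axis=-1, keepdims=True)
    return np.sum(f1 * f2, axis=-1)

def contrastive_hinge_loss(anchor, positive, negative, margin=0.3):
    """Hinge contrastive loss over pixel feature embeddings:
    the similarity of a true match (anchor/positive) should exceed
    that of a non-match (anchor/negative) by at least `margin`."""
    s_pos = cosine_similarity(anchor, positive)
    s_neg = cosine_similarity(anchor, negative)
    return np.mean(np.maximum(0.0, margin + s_neg - s_pos))
```

In the paper's setting, `anchor` and `positive` would be deep features of corresponding pixels along an epipolar line, while `negative` comes from curated sample mining; here they are just arrays.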
Title: Pointless global bundle adjustment with relative motions Hessians
Document type: Article/Communication
Authors: Ewelina Rupnik, Author; Marc Pierrot-Deseilligny, Author
Publisher: Computer Vision Foundation (CVF)
Year of publication: 2023
Conference: CVPR 2023, IEEE Conference on Computer Vision and Pattern Recognition, 18/06/2023 to 22/06/2023, Vancouver, British Columbia, Canada (OA proceedings)
Pages: pp. 6517-6525
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN terms] bundle adjustment
[IGN terms] pose estimation
[IGN terms] matrix
Decimal index: 33.30 Digital photogrammetry
Abstract: (authors) Bundle adjustment (BA) is the standard way to optimise camera poses and to produce sparse representations of a scene. However, as the number of camera poses and features grows, refinement through bundle adjustment becomes inefficient. Inspired by global motion averaging methods, we propose a new bundle adjustment objective which does not rely on image features' reprojection errors yet maintains precision on par with classical BA. Our method averages over relative motions while implicitly incorporating the contribution of the structure in the adjustment. To that end, we weight the objective function by local Hessian matrices, a by-product of local bundle adjustments performed on relative motions (e.g., pairs or triplets) during the pose initialisation step. Such Hessians are extremely rich as they encapsulate both the features' random errors and the geometric configuration between the cameras. These pieces of information propagated to the global frame help to guide the final optimisation in a more rigorous way. We argue that this approach is an upgraded version of the motion averaging approach and demonstrate its effectiveness on both photogrammetric datasets and computer vision benchmarks.
Record number: C2023-008
Author affiliation: UGE-LASTIG (2020- )
Other associated URL: to OA paper
Theme: IMAGERY/COMPUTING/MATHEMATICS
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: none
Online: https://openaccess.thecvf.com/content/CVPR2023W/PCV/papers/Rupnik_Pointless_Glob [...]
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103282
PSMNet-FusionX3: LiDAR-guided deep learning stereo dense matching on aerial images / Teng Wu (2023)
Title: PSMNet-FusionX3: LiDAR-guided deep learning stereo dense matching on aerial images
Document type: Article/Communication
Authors: Teng Wu, Author; Bruno Vallet, Author; Marc Pierrot-Deseilligny, Author
Publisher: Computer Vision Foundation (CVF)
Year of publication: 2023
Conference: CVPR 2023, IEEE Conference on Computer Vision and Pattern Recognition workshops, 18/06/2023 to 22/06/2023, Vancouver, British Columbia, Canada (OA proceedings)
Pages: pp. 6526-6535
General note: bibliography; see also https://openaccess.thecvf.com/content/CVPR2023W/PCV/supplemental/Wu_PSMNet-FusionX3_LiDAR-Guided_Deep_CVPRW_2023_supplemental.pdf
Language: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] dense matching
[IGN terms] deep learning
[IGN terms] processing pipeline
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] vertical aerial image
[IGN terms] 3D scene
[IGN terms] Triangulated Irregular Network
Abstract: (authors) Dense image matching (DIM) and LiDAR are two complementary techniques for recovering the 3D geometry of real scenes. While DIM provides dense surfaces, they are often noisy and contaminated with outliers. Conversely, LiDAR is more accurate and robust, but less dense and more expensive compared to DIM. In this work, we investigate learning-based methods to refine surfaces produced by photogrammetry with sparse LiDAR point clouds. Unlike the current state-of-the-art approaches in the computer vision community, our focus is on aerial acquisitions typical in photogrammetry. We propose a densification pipeline, PSMNet-FusionX3, that adopts a PSMNet backbone with triangulated irregular network interpolation based expansion, feature enhancement in the cost volume, and conditional cost volume normalization. Our method works better on low density and is less sensitive to distribution, demonstrating its effectiveness across a range of LiDAR point cloud densities and distributions, including analyses of dataset shifts. Furthermore, we have made both our aerial (image and disparity) dataset and code available for public use. Further information can be found at https://github.com/whuwuteng/PSMNet-FusionX3.
Record number: C2023-006
Author affiliation: UGE-LASTIG (2020- )
Theme: IMAGERY/COMPUTING
Nature: Communication
DOI: none
Online: https://openaccess.thecvf.com/content/CVPR2023W/PCV/papers/Wu_PSMNet-FusionX3_Li [...]
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103277
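The pipeline above expands sparse LiDAR points via triangulated irregular network (TIN) interpolation before guiding the stereo cost volume. A minimal sketch of one such interpolation step, assuming linear (barycentric) interpolation of disparities over a single TIN facet (this is an illustration, not the authors' implementation; the function name is hypothetical):

```python
import numpy as np

def barycentric_interpolate(tri_xy, tri_vals, query_xy):
    """Interpolate a value inside one TIN facet (triangle) from its three
    vertices, as used to expand sparse LiDAR disparities toward a dense map.
    tri_xy:   (3, 2) vertex pixel coordinates
    tri_vals: (3,)   disparity values at the vertices
    query_xy: (2,)   pixel inside the triangle"""
    a, b, c = tri_xy
    # Express the query point in the triangle's edge basis (b-a, c-a),
    # which yields its last two barycentric coordinates.
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    w_bc = np.linalg.solve(T, np.asarray(query_xy, float) - a)
    w = np.array([1.0 - w_bc.sum(), w_bc[0], w_bc[1]])  # weights sum to 1
    return float(w @ tri_vals)
```

A full TIN expansion would triangulate all projected LiDAR points (e.g. with a Delaunay triangulation) and apply this per-facet interpolation to every pixel.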
Title: Multi-layer modeling of dense vegetation from aerial LiDAR scans
Document type: Article/Communication
Authors: Ekaterina Kalinicheva, Author; Loïc Landrieu, Author; Clément Mallet, Author; Nesrine Chehata, Author
Publisher: Computer Vision Foundation (CVF)
Year of publication: 2022
Projects: 1-No project
Conference: EarthVision 2022, Large Scale Computer Vision for Remote Sensing Imagery, workshop joint to CVPR 2022, 19/06/2022 to 24/06/2022, New Orleans, Louisiana, United States (OA proceedings)
Pages: pp. 1341-1350
Format: 21 x 30 cm
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] canopy
[IGN terms] land cover map
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] vegetation layer
[IGN terms] forestry
[IGN terms] mesh
[IGN terms] forest plot
[IGN terms] object reconstruction
[IGN terms] image segmentation
[IGN terms] point cloud
Abstract: (authors) The analysis of the multi-layer structure of wild forests is an important challenge of automated large-scale forestry. While modern aerial LiDARs offer geometric information across all vegetation layers, most datasets and methods focus only on the segmentation and reconstruction of the top of canopy. We release WildForest3D, which consists of 29 study plots and over 2000 individual trees across 47,000 m² with dense 3D annotation, along with occupancy and height maps for 3 vegetation layers: ground vegetation, understory, and overstory. We propose a 3D deep network architecture predicting for the first time both 3D pointwise labels and high-resolution layer occupancy rasters simultaneously. This allows us to produce a precise estimation of the thickness of each vegetation layer as well as the corresponding watertight meshes, therefore meeting most forestry purposes. Both the dataset and the model are released in open access: https://github.com/ekalinicheva/multi_layer_vegetation.
Record number: C2022-007
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to CVF
Theme: FOREST/IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/CVPRW56347.2022.00140
Online publication date: 25/04/2022
Online: https://arxiv.org/abs/2204.11620
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100509
Satellite image time series classification with pixel-set encoders and temporal self-attention / Vivien Sainte Fare Garnot (2020)
Title: Satellite image time series classification with pixel-set encoders and temporal self-attention
Document type: Article/Communication
Authors: Vivien Sainte Fare Garnot, Author; Loïc Landrieu, Author; Sébastien Giordano, Author; Nesrine Chehata, Author
Publisher: Computer Vision Foundation (CVF)
Year of publication: 2020
Projects: 1-No project
Conference: CVPR 2020, IEEE Conference on Computer Vision and Pattern Recognition, 14/06/2020 to 19/06/2020, online (Open Access proceedings)
Pages: pp. 12325-12334
Format: 21 x 30 cm
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] automatic classification
[IGN terms] object-oriented classification
[IGN terms] random forest classification
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] geocoding
[IGN terms] multiband image
[IGN terms] satellite image
[IGN terms] agricultural parcel
[IGN terms] Common Agricultural Policy
[IGN terms] time series
[IGN terms] land use
Abstract: (authors) Satellite image time series, bolstered by their growing availability, are at the forefront of an extensive effort towards automated Earth monitoring by international institutions. In particular, large-scale control of agricultural parcels is an issue of major political and economic importance. In this regard, hybrid convolutional-recurrent neural architectures have shown promising results for the automated classification of satellite image time series. We propose an alternative approach in which the convolutional layers are advantageously replaced with encoders operating on unordered sets of pixels to exploit the typically coarse resolution of publicly available satellite images. We also propose to extract temporal features using a bespoke neural architecture based on self-attention instead of recurrent networks. We demonstrate experimentally that our method not only outperforms previous state-of-the-art approaches in terms of precision, but also significantly decreases processing time and memory requirements. Lastly, we release a large open-access annotated dataset as a benchmark for future work on satellite image time series.
Record number: C2020-016
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to ArXiv / to CVF
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/CVPR42600.2020.01234
Online publication date: 05/08/2020
Online: https://doi.org/10.1109/CVPR42600.2020.01234
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94225
Digital documents
in open access
Satellite image time series classification - preprint PDF (Adobe Acrobat PDF)
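The last abstract replaces convolutional layers with encoders over unordered pixel sets and recurrent layers with temporal self-attention. A minimal NumPy sketch of both ideas, assuming mean/std pooling for the set encoding and plain dot-product attention (the actual pixel-set encoder and temporal attention encoder are learned networks; all names here are hypothetical):

```python
import numpy as np

def pixel_set_encode(pixels):
    """Order-invariant embedding of an unordered pixel set for one parcel
    and one date: pool per-band statistics over the set dimension.
    pixels: (n_pixels, n_bands) reflectances."""
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def temporal_self_attention(embeddings):
    """Single-head dot-product self-attention over the acquisition dates
    of one parcel, returning one attended feature per date.
    embeddings: (n_dates, d) per-date parcel embeddings."""
    d = embeddings.shape[1]
    scores = embeddings @ embeddings.T / np.sqrt(d)  # (n_dates, n_dates)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over dates
    return weights @ embeddings
```

Because the set encoding pools over pixels, permuting the pixels of a parcel leaves its embedding unchanged, which is exactly why no spatial ordering (and hence no convolution) is needed.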