Descriptor
Documents available in this category (1685)
Multiple convolutional features in Siamese networks for object tracking / Zhenxi Li in Machine Vision and Applications, vol 32 n° 3 (May 2021)
[article]
Title: Multiple convolutional features in Siamese networks for object tracking
Document type: Article/Communication
Authors: Zhenxi Li; Guillaume-Alexandre Bilodeau; Wassim Bouachir
Year of publication: 2021
Pagination: no. 59
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Image processing
[Termes IGN] hierarchical approach
[Termes IGN] classification by convolutional neural network
[Termes IGN] feature extraction
[Termes IGN] target tracking
[Termes IGN] object recognition
[Termes IGN] Siamese neural network
Abstract: (author) Siamese trackers demonstrated high performance in object tracking due to their balance between accuracy and speed. Unlike classification-based CNNs, deep similarity networks are specifically designed to address the image similarity problem and thus are inherently more appropriate for the tracking task. However, Siamese trackers mainly use the last convolutional layers for similarity analysis and target search, which restricts their performance. In this paper, we argue that using a single convolutional layer as feature representation is not an optimal choice in a deep similarity framework. We present a Multiple Features-Siamese Tracker (MFST), a novel tracking algorithm exploiting several hierarchical feature maps for robust tracking. Since convolutional layers provide several abstraction levels in characterizing an object, fusing hierarchical features yields a richer and more efficient representation of the target. Moreover, we handle the target appearance variations by calibrating the deep features extracted from two different CNN models. Based on this advanced feature representation, our method achieves high tracking accuracy, while outperforming the standard Siamese tracker on object tracking benchmarks. The source code and trained models are available at https://github.com/zhenxili96/MFST.
Record number: A2021-470
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00138-021-01185-7
Online publication date: 11/03/2021
Online: https://doi.org/10.1007/s00138-021-01185-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97903
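The core idea of the abstract, correlating template features with search features at several convolutional depths and fusing the resulting response maps, can be sketched with plain numpy. This is an illustrative toy under assumed inputs, not the authors' MFST code; the feature maps, layer count, and fusion weights are all made up.

```python
import numpy as np

def response_map(search_feat, template_feat):
    """Dense cross-correlation of a template feature map over a search
    feature map (valid positions only), as in Siamese trackers."""
    sh, sw = search_feat.shape
    th, tw = template_feat.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search_feat[i:i+th, j:j+tw] * template_feat)
    return out

def fused_locate(search_feats, template_feats, weights):
    """Fuse per-layer response maps by a weighted sum and return the
    peak position, i.e. the predicted target location."""
    fused = sum(w * response_map(s, t)
                for w, s, t in zip(weights, search_feats, template_feats))
    return np.unravel_index(np.argmax(fused), fused.shape)
```

With two identical toy "layers", the fused peak falls on the position where the template overlaps the target exactly.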
in Machine Vision and Applications > vol 32 n° 3 (May 2021) . - no. 59 [article]

A stacked dense denoising–segmentation network for undersampled tomograms and knowledge transfer using synthetic tomograms / Dimitrios Bellos in Machine Vision and Applications, vol 32 n° 3 (May 2021)
[article]
Title: A stacked dense denoising–segmentation network for undersampled tomograms and knowledge transfer using synthetic tomograms
Document type: Article/Communication
Authors: Dimitrios Bellos; Mark Basham; Tony Pridmore; et al.
Year of publication: 2021
Pagination: no. 75
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Image processing
[Termes IGN] knowledge acquisition
[Termes IGN] deep learning
[Termes IGN] sampling
[Termes IGN] noise filtering
[Termes IGN] signal-to-noise ratio
[Termes IGN] X-ray
[Termes IGN] image reconstruction
[Termes IGN] semantic segmentation
[Termes IGN] time series
[Termes IGN] tomography
Abstract: (author) Over recent years, many approaches have been proposed for the denoising or semantic segmentation of X-ray computed tomography (CT) scans. In most cases, high-quality CT reconstructions are used; however, such reconstructions are not always available. When the X-ray exposure time has to be limited, undersampled tomograms (in terms of their component projections) are attained. This low number of projections offers low-quality reconstructions that are difficult to segment. Here, we consider CT time-series (i.e. 4D data), where the limited time for capturing fast-occurring temporal events results in the time-series tomograms being necessarily undersampled. Fortunately, in these collections, it is common practice to obtain representative highly sampled tomograms before or after the time-critical portion of the experiment. In this paper, we propose an end-to-end network that can learn to denoise and segment the time-series' undersampled CTs, by training with the earlier highly sampled representative CTs. Our single network can offer two desired outputs while only training once, with the denoised output improving the accuracy of the final segmentation. Our method is able to outperform state-of-the-art methods in the task of semantic segmentation and offer comparable results with regard to denoising. Additionally, we propose a knowledge transfer scheme using synthetic tomograms. This not only allows accurate segmentation and denoising using less real-world data, but also increases segmentation accuracy. Finally, we make our datasets, as well as the code, publicly available.
Record number: A2021-456
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00138-021-01196-4
Online publication date: 27/04/2021
Online: https://doi.org/10.1007/s00138-021-01196-4
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97902
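The stacked design described above (one pipeline, two outputs, with the denoised image feeding the segmentation stage) can be mimicked by a toy numpy pipeline: here a mean filter stands in for the denoising sub-network and a threshold for the segmentation head. All names and operations are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def mean_denoise(img, k=3):
    """Stand-in for the denoising sub-network: a k x k mean filter
    with edge padding, so the output keeps the input shape."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def denoise_then_segment(img, thresh=0.5):
    """Stacked pipeline: the denoised image is both returned as an
    output in its own right and fed to the segmentation stage."""
    den = mean_denoise(img)
    seg = (den > thresh).astype(int)
    return den, seg
```

On a toy image with salt-and-pepper flips, thresholding the denoised output recovers the true mask where thresholding the raw input does not, which is the point the abstract makes about the denoised output improving segmentation.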
in Machine Vision and Applications > vol 32 n° 3 (May 2021) . - no. 75 [article]

Parsing of urban facades from 3D point clouds based on a novel multi-view domain / Wei Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
[article]
Title: Parsing of urban facades from 3D point clouds based on a novel multi-view domain
Document type: Article/Communication
Authors: Wei Wang; Yuan Xu; Yingchao Ren; Gang Wang
Year of publication: 2021
Pagination: pp 283-293
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Image processing
[Termes IGN] comparative analysis
[Termes IGN] deep learning
[Termes IGN] 3D georeferenced data
[Termes IGN] facade
[Termes IGN] data fusion
[Termes IGN] urban environment
[Termes IGN] classification accuracy
[Termes IGN] hierarchical segmentation
[Termes IGN] multi-scale segmentation
[Termes IGN] point cloud
Abstract: (author) Recent performance improvements in facade parsing from 3D point clouds have come from designing more complex network structures, which consume large computing resources and do not take full advantage of prior knowledge of facade structure. Instead, from the perspective of data distribution, we construct a new hierarchical mesh multi-view data domain based on the characteristics of facade objects to achieve fusion of deep-learning models and prior knowledge, thereby significantly improving segmentation accuracy. We comprehensively evaluate the current mainstream methods on the RueMonge 2014 data set and demonstrate the superiority of our method. The mean intersection-over-union index on the facade-parsing task reached 76.41%, which is 2.75% higher than the current best result. In addition, through comparative experiments, the reasons for the performance improvement of the proposed method are further analyzed.
Record number: A2021-333
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.4.283
Online publication date: 01/04/2021
Online: https://doi.org/10.14358/PERS.87.4.283
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97531
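The mean intersection-over-union figure quoted in the abstract (76.41%) is the standard segmentation metric: per-class IoU averaged over the classes present. A minimal sketch of the metric (standard practice, not tied to the paper's code) is:

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union over classes, skipping classes
    absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (truth == c)
        union = np.logical_or(p, t).sum()
        if union == 0:          # class occurs nowhere: do not count it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

For example, predicting [0, 0, 1, 1] against ground truth [0, 1, 1, 1] gives IoU 1/2 for class 0 and 2/3 for class 1, so mIoU = 7/12.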
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 4 (April 2021) . - pp 283-293 [article]

Copies (1)
Barcode / Call number / Support / Location / Section / Availability
105-2021041 / SL / Journal / Centre de documentation / Journals in reading room / Available

Digital surface model refinement based on projected images / Jiali Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 3 (March 2021)
[article]
Title: Digital surface model refinement based on projected images
Document type: Article/Communication
Authors: Jiali Wang; Yannan Chen
Year of publication: 2021
Pagination: pp 181-187
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Image processing
[Termes IGN] image matching
[Termes IGN] image correction
[Termes IGN] digital surface model
Abstract: (author) Currently, the practical way to remove errors and artifacts in digital surface models (DSMs) derived from stereo images is still to edit the affected patches manually or semi-automatically. Although some degree of automation can be achieved, DSM refinement remains a labor-intensive and expensive process. This paper proposes a new method to correct errors in a DSM and/or refine an existing coarse DSM. The method employs the concept of projected images together with image-matching techniques to correct and refine the affected regions of the DSM. Since projected images are used, the proposed method greatly simplifies the complicated coordinate transformations and pixel resampling; therefore, the errors and artifacts in the DSM can be amended more efficiently and accurately. Several experiments demonstrate the practical usefulness of the proposed method under various scenarios, and some potential improvements are also pointed out to accommodate different needs when refining DSMs.
Record number: A2021-242
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.3.181
Online publication date: 01/03/2021
Online: https://doi.org/10.14358/PERS.87.3.181
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97288
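The "image matching techniques" the abstract relies on are classically implemented as normalized cross-correlation between a template patch and candidate patches in the other image. The following is a minimal sketch of that matching step (an assumption about the technique, not the paper's exact method):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def best_match(template, image):
    """Exhaustive search: return the top-left offset in `image` whose
    patch correlates best with `template`."""
    th, tw = template.shape
    best, arg = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            score = ncc(template, image[i:i + th, j:j + tw])
            if score > best:
                best, arg = score, (i, j)
    return arg
```

In a refinement pipeline, the recovered offset would drive the correction of the affected DSM region; real implementations add subpixel interpolation and search-window limits.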
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 3 (March 2021) . - pp 181-187 [article]

Copies (1)
Barcode / Call number / Support / Location / Section / Availability
105-2021031 / SL / Journal / Centre de documentation / Journals in reading room / Available

Detection of pictorial map objects with convolutional neural networks / Raimund Schnürer in Cartographic journal (the), vol 58 n° 1 (February 2021)
[article]
Title: Detection of pictorial map objects with convolutional neural networks
Document type: Article/Communication
Authors: Raimund Schnürer; René Sieber; Jost Schmid-Lanter; et al.
Year of publication: 2021
Pagination: pp 50-68
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Image processing
[Termes IGN] image database
[Termes IGN] digital library
[Termes IGN] old map
[Termes IGN] digital map
[Termes IGN] classification by convolutional neural network
[Termes IGN] object detection
[Termes IGN] social media data
[Termes IGN] cartographic object
[Termes IGN] pictogram
Abstract: (author) In this work, realistically drawn objects are identified on digital maps by convolutional neural networks. For the first two experiments, 6200 images were retrieved from Pinterest. While alternating image input options, two binary classifiers based on Xception and InceptionResNetV2 were trained to separate maps and pictorial maps. Results showed that the accuracy is 95-97% to distinguish maps from other images, whereas maps with pictorial objects are correctly classified at rates of 87-92%. For a third experiment, bounding boxes of 3200 sailing ships were annotated in historic maps from different digital libraries. Faster R-CNN and RetinaNet were compared to determine the box coordinates, while adjusting anchor scales and examining configurations for small objects. A resulting average precision of 32% was obtained for Faster R-CNN and of 36% for RetinaNet. Research outcomes are relevant for trawling map images on the Internet and for enhancing the advanced search of digital map catalogues.
Record number: A2021-651
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/00087041.2020.1738112
Online publication date: 11/09/2020
Online: https://doi.org/10.1080/00087041.2020.1738112
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98381
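The average-precision figures quoted above rest on an intersection-over-union overlap test between predicted and annotated bounding boxes; a detection counts as correct when its IoU with a ground-truth box exceeds a threshold. A minimal sketch of that criterion (standard detection-evaluation practice, not code from the paper) is:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

For example, two 2x2 boxes offset by one pixel in each axis overlap in a single unit square, so their IoU is 1/7.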
in Cartographic journal (the) > vol 58 n° 1 (February 2021) . - pp 50-68 [article]

Copies (1)
Barcode / Call number / Support / Location / Section / Availability
030-2021011 / RAB / Journal / Centre de documentation / In reserve L003 / Available

Further records in this category:
- A simplified ICA-based local similarity stereo matching / Suting Chen in The Visual Computer, vol 37 n° 2 (February 2021)
- Apprentissage profond et IA pour l'amélioration de la robustesse des techniques de localisation par vision artificielle / Achref Elouni (2021)
- Learning-based representations and methods for 3D shape analysis, manipulation and reconstruction / Marie-Julie Rakotosaona (2021)
- Planimetric simplification and lexicographic optimal chains for 3D urban scene reconstruction / Julien Vuillamy (2021)
- Crater detection and registration of planetary images through marked point processes, multiscale decomposition, and region-based analysis / David Solarna in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
- CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
- Heliport detection using artificial neural networks / Emre Baseski in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
- A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
- Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
- Towards structureless bundle adjustment with two- and three-view structure approximation / Ewelina Rupnik in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
- A worldwide 3D GCP database inherited from 20 years of massive multi-satellite observations / Laure Chandelier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020)
- A hybrid deep learning–based model for automatic car extraction from high-resolution airborne imagery / Mehdi Khoshboresh Masouleh in Applied geomatics, vol 12 n° 2 (June 2020)
- An integrated approach to registration and fusion of hyperspectral and multispectral images / Yuan Zhou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020)