Descriptor
IGN terms > natural sciences > physics > image processing > pattern recognition
pattern recognition (reconnaissance de formes). Synonym(s): reconnaissance des formes
Documents available in this category (302)



Deep learning feature representation for image matching under large viewpoint and viewing direction change / Lin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Title: Deep learning feature representation for image matching under large viewpoint and viewing direction change
Document type: Article/Communication | Authors: Lin Chen, Author; Christian Heipke, Author | Publication year: 2022 | Pages: pp. 94-112 | General note: bibliography | Language(s): English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] pattern recognition
[IGN terms] Siamese neural network
[IGN terms] SIFT (algorithm)
Abstract: (author) Feature-based image matching has been a research focus in photogrammetry and computer vision for decades, as it is the basis for many applications where multi-view geometry is needed. A typical feature-based image matching algorithm contains five steps: feature detection, affine shape estimation, orientation assignment, description and descriptor matching. This paper contains innovative work in different steps of feature matching based on convolutional neural networks (CNN). For the affine shape estimation and orientation assignment, the main contribution of this paper is twofold. First, we define a canonical shape and orientation for each feature. As a consequence, instead of the usual Siamese CNN, only single-branch CNNs need to be employed to learn the affine shape and orientation parameters, which turns the related tasks from supervised into self-supervised learning problems, removing the need for known matching relationships between features. Second, the affine shape and orientation are solved simultaneously. To the best of our knowledge, this is the first time these two modules are reported to have been successfully trained together. In addition, for the descriptor learning part, a new weak match finder is suggested to better explore the intra-variance of the appearance of matched features. For any input feature patch, a transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features; they are subsequently used in the standard descriptor learning framework. The proposed modules are integrated into an inference pipeline to form the proposed feature matching algorithm. The algorithm is evaluated on standard benchmarks and is used to solve for the parameters of image orientation of aerial oblique images. It is shown that deep-learning feature-based image matching leads to more registered images, more reconstructed 3D points and a more stable block geometry than conventional methods. The code is available at https://github.com/Childhoo/Chen_Matcher.git.
Record number: A2022-502 | Author affiliation: non-IGN | Theme: IMAGERY | Nature: Article | nature-HAL: ArtAvecCL-RevueIntern | DOI: 10.1016/j.isprsjprs.2022.06.003 | Online publication date: 14/06/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.06.003 | Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101000
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp. 94-112 [article]
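For orientation on the descriptor-matching step named among the five stages in the record above, the following is a minimal NumPy sketch of a conventional baseline matcher (mutual nearest neighbours plus Lowe's ratio test), not the authors' learned pipeline; the function name, ratio threshold and toy descriptors are illustrative assumptions.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) and (M, D) arrays of L2-normalised descriptors.
    Returns a list of (i, j) index pairs judged to match.
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)

    # Best and second-best candidate in B for every descriptor in A.
    nn_ab = np.argsort(dists, axis=1)[:, :2]
    # Best candidate in A for every descriptor in B (for the mutual check).
    nn_ba = np.argmin(dists, axis=0)

    matches = []
    for i, (j, j2) in enumerate(nn_ab):
        # Ratio test: the best match must be clearly better than the runner-up,
        # and the pair must be mutual nearest neighbours.
        if dists[i, j] < ratio * dists[i, j2] and nn_ba[j] == i:
            matches.append((i, j))
    return matches

# Toy usage with random unit-norm descriptors.
rng = np.random.default_rng(0)
a = rng.normal(size=(50, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(60, 128)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(len(match_descriptors(a, b)))
```
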
Adversarial defenses for object detectors based on Gabor convolutional layers / Abdollah Amirkhani in The Visual Computer, vol 38 n° 6 (June 2022)
[article]
Title: Adversarial defenses for object detectors based on Gabor convolutional layers
Document type: Article/Communication | Authors: Abdollah Amirkhani, Author; Mohammad Karimi, Author | Publication year: 2022 | Pages: pp. 1929-1944 | General note: bibliography | Language(s): English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] Gabor filter
[IGN terms] pattern recognition
[IGN terms] generative adversarial network
Abstract: (author) Despite their many advantages and positive features, deep neural networks are extremely vulnerable to adversarial attacks. This drawback has substantially reduced the adversarial accuracy of visual object detectors. To make these object detectors robust to adversarial attacks, a new Gabor filter-based method has been proposed in this paper. This method has then been applied to YOLOv3 with different backbones, to SSD with different input sizes, and to FRCNN; thus, six robust object detector models have been presented. In order to evaluate the efficacy of the models, they have been subjected to adversarial training via three types of targeted attacks (TOG-fabrication, TOG-vanishing, and TOG-mislabeling) and three types of untargeted random attacks (DAG, RAP, and UEA). The best average accuracy (49.6%) was achieved by the YOLOv3-d model on the PASCAL VOC dataset. This is far superior to the best accuracy obtained in previous works (25.4%). Empirical results show that, while the presented approach improves the adversarial accuracy of the object detector models, it does not affect the performance of these models on clean data.
Record number: A2022-382 | Author affiliation: non-IGN | Theme: IMAGERY | Nature: Article | DOI: 10.1007/s00371-021-02256-6 | Online publication date: 24/07/2021
Online: https://doi.org/10.1007/s00371-021-02256-6 | Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100651
in The Visual Computer > vol 38 n° 6 (June 2022) . - pp. 1929-1944 [article]
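As a rough illustration of the building block named in this record, the sketch below constructs a real-valued Gabor kernel and applies a small orientation bank as a fixed 2-D convolution; it is not the authors' defended detector architecture, and the kernel parameters and test image are arbitrary example values.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, sigma=2.5, theta=0.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A tiny "Gabor layer": filter one image with a bank of fixed orientations.
image = np.random.default_rng(1).random((64, 64))
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
responses = np.stack([convolve2d(image, k, mode="same") for k in bank])
print(responses.shape)  # (4, 64, 64): one response map per orientation
```
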
An empirical study on the effects of temporal trends in spatial patterns on animated choropleth maps / Paweł Cybulski in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
[article]
Title: An empirical study on the effects of temporal trends in spatial patterns on animated choropleth maps
Document type: Article/Communication | Authors: Paweł Cybulski, Author | Publication year: 2022 | Pages: n° 273 | General note: bibliography | Language(s): English (eng)
Descriptors: [IGN terms] cluster analysis
[IGN terms] visual analysis
[IGN terms] choropleth map
[IGN terms] animated cartography
[IGN terms] map reading
[IGN terms] eye tracking
[IGN terms] pattern recognition
[IGN terms] cartographic visualization
[IGN subject headings] Cartology
Abstract: (author) Animated cartographic visualization incorporates the concept of geomedia presented in this Special Issue. The presented study aims to examine the effectiveness of spatial pattern and temporal trend recognition on animated choropleth maps. In a controlled laboratory experiment with participants and eye tracking, fifteen animated maps were used to show different spatial patterns and temporal trends. The participants’ task was to correctly detect the patterns and trends on a choropleth map. The study results show that effective spatial pattern and temporal trend recognition on a choropleth map is related to participants’ visual behavior. Visual attention clustered in the central part of the choropleth map supports effective spatio-temporal relationship recognition. The larger the area covered by the fixation cluster, the higher the probability of correct temporal trend and spatial pattern recognition. However, animated choropleth maps are more suitable for presenting temporal trends than spatial patterns. The difficulty of correctly recognizing spatio-temporal relationships might be a reason for implementing techniques that support effective visual search, such as highlighting, cartographic redundancy, or interactive tools. For end-users, the presented study reveals the necessity of applying a specific visual strategy: focusing on the central part of the map is the most effective strategy for the recognition of spatio-temporal relationships.
Record number: A2022-358 | Author affiliation: non-IGN | Theme: GEOMATICS | Nature: Article | nature-HAL: ArtAvecCL-RevueIntern | DOI: 10.3390/ijgi11050273 | Online publication date: 20/04/2022
Online: https://doi.org/10.3390/ijgi11050273 | Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100571
in ISPRS International journal of geo-information > vol 11 n° 5 (May 2022) . - n° 273 [article]
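The study above relates correct recognition to the area covered by the central fixation cluster. Purely as an illustration of how such a measure could be computed (this is not the paper's protocol), the sketch below keeps fixation points that fall inside a central map window and returns their convex-hull area; the map size, window fraction and gaze data are invented.

```python
import numpy as np
from scipy.spatial import ConvexHull

def central_fixation_area(fixations, map_size=(800, 600), central_frac=0.5):
    """Area (px^2) of the convex hull of fixations inside the central window.

    fixations: (N, 2) array of (x, y) gaze positions in map pixel coordinates.
    """
    w, h = map_size
    # Bounds of the central window covering `central_frac` of each dimension.
    x0, x1 = w * (1 - central_frac) / 2, w * (1 + central_frac) / 2
    y0, y1 = h * (1 - central_frac) / 2, h * (1 + central_frac) / 2
    inside = fixations[
        (fixations[:, 0] >= x0) & (fixations[:, 0] <= x1)
        & (fixations[:, 1] >= y0) & (fixations[:, 1] <= y1)
    ]
    if len(inside) < 3:            # a hull needs at least three points
        return 0.0
    return ConvexHull(inside).volume  # for 2-D points, .volume is the area

# Toy usage with synthetic fixations drawn around the map centre.
rng = np.random.default_rng(2)
gaze = rng.normal(loc=(400, 300), scale=60, size=(40, 2))
print(round(central_fixation_area(gaze), 1))
```
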
Linear building pattern recognition in topographical maps combining convex polygon decomposition / Zhiwei Wei in Geocarto international, vol 37, n° unknown [25/01/2022]
[article]
Title: Linear building pattern recognition in topographical maps combining convex polygon decomposition
Document type: Article/Communication | Authors: Zhiwei Wei, Author; Su Ding, Author; Lu Cheng, Author; et al., Author | Publication year: 2022 | General note: bibliography | Language(s): English (eng)
Descriptors: [IGN subject headings] Geomatics
[IGN terms] topographic map
[IGN terms] construction
[IGN terms] decomposition
[IGN terms] building detection
[IGN terms] linear shape
[IGN terms] automated cartographic generalisation
[IGN terms] Ordnance Survey (UK)
[IGN terms] polygon
[IGN terms] pattern recognition
Abstract: (author) Building patterns are crucial for urban form understanding, automated map generalization, and 3D city model visualization. The existing studies have recognized various building patterns based on visual perception rules in which buildings are considered as a whole. However, some visually aware patterns may fail to be recognized with these approaches because human vision has also been shown to be a part-based system. This paper proposes an approach for linear building pattern recognition that combines convex polygon decomposition. Linear building patterns, including collinear patterns and curvilinear patterns, are defined according to the proximity, similarity, and continuity between buildings. Linear building patterns are then recognized by combining convex polygon decomposition, in which a building can be decomposed into sub-buildings for pattern recognition. A novel node concavity measure, based on polygon skeletons, is developed for the decomposition; it is applicable to building polygons with or without holes. Buildings’ orthogonal features are also considered in the decomposition. Two datasets collected from Ordnance Survey (OS) were used in the experiments to verify the effectiveness of the proposed approach. The results indicate that our approach achieves 25.57% higher precision and 32.23% higher recall in collinear pattern recognition and 15.67% higher precision and 18.52% higher recall in curvilinear pattern recognition when compared to existing approaches. Recognition of other kinds of building patterns, including T-shaped and C-shaped patterns, by combining convex polygon decomposition is also discussed.
Record number: A2022-263 | Author affiliation: non-IGN | Theme: GEOMATICS | Nature: Article | DOI: 10.1080/10106049.2022.2055794 | Online publication date: 27/03/2022
Online: https://doi.org/10.1080/10106049.2022.2055794 | Electronic resource format: 27/03/2022
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100260
in Geocarto international > vol 37, n° unknown [25/01/2022] [article]
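The approach above defines collinear patterns through proximity, similarity and continuity between buildings. As a much-simplified sketch of those criteria only (the paper's method additionally decomposes polygons into convex parts and handles curvilinear, T-shaped and C-shaped patterns), the code below tests whether an ordered chain of building centroids is nearly collinear; all thresholds and coordinates are illustrative.

```python
import numpy as np

def is_collinear_chain(centroids, max_gap=60.0, max_turn_deg=15.0):
    """Rudimentary collinear-pattern test on an ordered chain of centroids.

    centroids: (N, 2) array of building centroids, ordered along the chain.
    max_gap: maximum distance between consecutive buildings (proximity).
    max_turn_deg: maximum change of direction between successive segments
        (continuity); small values enforce near-collinearity.
    """
    pts = np.asarray(centroids, dtype=float)
    if len(pts) < 3:
        return False
    seg = np.diff(pts, axis=0)                 # consecutive displacement vectors
    gaps = np.linalg.norm(seg, axis=1)
    if np.any(gaps > max_gap):                 # proximity criterion violated
        return False
    headings = np.degrees(np.arctan2(seg[:, 1], seg[:, 0]))
    turns = np.abs(np.diff(headings))
    turns = np.minimum(turns, 360.0 - turns)   # wrap angle differences
    return bool(np.all(turns <= max_turn_deg)) # continuity criterion

# Toy usage: three roughly aligned buildings vs. a sharp dog-leg.
row = [(0, 0), (50, 3), (102, 5)]
bend = [(0, 0), (50, 3), (60, 55)]
print(is_collinear_chain(row), is_collinear_chain(bend))  # True False
```
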
Title: Event-driven feature detection and tracking for visual SLAM
Document type: Thesis/HDR | Authors: Ignacio Alzugaray, Author | Publisher: Zurich: Eidgenossische Technische Hochschule ETH - Ecole Polytechnique Fédérale de Zurich EPFZ | Publication year: 2022 | General note: bibliography; thesis submitted to attain the degree of Doctor of Sciences of ETH Zurich | Language(s): English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] event camera
[IGN terms] simultaneous localization and mapping
[IGN terms] object detection
[IGN terms] feature extraction
[IGN terms] blurred image
[IGN terms] pattern recognition
[IGN terms] image sequence
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) Traditional frame-based cameras have become the de facto sensor of choice for a multitude of applications employing Computer Vision due to their compactness, low cost, ubiquity, and ability to provide information-rich exteroceptive measurements. Despite their dominance in the field, these sensors exhibit limitations in common, real-world scenarios where detrimental effects, such as motion blur during high-speed motion or over-/underexposure in scenes with poor illumination, are prevalent. Challenging the dominance of traditional cameras, the recent emergence of bioinspired event cameras has opened up exciting research possibilities for robust perception due to their high-speed sensing, High-Dynamic-Range capabilities, and low power consumption. Despite their promising characteristics, event cameras present numerous challenges due to their unique output: a sparse and asynchronous stream of events, only capturing incremental perceptual changes at individual pixels. This radically different sensing modality renders most of the traditional Computer Vision algorithms incompatible without substantial prior adaptation, as they are initially devised for processing sequences of images captured at a fixed frame rate. Consequently, the bulk of existing event-based algorithms in the literature have opted to discretize the event stream into batches and process them sequentially, effectively reverting to frame-like representations in an attempt to mimic the processing of image sequences from traditional sensors. Such event-batching algorithms have demonstrably outperformed other alternative frame-based algorithms in scenarios where the quality of conventional intensity images is severely compromised, unveiling the inherent potential of these new sensors and popularizing them. To date, however, many newly designed event-based algorithms still rely on a contrived discretization of the event stream for its processing, suggesting that the full potential of event cameras is yet to be harnessed by processing their output more naturally. This dissertation departs from the mere adaptation of traditional frame-based approaches and advocates instead for the development of new algorithms integrally designed for event cameras to fully exploit their advantageous characteristics. In particular, the focus of this thesis lies on describing a series of novel strategies and algorithms that operate in a purely event-driven fashion, i.e., processing each event as soon as it gets generated, without any intermediate buffering of events into arbitrary batches and thus avoiding any additional latency in their processing. Such event-driven processes present additional challenges compared to their simpler event-batching counterparts, which, in turn, can largely be attributed to the requirement to produce reliable results at event rate, entailing significant practical implications for their deployment in real-world applications. The body of this thesis addresses the design of event-driven algorithms for efficient and asynchronous feature detection and tracking with event cameras, covering along the way crucial elements of pattern recognition and data association for this emerging sensing modality. In particular, a significant portion of this thesis is devoted to the study of visual corners for event cameras, leading to the design of innovative event-driven approaches for their detection and tracking as corner-events. Moreover, the presented research also investigates the use of generic patch-based features and their event-driven tracking for the efficient retrieval of high-quality feature tracks. All the developed algorithms in this thesis serve as crucial stepping stones towards a completely event-driven, feature-based Simultaneous Localization And Mapping (SLAM) pipeline. This dissertation builds upon established concepts from state-of-the-art, event-driven methods and further explores the limits of the event-driven paradigm in realistic monocular setups. While the presented approaches solely rely on event data, the gained insights are seminal to future investigations targeting the combination of event-based vision with other, complementary sensing modalities. The research conducted here paves the way towards a new family of event-driven algorithms that operate efficiently, robustly, and in a scalable manner, envisioning a potential paradigm shift in event-based Computer Vision.
Contents:
1- Introduction
2- Contribution
3- Conclusion and outlook
Record number: 28699 | Author affiliation: non-IGN | Theme: IMAGERY | Nature: Foreign thesis | Thesis note: PhD Thesis: Sciences: ETH Zurich: 2022 | DOI: none
Online: https://www.research-collection.ethz.ch/handle/20.500.11850/541700 | Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100470
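The dissertation's central idea is to process each event as soon as it is generated rather than buffering events into frame-like batches. As a generic sketch of that event-driven style (not the thesis' detector or tracker), the code below updates a per-pixel surface of active events one event at a time and derives a crude local-activity score; the sensor size, event values and thresholds are invented.

```python
import numpy as np
from collections import namedtuple

# A DVS-style event: pixel coordinates, timestamp (s), and polarity (+1/-1).
Event = namedtuple("Event", ["x", "y", "t", "p"])

class SurfaceOfActiveEvents:
    """Per-pixel map of the latest event timestamp, updated event by event."""

    def __init__(self, width, height):
        self.sae = np.full((height, width), -np.inf)  # last timestamp per pixel

    def update(self, ev):
        """Process one event as soon as it arrives (no batching)."""
        self.sae[ev.y, ev.x] = ev.t

    def recent_activity(self, ev, radius=3, horizon=0.01):
        """Count neighbouring pixels that fired within `horizon` seconds,
        a crude local measure one could threshold to flag interesting events."""
        y0, y1 = max(ev.y - radius, 0), ev.y + radius + 1
        x0, x1 = max(ev.x - radius, 0), ev.x + radius + 1
        patch = self.sae[y0:y1, x0:x1]
        return int(np.count_nonzero(ev.t - patch <= horizon))

# Toy event stream on a 240x180 sensor.
stream = [Event(120, 90, 0.0010, 1), Event(121, 90, 0.0012, 1), Event(12, 30, 0.0100, -1)]
surface = SurfaceOfActiveEvents(240, 180)
for ev in stream:
    surface.update(ev)
    print(surface.recent_activity(ev))
```
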
Searching for an optimal hexagonal shaped enumeration unit size for effective spatial pattern recognition in choropleth maps / Izabela Karsznia in ISPRS International journal of geo-information, vol 10 n° 9 (September 2021)
A typification method for linear building groups based on stroke simplification / Xiao Wang in Geocarto international, vol 36 n° 15 [15/08/2021]
The point-descriptor-precedence representation for point configurations and movements / Amna Qayyum in International journal of geographical information science IJGIS, vol 35 n° 7 (July 2021)
Trajectory and image-based detection and identification of UAV / Yicheng Liu in The Visual Computer, vol 37 n° 7 (July 2021)
Reconnaissance automatique d’objets pour le jumeau numérique ferroviaire à partir d’imagerie aérienne [Automatic object recognition for the railway digital twin from aerial imagery] / Valentin Desbiolles in XYZ, n° 167 (June 2021)
Multiple convolutional features in Siamese networks for object tracking / Zhenxi Li in Machine Vision and Applications, vol 32 n° 3 (May 2021)
Lightweight convolutional neural network-based pedestrian detection and re-identification in multiple scenarios / Xiao Ke in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Recognition of varying size scene images using semantic analysis of deep activation maps / Shikha Gupta in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021)
Emotional habitat: mapping the global geographic distribution of human emotion with physical environmental factors using a species distribution model / Yizhuo Li in International journal of geographical information science IJGIS, vol 35 n° 2 (February 2021)