Descriptor
IGN terms > computer science > artificial intelligence > computer vision
computer vision
Documents available in this category (76)
The integration of GPS/BDS real-time kinematic positioning and visual–inertial odometry based on smartphones / Zun Niu in ISPRS International journal of geo-information, vol 10 n° 10 (October 2021)
[article]
Title: The integration of GPS/BDS real-time kinematic positioning and visual–inertial odometry based on smartphones
Document type: Article/Communication
Authors: Zun Niu; Fugui Guo; Qiangqiang Shuai; et al.
Year of publication: 2021
Pages: n° 699
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Navigation and positioning
[IGN terms] C++
[IGN terms] inertial measurement unit
[IGN terms] Kalman filter
[IGN terms] RINEX format
[IGN terms] odometer
[IGN terms] real-time kinematic positioning
[IGN terms] BeiDou positioning
[IGN terms] GNSS positioning
[IGN terms] positioning accuracy
[IGN terms] computer programming
[IGN terms] robot
[IGN terms] smartphone
[IGN terms] computer vision
Abstract (author): The real-time kinematic positioning technique (RTK) and visual–inertial odometry (VIO) are both promising positioning technologies. However, RTK degrades in GNSS-hostile areas, where global navigation satellite system (GNSS) signals are reflected and blocked, while VIO is affected by long-term drift. Integrating RTK and VIO can improve the accuracy and robustness of positioning. In recent years, smartphones equipped with multiple sensors have become commodities and can provide measurements for integrating RTK and VIO. This paper verifies the feasibility of integrating RTK and VIO using smartphones, and we propose an improved integration algorithm with better performance. We began by developing an Android application for data collection and then wrote a Python program to convert the data to a robot operating system (ROS) bag. Next, we established two ROS nodes to calculate the RTK results and accomplish the integration. Finally, we conducted experiments in urban areas to assess the smartphone-based integration of RTK and VIO. The results demonstrate that the integration improves the accuracy and robustness of positioning and that our improved algorithm reduces altitude deviation. Our work can aid navigation and positioning research, which is why we have open-sourced most of the code on our GitHub.
Record number: A2021-800
Author affiliation: non-IGN
Theme: POSITIONING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi10100699
Online publication date: 14/10/2021
Online: https://doi.org/10.3390/ijgi10100699
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98852
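The loosely coupled idea the abstract describes (VIO supplies relative motion, RTK supplies absolute fixes, and a filter blends them) can be sketched with a one-dimensional Kalman filter. This is an illustrative toy, not the authors' open-sourced implementation; all noise values and names are made up.

```python
# 1-D sketch of RTK/VIO fusion: VIO gives a drifting relative
# displacement (prediction), RTK gives absolute but intermittent
# position fixes (update).

def kalman_step(x, p, delta_vio, q, z_rtk=None, r=None):
    """One predict/update cycle of a scalar Kalman filter.

    x, p      : state estimate and its variance
    delta_vio : relative displacement reported by VIO
    q         : process noise accounting for VIO drift
    z_rtk, r  : optional absolute RTK fix and its variance
    """
    # Predict: propagate the state with the VIO increment.
    x = x + delta_vio
    p = p + q
    # Update: correct with the RTK fix when one is available.
    if z_rtk is not None:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z_rtk - x)  # blend prediction and measurement
        p = (1.0 - k) * p        # shrink the uncertainty
    return x, p

# VIO over-reports by 0.1 m per step; RTK fixes arrive every 5th step.
x, p = 0.0, 1.0
true_pos = 0.0
for step in range(1, 21):
    true_pos += 1.0
    fix = true_pos if step % 5 == 0 else None
    x, p = kalman_step(x, p, 1.1, 0.05, fix, 0.04)
print(round(x, 2))  # stays close to true_pos thanks to the RTK updates
```

In the full system the state is a pose with attitude and the filter runs per GNSS epoch; the scalar version only shows how absolute RTK updates bound the VIO drift.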
in ISPRS International journal of geo-information > vol 10 n° 10 (October 2021) . - n° 699
[article]
Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification / Ming Cong in Geocarto international, vol 36 n° 18 ([01/10/2021])
[article]
Title: Unsupervised self-adaptive deep learning classification network based on the optic nerve microsaccade mechanism for unmanned aerial vehicle remote sensing image classification
Document type: Article/Communication
Authors: Ming Cong; Zhiye Wang; Yiting Tao; et al.
Year of publication: 2021
Pages: pp 2065 - 2084
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] cluster analysis
[IGN terms] colour vision
[IGN terms] unsupervised classification
[IGN terms] classification by convolutional neural network
[IGN terms] image understanding
[IGN terms] image sampling
[IGN terms] digital image filtering
[IGN terms] UAV-acquired image
[IGN terms] vision
[IGN terms] computer vision
Abstract (author): Unmanned aerial vehicle remote sensing images need to be precisely and efficiently classified. However, complex ground scenes produced by ultra-high ground resolution, data uniqueness caused by multi-perspective observations, and the need for manual labelling make it difficult for current popular deep learning networks to obtain reliable references from heterogeneous samples. To address these problems, this paper proposes an optic nerve microsaccade (ONMS) classification network, developed based on multiple dilated convolutions. ONMS first applies a Laplacian of Gaussian filter to find typical features of ground objects and establishes class labels using adaptive clustering. Then, using an image pyramid, multi-scale image data are mapped to the class labels adaptively to generate homologous reliable samples. Finally, an end-to-end multi-scale neural network is applied for classification. Experimental results show that ONMS significantly reduces sample labelling costs while retaining high cognitive performance, classification accuracy, and noise resistance, indicating that it has significant application advantages.
Record number: A2021-707
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/10106049.2019.1687593
Online publication date: 07/11/2019
Online: https://doi.org/10.1080/10106049.2019.1687593
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98602
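The first ONMS stage summarized above runs a Laplacian-of-Gaussian (LoG) filter to pick out blob-like ground-object features before clustering them into candidate labels. A minimal NumPy sketch of such a filter; the kernel size and sigma are illustrative assumptions, not the paper's values.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, adjusted to zero mean."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero mean: flat regions give zero response

def filter2d(img, kernel):
    """Naive 'valid' 2-D filtering (the LoG kernel is symmetric)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A flat image with one bright blob: the LoG response peaks at the blob.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
resp = filter2d(img, log_kernel())
peak = np.unravel_index(np.abs(resp).argmax(), resp.shape)
print(peak)  # near the blob centre, offset by the kernel half-width
```

In ONMS the filter responses feed an adaptive clustering step that assigns class labels; here only the feature-detection stage is shown.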
in Geocarto international > vol 36 n° 18 [01/10/2021] . - pp 2065 - 2084
[article]
3D map creation using crowdsourced GNSS data / Terence Lines in Computers, Environment and Urban Systems, vol 89 (September 2021)
[article]
Title: 3D map creation using crowdsourced GNSS data
Document type: Article/Communication
Authors: Terence Lines; Anahid Basiri
Year of publication: 2021
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN terms] participatory approach
[IGN terms] bootstrap (statistics)
[IGN terms] 3D mapping
[IGN terms] GNSS data
[IGN terms] 2.5D geolocated data
[IGN terms] building height
[IGN terms] application programming interface
[IGN terms] logistic regression
[IGN terms] GNSS signal
[IGN terms] multipath
[IGN terms] computer vision
Abstract (author): 3D maps are increasingly useful for many applications such as drone navigation, emergency services, and urban planning. However, creating 3D maps and keeping them up to date with existing technologies, such as laser scanners, is expensive. This paper proposes and implements a novel approach to generate 2.5D maps (also known as 3D level-of-detail (LOD) 1 maps) for free using Global Navigation Satellite System (GNSS) signals, which are globally available and are blocked only by obstacles between the satellites and the receivers. This enables us to find the patterns of GNSS signal availability and create 3D maps. The paper applies algorithms to GNSS signal strength patterns based on a bootstrapped technique that iteratively trains the signal classifiers while generating the map. Results of the proposed technique demonstrate the ability to create 3D maps from automatically processed GNSS data. They show that the third dimension, i.e. the height of the buildings, can be estimated to better than 5 metre accuracy, which is the benchmark recommended by the CityGML standard.
Record number: A2021-535
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1016/j.compenvurbsys.2021.101671
Online publication date: 19/06/2021
Online: https://doi.org/10.1016/j.compenvurbsys.2021.101671
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97998
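The geometric intuition behind height-from-blockage can be sketched as follows: a building of height h blocks satellites whose elevation angle, seen by a receiver at horizontal distance d from the wall, is below atan(h / d), so inverting the observed cut-off elevation yields a height estimate. The function names and single-building geometry are illustrative assumptions, not the paper's classifier pipeline.

```python
import math

def cutoff_elevation(height_m, distance_m):
    """Lowest elevation angle (degrees) still visible over the roofline."""
    return math.degrees(math.atan2(height_m, distance_m))

def estimate_height(cutoff_deg, distance_m):
    """Invert the cut-off elevation observed in the signal-availability data."""
    return distance_m * math.tan(math.radians(cutoff_deg))

# A 21 m building seen from 30 m away blocks satellites below ~35 degrees.
cut = cutoff_elevation(21.0, 30.0)
print(round(estimate_height(cut, 30.0), 1))  # recovers 21.0
```

In the paper this inversion is driven by classifying many crowdsourced signal-strength observations as blocked or visible, rather than by a single known cut-off angle.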
in Computers, Environment and Urban Systems > vol 89 (September 2021)
[article]
GIScience integrated with computer vision for the examination of old engravings and drawings / Motti Zohar in International journal of geographical information science IJGIS, vol 35 n° 9 (September 2021)
[article]
Title: GIScience integrated with computer vision for the examination of old engravings and drawings
Document type: Article/Communication
Authors: Motti Zohar; Ilan Shimshoni
Year of publication: 2021
Pages: pp 1703 - 1724
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN terms] 3D map
[IGN terms] deep map
[IGN terms] drawing
[IGN terms] data extraction
[IGN terms] Israel
[IGN terms] Jerusalem
[IGN terms] cultural heritage
[IGN terms] landscape
[IGN terms] RANSAC (algorithm)
[IGN terms] geographic information system
[IGN terms] computer vision
Abstract (author): Landscape reconstructions and deep maps are two major approaches in cultural heritage studies. In general, they require the use of historical visual sources such as maps, graphic artworks, and photographs presenting areal scenes, from which one can extract spatial information. However, photographs, the most accurate and reliable source for scenery reconstruction, are available only from the second half of the 19th century onward. Thus, for earlier periods one can rely only on old artworks. Nevertheless, the accuracy and inclusiveness of old artworks are often questionable and must be verified carefully. In this paper, we use GIScience methods with computer-vision capabilities to interrogate old engravings and drawings, and we develop a new approach for extracting spatial information from these scenic artworks. We inspected four old depictions of Jerusalem and Tiberias (Israel) created between the 17th and 19th centuries. Using visibility analysis and a RANSAC algorithm, we identified the locations of the artists when they drew the artworks and evaluated the accuracy of their final products. Finally, we re-projected features digitized from a 3D map onto the drawing canvases, thus embedding features not originally drawn. These were then identified, enabling potential extraction of the spatial information they may reflect.
Record number: A2021-592
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1874957
Online publication date: 25/02/2021
Online: https://doi.org/10.1080/13658816.2021.1874957
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98213
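The RANSAC step mentioned in the abstract searches for a model on which the largest subset of observations agrees, which makes it robust to the gross errors old artworks contain. A generic sketch, shown here for fitting a 2-D line rather than the paper's camera-pose model; all thresholds and counts are illustrative.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Return (slope, intercept) supported by the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        # Minimal sample: two points define a candidate line.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical pairs for this simple line model
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Count points that agree with the candidate within tolerance.
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

# 20 points on y = 2x + 1 plus 5 gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)]
pts += [(3, 40), (7, -9), (11, 80), (15, 0), (18, 99)]
a, b = ransac_line(pts)
print(round(a, 3), round(b, 3))  # = 2.0 1.0
```

For viewpoint recovery the minimal sample and the model change (point correspondences and a pose, scored by reprojection or visibility agreement), but the hypothesize-and-verify loop is the same.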
in International journal of geographical information science IJGIS > vol 35 n° 9 (September 2021) . - pp 1703 - 1724
[article]
Copies (1): barcode 079-2021091 | call number SL | medium Revue | location Centre de documentation | section Revues en salle | available
Digital camera calibration for cultural heritage documentation: the case study of a mass digitization project of religious monuments in Cyprus / Evagoras Evagorou in European journal of remote sensing, vol 54 sup 1 (2021)
[article]
Title: Digital camera calibration for cultural heritage documentation: the case study of a mass digitization project of religious monuments in Cyprus
Document type: Article/Communication
Authors: Evagoras Evagorou; Christodoulos Mettas; Athos Agapiou; et al.
Year of publication: 2021
Pages: pp 6 - 17
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Image and data acquisition
[IGN terms] Agisoft Photoscan
[IGN terms] self-calibration
[IGN terms] Cyprus
[IGN terms] image distortion
[IGN terms] big data
[IGN terms] religious building
[IGN terms] instrument calibration
[IGN terms] cultural heritage
[IGN terms] ground control point
[IGN terms] image texture
[IGN terms] computer vision
[IGN terms] 3D visualisation
Abstract (author): The paper summarizes the methodology followed to evaluate the accuracy of different digitization methods for ecclesiastical monuments in 3D computer-vision form and stresses the importance of calibrating the photographic equipment. In this study, a set of images was taken with a CANON EOS M5 digital camera, and the internal calibration parameters (horizontal and vertical focal length (fx, fy), principal point coordinates (x0, y0), radial distortion coefficients (K1, K2, K3), tangential distortion coefficients (P1, P2), and the affinity and shear terms (b1, b2)) were estimated. These parameters were calculated using different software applications and then analyzed. For the calibration procedure, 3D texture models were built with the commercial Agisoft software based on (a) the aforementioned calibration parameters and (b) the self-calibration process. The overall accuracy (root mean square, RMS) between these models, obtained by comparing known geo-referenced ground control points (GCPs), is presented using the CloudCompare software. The results indicate that the internal calibration parameters of a digital camera used for documentation purposes are essential and should be systematically applied.
Record number: A2021-816
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Atlas
DOI: 10.1080/22797254.2020.1810131
Online publication date: 02/09/2020
Online: https://doi.org/10.1080/22797254.2020.1810131
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98902
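The parameters listed in the abstract (fx, fy, x0, y0, K1..K3, P1, P2) belong to the standard Brown-Conrady camera model. A minimal sketch of applying it to normalised image coordinates; the numeric coefficients below are made up for illustration and are not the values estimated in the study.

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Map ideal normalised coords (x, y) to distorted coords."""
    r2 = x * x + y * y
    # Radial term: polynomial in r^2 with coefficients K1, K2, K3.
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Tangential (decentering) terms with coefficients P1, P2.
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

def to_pixels(xd, yd, fx, fy, x0, y0):
    """Project distorted normalised coords to pixel coordinates."""
    return fx * xd + x0, fy * yd + y0

# With all distortion coefficients zero the mapping is the identity.
assert distort(0.1, 0.2, 0, 0, 0, 0, 0) == (0.1, 0.2)

xd, yd = distort(0.1, 0.2, -0.2, 0.05, 0.0, 0.001, 0.001)
print(to_pixels(xd, yd, 1200.0, 1200.0, 640.0, 480.0))
```

Pre-calibrating these coefficients, rather than relying on self-calibration inside the photogrammetry software, is exactly the choice the paper evaluates.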
in European journal of remote sensing > vol 54 sup 1 (2021) . - pp 6 - 17
[article]
A shape transformation-based dataset augmentation framework for pedestrian detection / Zhe Chen in International journal of computer vision, vol 129 n° 4 (April 2021)
A skyline-based approach for mobile augmented reality / Mehdi Ayadi in The Visual Computer, vol 37 n° 4 (April 2021)
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
Lightweight convolutional neural network-based pedestrian detection and re-identification in multiple scenarios / Xiao Ke in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Unsupervised deep representation learning for real-time tracking / Ning Wang in International journal of computer vision, vol 129 n° 2 (February 2021)
Cartographie dense et compacte par vision RGB-D pour la navigation d’un robot mobile / Bruce Canovas (2021)
Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles / Abdelhak Loukkal (2021)
Exploration of reinforcement learning algorithms for autonomous vehicle visual perception and control / Florence Carton (2021)