Descriptor
IGN descriptor terms > computer science > artificial intelligence > computer vision
computer vision
Comparing pedestrians’ gaze behavior in desktop and in real environments / Weihua Dong in Cartography and Geographic Information Science, Vol 47 n° 5 (September 2020)
[article]
Title: Comparing pedestrians' gaze behavior in desktop and in real environments
Document type: Article/Communication
Authors: Weihua Dong; Hua Liao; Bing Liu; et al.
Publication year: 2020
Pages: pp 432 - 451
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN descriptor terms] comparative analysis
[IGN descriptor terms] visual analysis
[IGN descriptor terms] behavior
[IGN descriptor terms] urban space
[IGN descriptor terms] map reading
[IGN descriptor terms] virtual world
[IGN descriptor terms] pedestrian navigation
[IGN descriptor terms] eye tracking
[IGN descriptor terms] pedestrian
[IGN descriptor terms] statistical test
[IGN descriptor terms] work
[IGN descriptor terms] computer vision
[IGN subject headings] Geovisualization
Abstract (authors): This research is motivated by the widespread use of desktop environments in the lab and by the recent trend of conducting real-world eye-tracking experiments to investigate pedestrian navigation. Despite the significant differences between real-world and desktop environments, how pedestrians' visual behavior in real environments differs from that in desktop environments is still not well understood. Here, we report a study that recorded eye movements for a total of 82 participants while they performed five common navigation tasks in an unfamiliar urban environment (N = 39) and in a desktop environment (N = 43). By analyzing where the participants allocated their visual attention, what objects they fixated on, and how they transferred their visual attention among objects during navigation, we found similarities and significant differences in the general fixation indicators, spatial fixation distributions and attention to the objects of interest. The results contribute to the ongoing debate over the validity of using desktop environments to investigate pedestrian navigation by providing insights into how pedestrians allocate their attention to visual stimuli to accomplish navigation tasks in the two environments.
Record number: A2020-488
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/15230406.2020.1762513
Online publication date: 29/05/2020
Online: https://doi.org/10.1080/15230406.2020.1762513
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95658
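The "general fixation indicators" the study compares typically include fixation count and mean fixation duration. A minimal illustrative sketch of computing such indicators, with hypothetical fixation durations (not the authors' code or data):

```python
def fixation_indicators(durations_ms):
    """Basic eye-tracking indicators from a list of fixation durations (ms)."""
    count = len(durations_ms)
    mean_duration = sum(durations_ms) / count if count else 0.0
    return {"fixation_count": count, "mean_duration_ms": mean_duration}

# Hypothetical per-participant fixation durations in the two environments
desktop = [220, 180, 310, 250]
real_world = [150, 140, 200]
print(fixation_indicators(desktop))     # -> {'fixation_count': 4, 'mean_duration_ms': 240.0}
print(fixation_indicators(real_world))  # -> {'fixation_count': 3, 'mean_duration_ms': 163.33...}
```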
in Cartography and Geographic Information Science > Vol 47 n° 5 (September 2020). - pp 432 - 451
[article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
032-2020051 | SL | Journal | Documentation centre | Journals room | Available

A novel deep learning instance segmentation model for automated marine oil spill detection / Shamsudeen Temitope Yekeen in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: A novel deep learning instance segmentation model for automated marine oil spill detection
Document type: Article/Communication
Authors: Shamsudeen Temitope Yekeen; Abdul‐Lateef Balogun; Khamaruzaman B. Wan Yusof
Publication year: 2020
Pages: pp 190 - 200
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Radar image processing and applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] automatic detection
[IGN descriptor terms] feature extraction
[IGN descriptor terms] hydrocarbon
[IGN descriptor terms] moiré radar image
[IGN descriptor terms] oil spill
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] computer vision
[IGN descriptor terms] area of interest
Abstract (authors): The visual similarity of oil slicks and other elements, known as look-alikes, affects the reliability of synthetic aperture radar (SAR) images for marine oil spill detection. To date, detection and discrimination of oil spills and look-alikes have been limited to traditional machine learning algorithms and semantic segmentation deep learning models of limited accuracy. This study therefore developed a novel deep learning oil spill detection model using the computer vision instance segmentation Mask Region-based Convolutional Neural Network (Mask R-CNN). The model was trained by transfer learning, using a ResNet-101 backbone pre-trained on COCO combined with a Feature Pyramid Network (FPN) architecture for feature extraction, for 30 epochs with a learning rate of 0.001. The model was tested on the withheld test images at the point of lowest training and validation loss. Performance was evaluated using precision, recall, specificity, IoU, F1-measure and overall accuracy. Ship detection and segmentation performed best, with an overall accuracy of 98.3%. The model was also accurate for oil spill and look-alike detection and segmentation, with oil spill detection outperforming look-alike detection (overall accuracies of 96.6% and 91.0%, respectively). The study concluded that the deep learning instance segmentation model outperforms conventional machine learning models and deep learning semantic segmentation models in detection and segmentation.
Record number: A2020-548
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.07.011
Online publication date: 28/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.07.011
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95774
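The evaluation measures named in the abstract (precision, recall, specificity, F1-measure, overall accuracy) are standard functions of a binary confusion matrix. A minimal sketch with hypothetical counts, to make the definitions concrete (the function name and numbers are illustrative, not from the paper):

```python
def detection_metrics(tp, fp, fn, tn):
    """Binary detection metrics from confusion-matrix counts:
    true/false positives (tp, fp) and false/true negatives (fn, tn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}

# Hypothetical counts for an oil-spill vs. look-alike classifier
m = detection_metrics(tp=90, fp=5, fn=5, tn=100)
print(round(m["accuracy"], 2))  # -> 0.95
```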
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020). - pp 190 - 200
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020091 | SL | Journal | Documentation centre | Journals room | Available
081-2020093 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2020092 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Water level prediction from social media images with a multi-task ranking approach / P. Chaudhary in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: Water level prediction from social media images with a multi-task ranking approach
Document type: Article/Communication
Authors: P. Chaudhary; Stefano D'Aronco; João P. Leitão; et al.
Publication year: 2020
Pages: pp 252 - 262
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Optical image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] flood
[IGN descriptor terms] hydrostatic level
[IGN descriptor terms] regression
[IGN descriptor terms] social network
[IGN descriptor terms] hydrological monitoring
[IGN descriptor terms] computer vision
Abstract (authors): Floods are among the most frequent and catastrophic natural disasters and affect millions of people worldwide. It is important to create accurate flood maps to plan (offline) and conduct (real-time) flood mitigation and flood rescue operations. Arguably, images collected from social media can provide useful information for that task, which would otherwise be unavailable. We introduce a computer vision system that estimates water depth from social media images taken during flooding events, in order to build flood maps in (near) real-time. We propose a multi-task (deep) learning approach, where a model is trained using both a regression and a pairwise ranking loss. Our approach is motivated by the observation that a main bottleneck for image-based flood level estimation is training data: it is difficult and requires a lot of effort to annotate uncontrolled images with the correct water depth. We demonstrate how to efficiently learn a predictor from a small set of annotated water levels and a larger set of weaker annotations that only indicate in which of two images the water level is higher, and are much easier to obtain. Moreover, we provide a new dataset, named DeepFlood, with 8145 annotated ground-level images, and show that the proposed multi-task approach can predict the water level from a single, crowd-sourced image with an 11 cm root mean square error.
Record number: A2020-549
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.07.003
Online publication date: 29/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.07.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95776
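The abstract describes combining a regression loss on the few depth-annotated images with a pairwise ranking loss on pairs where only the ordering of water levels is known. A minimal sketch of such a combined objective, in plain Python (the function name, hinge-style ranking term, `weight_rank` and `margin` parameters are illustrative assumptions, not the paper's exact formulation):

```python
def multitask_loss(pred_depths, true_depths, ranked_pairs, preds_all,
                   weight_rank=1.0, margin=0.0):
    """Combined regression + pairwise ranking loss.

    pred_depths / true_depths: predictions and labels for the small
    annotated set. ranked_pairs: (hi, lo) index pairs into preds_all,
    where image hi is known to show a higher water level than image lo.
    """
    # Regression term: mean squared error on the annotated images
    mse = sum((p - t) ** 2
              for p, t in zip(pred_depths, true_depths)) / len(true_depths)
    # Ranking term: hinge penalty when the predicted ordering is wrong
    rank = sum(max(0.0, margin - (preds_all[hi] - preds_all[lo]))
               for hi, lo in ranked_pairs) / len(ranked_pairs)
    return mse + weight_rank * rank

preds = [0.5, 1.2, 0.3]   # predicted water depths (m) for three images
loss = multitask_loss(pred_depths=[0.5], true_depths=[0.6],
                      ranked_pairs=[(1, 0), (0, 2)], preds_all=preds)
print(round(loss, 4))  # -> 0.01 (both pairs correctly ordered, small MSE)
```

The weakly annotated pairs contribute only through the ranking term, which is what lets the larger, cheaper annotation set shape the predictor.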
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020). - pp 252 - 262
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020091 | SL | Journal | Documentation centre | Journals room | Available
081-2020093 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2020092 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Indoor positioning using PnP problem on mobile phone images / Hana Kubickova in ISPRS International journal of geo-information, vol 9 n° 6 (June 2020)
[article]
Title: Indoor positioning using PnP problem on mobile phone images
Document type: Article/Communication
Authors: Hana Kubickova; Karel Jedlička; Radek Fiala; Daniel Beran
Publication year: 2020
Pages: 19 p.
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Optical image processing
[IGN descriptor terms] image matching
[IGN descriptor terms] image database
[IGN descriptor terms] image decomposition
[IGN descriptor terms] feature extraction
[IGN descriptor terms] epipolar geometry
[IGN descriptor terms] GNSS-INS
[IGN descriptor terms] control point
[IGN descriptor terms] indoor positioning
[IGN descriptor terms] content-based image retrieval
[IGN descriptor terms] SIFT (algorithm)
[IGN descriptor terms] smartphone
[IGN descriptor terms] computer vision
Abstract (authors): As people grow accustomed to effortless outdoor navigation, there is a rising demand for similar possibilities indoors as well. Unfortunately, indoor localization, one of the requirements for navigation, remains a problem without a clear solution. In this article, we propose a method for an indoor positioning system using a single image. This is made possible by a small preprocessed database of images with known control points as the only preprocessing needed. Using feature detection with the SIFT (Scale Invariant Feature Transform) algorithm, we search the database for the image most similar to the one taken by the user. Such a pair of images is then used to find the coordinates of the database image by solving the PnP problem. Projection and essential matrices are then determined to localize the user's image, i.e. to determine the position of the user in the indoor environment. The benefits of this approach lie in the single image being the only input from the user and in the lack of requirements for new onsite infrastructure. Our approach thus enables a more straightforward realization for building management.
Record number: A2020-309
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9060368
Online publication date: 02/06/2020
Online: https://doi.org/10.3390/ijgi9060368
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95156
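The PnP (Perspective-n-Point) step recovers camera pose from 2D-3D correspondences by minimizing reprojection error under the pinhole camera model. A minimal sketch of that underlying projection, with an identity pose and hypothetical intrinsics (a real pipeline would use a solver such as OpenCV's solvePnP rather than this toy function):

```python
def project(point3d, R, t, fx, fy, cx, cy):
    """Project a 3D point to pixel coordinates with a pinhole camera:
    Xc = R @ X + t, then u = fx*Xc/Zc + cx, v = fy*Yc/Zc + cy."""
    xc = [sum(R[i][j] * point3d[j] for j in range(3)) + t[i]
          for i in range(3)]
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return u, v

# Identity rotation, zero translation, hypothetical intrinsics:
# a point 2 m straight ahead projects to the principal point.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project((0.0, 0.0, 2.0), I, [0.0, 0.0, 0.0],
              fx=800, fy=800, cx=320, cy=240))  # -> (320.0, 240.0)
```

PnP inverts this relation: given several control points with known 3D coordinates and their pixel locations in the matched database image, it solves for the R and t that make the projections fit.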
in ISPRS International journal of geo-information > vol 9 n° 6 (June 2020). - 19 p.
[article]

Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol 34 n° 5 (May 2020)
[article]
Title: Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks
Document type: Article/Communication
Authors: Mahmoud Saeedimoghaddam; Tomasz F. Stepinski
Publication year: 2020
Pages: pp 947 - 968
General note: bibliography
Languages: English (eng)
Descriptors:
[IGN subject headings] Image processing
[IGN descriptor terms] road intersection
[IGN descriptor terms] historical map
[IGN descriptor terms] digitized map
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] object detection
[IGN descriptor terms] localized data
[IGN descriptor terms] United States
[IGN descriptor terms] road network extraction
[IGN descriptor terms] RGB image
[IGN descriptor terms] automatic digitization
[IGN descriptor terms] cartographic representation
[IGN descriptor terms] geographic information system
[IGN descriptor terms] computer vision
Abstract (authors): Road intersection data have been used across a range of geospatial analyses. However, many datasets dating from before the advent of GIS are only available as historical printed maps. To be analyzed by GIS software, they need to be scanned and transformed into a usable (vector-based) format. Because the number of scanned historical maps is voluminous, automated methods of digitization and transformation are needed. Frequently, these processes are based on computer vision algorithms. The key challenges are (1) the low conversion accuracy for low-quality and visually complex maps, and (2) the selection of optimal parameters. In this paper, we used a region-based deep convolutional neural network framework (RCNN) for object detection, in order to automatically identify road intersections in historical maps of several cities in the United States of America. We found that the RCNN approach is more accurate than traditional computer vision algorithms for double-line cartographic representation of the roads, though its accuracy does not surpass all traditional methods used for single-line symbols. The results suggest that the number of errors in the outputs is sensitive to the complexity and blurriness of the maps, and to the number of distinct red-green-blue (RGB) combinations within them.
Record number: A2020-205
Author affiliation: non-IGN
Theme: GEOMATICS/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2019.1696968
Online publication date: 28/11/2019
Online: https://doi.org/10.1080/13658816.2019.1696968
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94882
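One of the error predictors the abstract names, the number of distinct red-green-blue (RGB) combinations in a map sheet, is simple to compute from pixel data. An illustrative sketch with a hypothetical scanned-map patch (function name and pixel values are assumptions for demonstration):

```python
def count_rgb_combinations(pixels):
    """Count distinct (R, G, B) triples in an iterable of pixel tuples."""
    return len(set(pixels))

# Hypothetical 2x3 patch of a scanned map, flattened to (R, G, B) tuples:
# white paper, black linework, a red road symbol, and a sepia tint.
patch = [(255, 255, 255), (0, 0, 0), (255, 0, 0),
         (255, 255, 255), (0, 0, 0), (200, 180, 150)]
print(count_rgb_combinations(patch))  # -> 4
```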
in International journal of geographical information science IJGIS > vol 34 n° 5 (May 2020). - pp 947 - 968
[article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
079-2020051 | SL | Journal | Documentation centre | Journals room | Available

Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding / Weihua Dong in Cartography and Geographic Information Science, vol 47 n° 3 (May 2020)
Permalink
Adaptive Statistical Superpixel Merging With Edge Penalty for PolSAR Image Segmentation / Deliang Xiang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
Permalink
Street-Frontage-Net: urban image classification using deep convolutional neural networks / Stephen Law in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
Permalink
Computer vision-based framework for extracting tectonic lineaments from optical remote sensing data / Ehsan Farahbakhsh in International Journal of Remote Sensing IJRS, vol 41 n° 5 (01 - 08 February 2020)
Permalink
Advances in Intelligent Data Analysis XVIII : 18th International Symposium on Intelligent Data Analysis, IDA 2020, Konstanz, Germany, April 27-29 2020 / Michael R. Berthold (2020)
Permalink
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Permalink
Context-aware convolutional neural network for object detection in VHR remote sensing imagery / Yiping Gong in IEEE Transactions on geoscience and remote sensing, vol 58 n° 1 (January 2020)
Permalink