Descriptor
Documents available in this category (1844)
A minimal solution for image-based sphere estimation / Tekla Tóth in International journal of computer vision, vol 131 n° 6 (June 2023)
[article]
Title: A minimal solution for image-based sphere estimation
Document type: Article/Communication
Authors: Tekla Tóth, Author; Levente Hajder, Author
Publication year: 2023
Pages: pp 1428 - 1447
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] Levenberg-Marquardt algorithm
[IGN terms] cone
[IGN terms] ellipse
[IGN terms] Matlab
[IGN terms] image reconstruction
[IGN terms] geometric representation
[IGN terms] sphere
[IGN terms] parametric sphere
Abstract: (author) We propose a novel minimal solver for sphere fitting via its 2D central projection, i.e., a special ellipse. The input of the presented algorithm consists of contour points detected in a camera image. General ellipse fitting problems require five contour points. However, taking advantage of the isotropic spherical target, three points are enough to define the tangent cone parameters of the sphere. This yields the sought ellipse parameters. Similarly, the sphere center can be estimated from the cone if the radius is known. The proposed geometric methods are rapid, numerically stable, and easy to implement. Experimental results on synthetic, photorealistic, and real images showcase the superiority of the proposed solutions over state-of-the-art methods. A real-world LiDAR-camera calibration application justifies the utility of the sphere-based approach, resulting in an error below a few centimeters.
Record number: A2023-189
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-023-01766-1
Online publication date: 02/03/2023
Online: https://doi.org/10.1007/s11263-023-01766-1
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103061
in International journal of computer vision > vol 131 n° 6 (June 2023) . - pp 1428 - 1447 [article]
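The tangent-cone geometry behind this abstract can be made concrete. The sketch below (Python/NumPy, a hypothetical illustration of the classical tangency relation, not the authors' three-point minimal solver) computes the outline conic of a sphere seen by a pinhole camera and recovers the sphere centre from that conic when the radius is known, as the abstract describes.

```python
import numpy as np

def sphere_outline_conic(c, r):
    """Outline conic Q of a sphere in normalised image coordinates.

    A viewing ray with direction d (camera at the origin) is tangent to the
    sphere (centre c, radius r) iff d^T Q d = 0 with
        Q = c c^T - (|c|^2 - r^2) I,
    so the sphere projects onto the ellipse {x = (u, v, 1) : x^T Q x = 0}.
    """
    c = np.asarray(c, dtype=float)
    return np.outer(c, c) - (c @ c - r**2) * np.eye(3)

def center_from_conic(Q, r):
    """Recover the sphere centre from its outline conic, given the radius.

    Q has one isolated eigenvalue r^2 (eigenvector along c) and a repeated
    eigenvalue r^2 - |c|^2, so |c|^2 = r^2 - (repeated eigenvalue).
    Assumes Q at the scale returned by sphere_outline_conic.
    """
    w, V = np.linalg.eigh(Q)
    # the isolated eigenvalue is the one farthest from the other two
    gaps = [abs(w[i] - w[(i + 1) % 3]) + abs(w[i] - w[(i + 2) % 3]) for i in range(3)]
    i = int(np.argmax(gaps))
    lam_rep = np.mean([w[j] for j in range(3) if j != i])
    c = np.sqrt(r**2 - lam_rep) * V[:, i]
    return c if c[2] > 0 else -c  # the sphere lies in front of the camera

if __name__ == "__main__":
    c_true, r = np.array([0.3, -0.2, 4.0]), 0.5
    Q = sphere_outline_conic(c_true, r)
    print(center_from_conic(Q, r))  # approx. [0.3, -0.2, 4.0]
```

In practice the conic would be estimated from detected contour points rather than built analytically; the eigen-structure argument above is only the geometric backbone that the paper's minimal solver exploits.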
Deblurring low-light images with events / Chu Zhou in International journal of computer vision, vol 131 n° 5 (May 2023)
[article]
Title: Deblurring low-light images with events
Document type: Article/Communication
Authors: Chu Zhou, Author; Minggui Teng, Author; Jin Han, Author; et al.
Publication year: 2023
Pages: pp 1284 - 1298
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] event camera
[IGN terms] image correction
[IGN terms] noise filtering
[IGN terms] blur
[IGN terms] low-resolution image
[IGN terms] RGB image
Abstract: (author) Modern image-based deblurring methods usually show degraded performance in low-light conditions, since such images are dominated by poorly visible dark regions with a few saturated bright regions, which limits the amount of effective features that can be extracted for deblurring. In contrast, event cameras can trigger events with a very high dynamic range and low latency; they hardly suffer from saturation and naturally encode dense temporal information about motion. However, in low-light conditions existing event-based deblurring methods become less robust, since the events triggered in dark regions are often severely contaminated by noise, leading to inaccurate reconstruction of the corresponding intensity values. Moreover, since they directly adopt the event-based double integral model to perform pixel-wise reconstruction, they can only handle low-resolution grayscale active pixel sensor images provided by the DAVIS camera, which cannot meet the requirements of daily photography. In this paper, to apply events to deblurring low-light images robustly, we propose a unified two-stage framework along with a motion-aware neural network tailored to it, reconstructing the sharp image under the guidance of high-fidelity motion clues extracted from events. In addition, we build an RGB-DAVIS hybrid camera system to demonstrate that our method can deblur high-resolution RGB images thanks to the natural advantages of the two-stage framework. Experimental results show our method achieves state-of-the-art performance on both synthetic and real-world images.
Record number: A2023-210
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-023-01754-5
Online publication date: 06/02/2023
Online: https://doi.org/10.1007/s11263-023-01754-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103062
in International journal of computer vision > vol 131 n° 5 (May 2023) . - pp 1284 - 1298 [article]
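The event-based double integral (EDI) model that the abstract contrasts with relates a blurred frame B, a sharp latent frame L(t_ref) and the accumulated event polarities E(t) by B = L(t_ref) * (1/T) * integral of exp(c * E(t)) over the exposure, where c is the sensor's contrast threshold. The sketch below (Python/NumPy) is a hypothetical per-pixel illustration of that baseline model, not the two-stage network proposed in the article; the event tuple format and the threshold value are assumptions.

```python
import numpy as np

def edi_deblur(blurred, events, t_exposure, t_ref, c=0.2, n_samples=64):
    """Event-based Double Integral (EDI) reconstruction, per-pixel sketch.

    blurred    : (H, W) blurred frame, B = (1/T) * integral of L(t) dt
    events     : iterable of (t, row, col, polarity in {-1, +1})
    t_exposure : (t_start, t_end) of the exposure window
    t_ref      : time at which the sharp frame L(t_ref) is wanted
    c          : assumed event contrast threshold
    """
    H, W = blurred.shape
    t0, t1 = t_exposure
    ts = np.linspace(t0, t1, n_samples)

    exp_integral = np.zeros((H, W))
    for t in ts:
        # E(t): signed event count between t_ref and t, per pixel
        E = np.zeros((H, W))
        for (te, row, col, p) in events:
            if t_ref <= te < t:
                E[row, col] += p
            elif t <= te < t_ref:
                E[row, col] -= p
        exp_integral += np.exp(c * E)
    exp_integral /= n_samples  # (1/T) * integral of exp(c * E(t)) dt

    # B = L(t_ref) * exp_integral  =>  L(t_ref) = B / exp_integral
    return blurred / np.maximum(exp_integral, 1e-6)
```

As the abstract notes, this direct inversion is fragile when dark-region events are noisy, which is precisely the failure mode the proposed framework is designed to avoid.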
Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images / Ziyao Xing in Sustainable Cities and Society, vol 92 (May 2023)
[article]
Title: Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images
Document type: Article/Communication
Authors: Ziyao Xing, Author; Shuai Yang, Author; Xuli Zan, Author; et al.
Publication year: 2023
Pages: n° 104467
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] attention (machine learning)
[IGN terms] building
[IGN terms] China
[IGN terms] risk management
[IGN terms] Streetview image
[IGN terms] flood
[IGN terms] urban environment
[IGN terms] urban planning
[IGN terms] Quickbird
[IGN terms] semantic segmentation
[IGN terms] vulnerability
Abstract: (author) Urban flood risk management requires an extensive investigation of the vulnerability characteristics of buildings. Large-scale field surveys usually cost a lot of time and money, while satellite remote sensing and street view images can provide information on the tops and facades of buildings, respectively. This paper therefore develops a building vulnerability assessment framework using remote sensing and street view features. Specifically, a UNet-based semantic segmentation model, FSA-UNet (Fusion-Self-Attention-UNet), is proposed to integrate remote sensing and street view features so that the vulnerability information contained in the images is fully exploited. A building vulnerability index is then generated to describe the spatial distribution of urban building vulnerability. The experiment shows that the mIoU of the proposed model reaches 82% for building vulnerability classification in Hefei, China, which is more accurate than traditional semantic segmentation models. The results indicate that integrating street view and remote sensing image features improves building vulnerability assessment, and that the proposed model better captures the correlation features of multi-angle images through the self-attention mechanism while combining hierarchical features and edge information to improve classification. This study can support disaster management and urban planning.
Record number: A2023-152
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.scs.2023.104467
Online publication date: 23/02/2023
Online: https://doi.org/10.1016/j.scs.2023.104467
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102826
in Sustainable Cities and Society > vol 92 (May 2023) . - n° 104467 [article]
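The fusion idea described here, letting roof-top features from satellite imagery and facade features from street view imagery attend to each other, can be sketched as a small self-attention block over the concatenated feature tokens. The PyTorch snippet below is a hypothetical illustration of that pattern (module name, channel sizes and token layout are made up); it is not the published FSA-UNet architecture.

```python
import torch
import torch.nn as nn

class FusionSelfAttention(nn.Module):
    """Toy fusion block: joint self-attention over remote-sensing and
    street-view feature maps, returning an enriched remote-sensing map."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, f_rs: torch.Tensor, f_sv: torch.Tensor) -> torch.Tensor:
        # f_rs, f_sv: (B, C, H, W) features from the two image sources
        b, c, h, w = f_rs.shape
        tokens = torch.cat([f_rs.flatten(2), f_sv.flatten(2)], dim=2)  # (B, C, 2HW)
        tokens = tokens.transpose(1, 2)                                # (B, 2HW, C)
        fused, _ = self.attn(tokens, tokens, tokens)                   # joint self-attention
        fused = self.norm(fused + tokens)
        # keep the remote-sensing positions, now informed by street-view context
        return fused[:, : h * w, :].transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    block = FusionSelfAttention(channels=64)
    rs = torch.randn(1, 64, 16, 16)   # e.g. satellite-derived features
    sv = torch.randn(1, 64, 16, 16)   # street-view-derived features
    print(block(rs, sv).shape)        # torch.Size([1, 64, 16, 16])
```

The design point the abstract emphasises is that attention lets each roof-top location weigh facade evidence from the street-level view, rather than simply concatenating the two feature stacks.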
Global-aware siamese network for change detection on remote sensing images / Ruiqian Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 199 (May 2023)
[article]
Title: Global-aware siamese network for change detection on remote sensing images
Document type: Article/Communication
Authors: Ruiqian Zhang, Author; Hanchao Zhang, Author; Xiaogang Ning, Author; et al.
Publication year: 2023
Pages: pp 61 - 72
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] sensitivity analysis
[IGN terms] attention (machine learning)
[IGN terms] change detection
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] optimization (mathematics)
[IGN terms] siamese neural network
Abstract: (author) Change detection (CD) in remote sensing images is one of the most important technical options for identifying changes between observations in an efficient manner. CD has a wide range of applications, such as land use investigation, urban planning, environmental monitoring and disaster mapping. However, the frequently occurring class imbalance problem brings huge challenges to change detection applications. To address this issue, we develop a novel global-aware siamese network (GAS-Net), aiming to generate global-aware features for efficient change detection by incorporating the relationships between scenes and foregrounds. The proposed GAS-Net consists of a global-attention module (GAM) and a foreground-awareness module (FAM), which together learn contextual relationships and enhance symbiotic relation learning between scene and foreground. The experimental results demonstrate the effectiveness and robustness of the proposed GAS-Net, achieving up to 91.21% and 95.84% F1 score on two widely used public datasets, i.e., the Levir-CD and Lebedev-CD datasets. The source code is available at https://github.com/xiaoxiangAQ/GAS-Net.
Record number: 2023-204
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2023.04.001
Online publication date: 05/04/2023
Online: https://doi.org/10.1016/j.isprsjprs.2023.04.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103106
in ISPRS Journal of photogrammetry and remote sensing > vol 199 (May 2023) . - pp 61 - 72 [article]
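A siamese change-detection network of the kind described processes both acquisition dates with a shared-weight encoder and compares the resulting features; GAS-Net then refines that comparison with its global-attention and foreground-awareness modules. The PyTorch snippet below is a hypothetical minimal siamese baseline showing only the weight-sharing and feature-differencing pattern, without GAM or FAM.

```python
import torch
import torch.nn as nn

class TinySiameseCD(nn.Module):
    """Minimal siamese change-detection baseline: shared encoder,
    absolute feature difference, 1x1 convolution to a change map."""

    def __init__(self, in_ch: int = 3, feat: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(          # weights shared across both dates
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(feat, 1, kernel_size=1)

    def forward(self, img_t1: torch.Tensor, img_t2: torch.Tensor) -> torch.Tensor:
        f1 = self.encoder(img_t1)              # same encoder applied to date 1 ...
        f2 = self.encoder(img_t2)              # ... and to date 2 (siamese sharing)
        change = torch.abs(f1 - f2)            # feature differencing
        return torch.sigmoid(self.head(change))  # per-pixel change probability

if __name__ == "__main__":
    model = TinySiameseCD()
    t1 = torch.randn(1, 3, 256, 256)           # e.g. a bitemporal patch at date 1
    t2 = torch.randn(1, 3, 256, 256)           # the co-registered patch at date 2
    print(model(t1, t2).shape)                 # torch.Size([1, 1, 256, 256])
```

The class imbalance the abstract highlights (changed pixels are rare) is usually handled on top of such a baseline with a weighted or focal loss and with the scene-foreground context modules that GAS-Net introduces.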
Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery / Maryam Hosseini in Computers, Environment and Urban Systems, vol 101 (April 2023)
[article]
Title: Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery
Document type: Article/Communication
Authors: Maryam Hosseini, Author; Andres Sevtsuk, Author; Fabio Miranda, Author; et al.
Publication year: 2023
Pages: n° 101950
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object detection
[IGN terms] United States
[IGN terms] aerial image
[IGN terms] pedestrian navigation
[IGN terms] semantic segmentation
[IGN terms] geographic information system
[IGN terms] sidewalk
[IGN terms] computer vision
Abstract: (author) While cities around the world are increasingly promoting streets and public spaces that prioritize pedestrians over vehicles, significant data gaps have made pedestrian mapping, analysis, and modeling challenging to carry out. Most cities, even in industrialized economies, still lack information about the location and connectivity of their sidewalks, making it difficult to conduct research on pedestrian infrastructure and holding the technology industry back from developing accurate, location-based apps for pedestrians, wheelchair users, street vendors, and other sidewalk users. To address this gap, we have designed and implemented an end-to-end open-source tool, Tile2Net, for extracting sidewalk, crosswalk, and footpath polygons from orthorectified aerial imagery using semantic segmentation. The segmentation model, trained on aerial imagery from Cambridge, MA, Washington DC, and New York City, offers the first open-source scene classification model for pedestrian infrastructure from sub-meter resolution aerial tiles, which can be used to generate planimetric sidewalk data in North American cities. Tile2Net also generates pedestrian networks from the resulting polygons, which can be used to prepare datasets for pedestrian routing applications. The work offers a low-cost and scalable data collection methodology for systematically generating sidewalk network datasets where orthorectified aerial imagery is available, contributing to overdue efforts to equalize data opportunities for pedestrians, particularly in cities that lack the resources necessary to collect such data using more conventional methods.
Record number: A2023-187
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.compenvurbsys.2023.101950
Online publication date: 22/02/2023
Online: https://doi.org/10.1016/j.compenvurbsys.2023.101950
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102961
in Computers, Environment and Urban Systems > vol 101 (April 2023) . - n° 101950 [article]
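The pipeline summarised in the abstract, segmentation masks to sidewalk polygons to a routable pedestrian network, can be illustrated with standard geospatial tooling. The sketch below uses rasterio, shapely and networkx; it is a simplified stand-in, not the Tile2Net implementation, and the tile geotransform, snapping tolerance and projected CRS are assumptions.

```python
import numpy as np
import networkx as nx
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

def mask_to_polygons(mask, transform):
    """Vectorise a binary sidewalk mask (1 = sidewalk) into shapely polygons."""
    return [
        shape(geom)
        for geom, value in features.shapes(mask.astype(np.uint8), transform=transform)
        if value == 1
    ]

def polygons_to_network(polys, snap_tolerance=1.0):
    """Connect sidewalk polygons that come within `snap_tolerance` map units
    of each other; nodes are polygon centroids, edges carry centroid distance."""
    g = nx.Graph()
    for i, p in enumerate(polys):
        g.add_node(i, x=p.centroid.x, y=p.centroid.y)
    for i, p in enumerate(polys):
        for j in range(i + 1, len(polys)):
            if p.distance(polys[j]) <= snap_tolerance:
                g.add_edge(i, j, length=p.centroid.distance(polys[j].centroid))
    return g

if __name__ == "__main__":
    # toy 10 x 10 tile at 0.3 m/pixel: two sidewalk stubs separated by a small gap
    mask = np.zeros((10, 10), dtype=np.uint8)
    mask[4, :4] = 1
    mask[6, 5:] = 1
    transform = from_origin(300000.0, 4500000.0, 0.3, 0.3)  # assumed projected CRS
    polys = mask_to_polygons(mask, transform)
    net = polygons_to_network(polys)
    print(len(polys), "polygons,", net.number_of_edges(), "network edge(s)")
```

A production pipeline such as the one described would additionally derive centerlines, snap them to crossings, and export the network in a routing-friendly format; the point here is only the mask-to-polygons-to-graph sequence.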
Towards global scale segmentation with OpenStreetMap and remote sensing / Munazza Usmani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 8 (April 2023) Permalink
Domain adaptation in segmenting historical maps: A weakly supervised approach through spatial co-occurrence / Sidi Wu in ISPRS Journal of photogrammetry and remote sensing, vol 197 (March 2023) Permalink
Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times / Peng Wang in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023) Permalink
A unified attention paradigm for hyperspectral image classification / Qian Liu in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023) Permalink
Comparative analysis of different CNN models for building segmentation from satellite and UAV images / Batuhan Sariturk in Photogrammetric Engineering & Remote Sensing, PERS, vol 89 n° 2 (February 2023) Permalink
Generating Sentinel-2 all-band 10-m data by sharpening 20/60-m bands: A hierarchical fusion network / Jingan Wu in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023) Permalink
A CNN based approach for the point-light photometric stereo problem / Fotios Logothetis in International journal of computer vision, vol 131 n° 1 (January 2023) Permalink
Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023) Permalink
Permalink
Forest road extraction from orthophoto images by convolutional neural networks / Erhan Çalişkan in Geocarto international, vol 38 n° unknown ([01/01/2023]) Permalink