Descriptor
Documents available in this category (48)
Tubelets : Unsupervised action proposals from spatiotemporal super-voxels / Mihir Jain in International journal of computer vision, vol 124 n° 3 (15 September 2017)
[article]
Title: Tubelets: Unsupervised action proposals from spatiotemporal super-voxels
Document type: Article/Communication
Authors: Mihir Jain; Jan van Gemert; Hervé Jégou; Patrick Bouthemy; Cees G. M. Snoek
Publication year: 2017
Pages: pp 287 - 311
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] spatiotemporal data
[IGN terms] gesture recognition
[IGN terms] minimum bounding rectangle
[IGN terms] image sequence
[IGN terms] voxel
Abstract: (author) This paper considers the problem of localizing actions in videos as sequences of bounding boxes. The objective is to generate action proposals that are likely to include the action of interest, ideally achieving high recall with few proposals. Our contributions are threefold. First, inspired by selective search for object proposals, we introduce an approach to generate action proposals from spatiotemporal super-voxels in an unsupervised manner; we call them Tubelets. Second, along with the static features from individual frames, our approach advantageously exploits motion. We introduce independent motion evidence as a feature to characterize how the action deviates from the background and explicitly incorporate such motion information in various stages of the proposal generation. Finally, we introduce spatiotemporal refinement of Tubelets, for more precise localization of actions, and pruning to keep the number of Tubelets limited. We demonstrate the suitability of our approach by extensive experiments on action proposal quality and action localization on three public datasets: UCF Sports, MSR-II and UCF101. For action proposal quality, our unsupervised proposals outperform all existing approaches on the three datasets. For action localization, we show top performance on the trimmed videos of UCF Sports and UCF101 as well as the untrimmed videos of MSR-II.
Record number: A2017-812
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-017-1023-9
Online: https://doi.org/10.1007/s11263-017-1023-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89252
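The proposal-quality evaluation mentioned in the abstract (recall of ground-truth action tubes at a spatiotemporal overlap threshold) can be sketched as follows. This is a minimal sketch, assuming tubes are stored as frame-to-box dictionaries and overlap is the per-frame box IoU averaged over the union of frames; the function names are illustrative, not the paper's code.

```python
def box_iou(a, b):
    # a, b: boxes (x1, y1, x2, y2); intersection-over-union
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def tube_iou(tube_a, tube_b):
    # tubes: dicts frame -> box; average per-frame IoU over the
    # union of frames (frames covered by only one tube count as 0)
    frames = set(tube_a) | set(tube_b)
    total = sum(box_iou(tube_a[f], tube_b[f])
                for f in frames if f in tube_a and f in tube_b)
    return total / len(frames)

def recall_at(proposals, ground_truth, thresh=0.5):
    # fraction of ground-truth tubes covered by at least one proposal
    hits = sum(any(tube_iou(p, gt) >= thresh for p in proposals)
               for gt in ground_truth)
    return hits / len(ground_truth)
```

With this metric, "high recall with few proposals" is quantified by plotting `recall_at` against the size of the proposal set.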
in International journal of computer vision > vol 124 n° 3 (15 September 2017) . - pp 287 - 311 [article]

Multiple cues-based active contours for target contour tracking under sophisticated background / Peng Lv in The Visual Computer, vol 33 n° 9 (September 2017)
[article]
Title: Multiple cues-based active contours for target contour tracking under sophisticated background
Document type: Article/Communication
Authors: Peng Lv; Qingjie Zhao; Yanming Chen; Liujun Zhao
Publication year: 2017
Pages: pp 1103 - 1119
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] colour (spectral variable)
[IGN terms] edge detection
[IGN terms] image sequence
[IGN terms] image texture
[IGN terms] tracing
[IGN terms] digital video
Abstract: (author) In this paper, we propose a novel target contour tracking method for sophisticated backgrounds using a multiple cues-based active contour model. To locate the target position, a contour-based mean-shift tracker is designed that combines colour and texture information. To reduce the adverse impact of a sophisticated background and to accelerate the curve motion, we propose a two-layer target appearance model that combines a discriminative pre-learned global layer and a voting-based local layer. The proposed appearance model is able to extract a rough target region from the complex background, which provides important target region information for our active contour model. We subsequently introduce a dynamical shape model to provide prior target shape information for more stable segmentation. To obtain accurate target boundaries, we design a new multiple cues-based active contour model that integrates target edge, discriminative region, and shape information. The experimental results on 30 video sequences demonstrate that the proposed method outperforms other competitive contour tracking methods under various tracking environments.
Record number: A2017-406
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-016-1268-2
Online: https://doi.org/10.1007/s00371-016-1268-2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=86286
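The mean-shift localization step described in the abstract can be sketched under simplifying assumptions: the colour/texture likelihood is taken to be already reduced to a per-pixel weight map (e.g. by histogram back-projection), and a square window stands in for the tracker's kernel. Names and window shape are illustrative, not the authors' implementation.

```python
import numpy as np

def mean_shift_step(weights, center, radius):
    # one mean-shift update: move the window center to the weighted
    # centroid of the weight map inside a square window
    h, w = weights.shape
    cy, cx = center
    y0, y1 = max(0, cy - radius), min(h, cy + radius + 1)
    x0, x1 = max(0, cx - radius), min(w, cx + radius + 1)
    win = weights[y0:y1, x0:x1]
    total = win.sum()
    if total == 0:
        return center
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return (int(round((ys * win).sum() / total)),
            int(round((xs * win).sum() / total)))

def track(weights, center, radius, iters=20):
    # iterate the update until the window center stops moving
    for _ in range(iters):
        new = mean_shift_step(weights, center, radius)
        if new == center:
            break
        center = new
    return center
```

In the paper's pipeline, the position found this way would seed the active contour; here it simply converges onto the mode of the weight map.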
in The Visual Computer > vol 33 n° 9 (September 2017) . - pp 1103 - 1119 [article]

Self-calibration of omnidirectional multi-cameras including synchronization and rolling shutter / Thanh-Tin Nguyen in Computer Vision and image understanding, vol 162 (September 2017)
[article]
Title: Self-calibration of omnidirectional multi-cameras including synchronization and rolling shutter
Document type: Article/Communication
Authors: Thanh-Tin Nguyen; Maxime Lhuillier
Publication year: 2017
Pages: pp 166 - 184
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image and data acquisition
[IGN terms] self-calibration
[IGN terms] bundle adjustment
[IGN terms] video image
[IGN terms] image sequence
[IGN terms] structure-from-motion
[IGN terms] synchronization
Abstract: (author) 360° and spherical cameras have become popular and are convenient for applications such as immersive video. They are often built by fixing together several fisheye cameras pointing in different directions. However, their complete self-calibration is not easy, since consumer fisheyes are rolling shutter cameras that can be unsynchronized. Our approach does not require a calibration pattern. First, the multi-camera model is initialized thanks to assumptions suited to an omnidirectional camera without a privileged direction: the cameras have the same settings and are roughly equiangular. Second, a frame-accurate synchronization is estimated from the instantaneous angular velocities of each camera provided by monocular structure-from-motion. Third, both inter-camera poses and intrinsic parameters are refined using multi-camera structure-from-motion and bundle adjustment. Last, we introduce a bundle adjustment that estimates not only the usual parameters but also a sub-frame-accurate synchronization and the rolling shutter. We experiment using videos taken by consumer cameras mounted on a helmet and moving along trajectories of several hundred meters or kilometers, and compare our results to ground truth.
Record number: A2017-562
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.cviu.2017.08.010
Online: https://doi.org/10.1016/j.cviu.2017.08.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=86643
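The frame-accurate synchronization step (aligning per-camera instantaneous angular-speed profiles obtained from monocular structure-from-motion) can be sketched as a normalized cross-correlation over integer frame shifts. This is a minimal sketch; the function name and sign convention are our own assumptions, not the paper's formulation.

```python
import numpy as np

def frame_offset(speed_a, speed_b, max_shift):
    # frame-accurate synchronization: find the integer shift s that
    # maximizes the normalized correlation of speed_a[t + s] with speed_b[t]
    best_shift, best_score = 0, -np.inf
    a = (speed_a - speed_a.mean()) / (speed_a.std() + 1e-12)
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            x, y = a[s:], speed_b[:len(speed_b) - s]
        else:
            x, y = a[:s], speed_b[-s:]
        n = min(len(x), len(y))
        x, y = x[:n], y[:n]
        yn = (y - y.mean()) / (y.std() + 1e-12)
        score = float(np.dot(x, yn)) / n
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```

The sub-frame-accurate refinement described in the abstract is then a matter of estimating a fractional residual around this integer offset inside the bundle adjustment.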
in Computer Vision and image understanding > vol 162 (September 2017) . - pp 166 - 184 [article]

Motion priors based on goals hierarchies in pedestrian tracking applications / Francisco Madrigal in Machine Vision and Applications, vol 28 n° 3-4 (May 2017)
[article]
Title: Motion priors based on goals hierarchies in pedestrian tracking applications
Document type: Article/Communication
Authors: Francisco Madrigal; Jean-Bernard Hayet
Publication year: 2017
Pages: pp 341 - 359
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] crossroads
[IGN terms] image understanding
[IGN terms] video image
[IGN terms] simulation model
[IGN terms] position
[IGN terms] target tracking
[IGN terms] prediction
[IGN terms] ground truth
[IGN terms] image sequence
Abstract: (author) In this paper, the problem of automated scene understanding by tracking and predicting paths for multiple humans is tackled with a new methodology using data from a single fixed camera monitoring the environment. Our main idea is to build goal-oriented prior motion models that could drive both the tracking and path prediction algorithms, based on a coarse-to-fine modeling of the target goal. To implement this idea, we use a dataset of training video sequences with associated ground-truth trajectories, from which we hierarchically extract a set of key locations. These key locations may correspond to exit/entrance zones in the observed scene, or to crossroads where trajectories often have abrupt changes of direction. A simple heuristic allows us to make piecewise associations of the ground-truth trajectories to the key locations, and we use these data to learn one statistical motion model per key location, based on the variations of the trajectories in the training data and on a regularizing prior over the models' spatial variations. We illustrate how to use these motion priors within an interacting multiple model scheme for target tracking and path prediction, and we finally evaluate this methodology with experiments on common datasets for tracking algorithms comparison.
Record number: A2017-325
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00138-017-0832-8
Online publication date: 15/03/2017
Online: http://doi.org/10.1007/s00138-017-0832-8
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=85384
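The "simple heuristic" for piecewise association of trajectories to key locations can be illustrated with a nearest-key-location labeling followed by run-length grouping into segments. This is a minimal sketch under an assumption of our own (nearest Euclidean key location per point); the segment representation and names are illustrative, not the authors' heuristic.

```python
import math

def associate(trajectory, key_locations):
    # label each trajectory point with the index of its nearest key
    # location, then collapse consecutive equal labels into segments
    # (key_index, first_point, last_point)
    def nearest(p):
        return min(range(len(key_locations)),
                   key=lambda i: math.dist(p, key_locations[i]))
    labels = [nearest(p) for p in trajectory]
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((labels[start], start, t - 1))
            start = t
    return segments
```

Each resulting segment contributes training data to the statistical motion model of its key location.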
in Machine Vision and Applications > vol 28 n° 3-4 (May 2017) . - pp 341 - 359 [article]

Determining tree height and crown diameter from high-resolution UAV imagery / Dimitrios Panagiotidis in International Journal of Remote Sensing IJRS, vol 38 n° 8-10 (April 2017)
[article]
Title: Determining tree height and crown diameter from high-resolution UAV imagery
Document type: Article/Communication
Authors: Dimitrios Panagiotidis; Azadeh Abdollahnejad; Peter Surový; Vasco Chiteculo
Publication year: 2017
Pages: pp 2392 - 2410
General note: Bibliography
Language: English (eng)
Descriptors: [IGN terms] Betula pendula
[IGN terms] tree height
[IGN terms] tree crown
[IGN terms] aerial image
[IGN terms] forest inventory (techniques and methods)
[IGN terms] Larix decidua
[IGN terms] digital canopy surface model
[IGN terms] Picea abies
[IGN terms] Pinus sylvestris
[IGN terms] 3D reconstruction
[IGN terms] image sequence
[IGN terms] structure-from-motion
[IGN subject headings] Forest inventory
Abstract: (author) Advances in computer vision and the parallel development of unmanned aerial vehicles (UAVs) allow for the extensive use of UAVs in forest inventory and in indirect measurements of tree features. We used UAV-sensed high-resolution imagery through photogrammetry and Structure from Motion (SfM) to estimate tree heights and crown diameters. We reconstructed 3D structures from 2D image sequences for two study areas (25 × 25 m). Species composition for Plot 1 included Norway spruce (Picea abies L.) together with European larch (Larix decidua Mill.) and Scots pine (Pinus sylvestris L.), whereas Plot 2 was mainly Norway spruce and Scots pine together with scattered individuals of European larch and Silver birch (Betula pendula Roth.). The workflow used canopy height models (CHMs) for the extraction of height, smoothing of raster images for the determination of local maxima, and Inverse Watershed Segmentation (IWS) for the estimation of crown diameters with the help of a geographical information system (GIS). Finally, we validated the accuracy of the two methods by comparing the UAV results with ground measurements. The results showed higher agreement between field and remote-sensed data for heights than for crown diameters based on RMSE%, which was in the range 11.42–12.62 for height and 14.29–18.56 for crown diameter. Overall, the accuracy of the results was acceptable and showed that the methods were feasible for detecting tree heights and crown diameters.
Record number: A2017-683
Author affiliation: non-IGN
Theme: FOREST
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/01431161.2016.1264028
Online: http://dx.doi.org/10.1080/01431161.2016.1264028
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=87246
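The treetop-detection and accuracy-assessment steps (local maxima on a smoothed CHM, RMSE% against field measurements) can be sketched as follows. This is a minimal sketch, assuming the CHM is a NumPy array in metres and using a simple strict-maximum window test in place of the paper's exact smoothing-plus-IWS pipeline; names and thresholds are illustrative.

```python
import numpy as np

def local_maxima(chm, window=1, min_height=2.0):
    # treetop detection: a cell is a treetop if it is the unique maximum
    # of its (2*window+1)^2 neighbourhood and taller than min_height
    tops = []
    h, w = chm.shape
    for y in range(window, h - window):
        for x in range(window, w - window):
            if chm[y, x] < min_height:
                continue
            patch = chm[y - window:y + window + 1, x - window:x + window + 1]
            if chm[y, x] == patch.max() and (patch == chm[y, x]).sum() == 1:
                tops.append((y, x, float(chm[y, x])))
    return tops

def rmse_percent(estimated, reference):
    # RMSE% for accuracy assessment: RMSE divided by the mean of the
    # reference (field) measurements, expressed as a percentage
    e, r = np.asarray(estimated, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((e - r) ** 2)) / r.mean() * 100.0)
```

On real data, the CHM would first be smoothed (e.g. with a Gaussian filter) so that branch-level noise does not produce spurious maxima.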
in International Journal of Remote Sensing IJRS > vol 38 n° 8-10 (April 2017) . - pp 2392 - 2410 [article]

Other records in this category:
A probabilistic approach to detect mixed periodic patterns from moving object data / Jun Li in Geoinformatica, vol 20 n° 4 (October - December 2016)
Measurement of surface changes in a scaled-down landslide model using high-speed stereo image sequences / Tiantian Feng in Photogrammetric Engineering & Remote Sensing, PERS, vol 82 n° 7 (July 2016)
Deck and cable dynamic testing of a single-span bridge using radar interferometry and videometry measurements / George Piniotis in Journal of applied geodesy, vol 10 n° 1 (March 2016)
Advanced spatio-temporal filtering techniques for photogrammetric image sequence analysis in civil engineering material testing / F. Liebold in ISPRS Journal of photogrammetry and remote sensing, vol 111 (January 2016)
Automatic detection of clouds and shadows using high resolution satellite image time series / Nicolas Champion (2016)
Analysis on the dynamic deformations of the images from digital film sequences / Tomasz Markowski in Geodesy and cartography, vol 64 n° 1 (June 2015)
Detection of abrupt changes in spatial relationships in video sequences / Abdalbassir Abou-Elailah (2015)
vol II-3 W2 - November 2013 - WGIII/3 ISA13 – The ISPRS Workshop on Image Sequence Analysis 2013 [proceedings] (Issue of ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences) / Clément Mallet
A new method for automatic large scale map updating using mobile mapping imagery / Jianliang Ou in Photogrammetric record, vol 28 n° 143 (September - November 2013)