Descriptor
IGN terms > natural sciences > physics > image processing > photogrammetry > digital photogrammetry > structure-from-motion
Documents available in this category (67)
Refractive two-view reconstruction for underwater 3D vision / François Chadebecq in International journal of computer vision, vol 128 n° 5 (May 2020)
[article]
Title: Refractive two-view reconstruction for underwater 3D vision
Document type: Article/Communication
Authors: François Chadebecq; Francisco Vasconcelos; René Lacher; et al.
Publication year: 2020
Pages: pp 1101 - 1117
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Acquisition d'image(s) et de donnée(s)
[Termes IGN] correction d'image
[Termes IGN] estimation de pose
[Termes IGN] étalonnage d'instrument
[Termes IGN] image sous-marine
[Termes IGN] reconstruction 3D
[Termes IGN] réfraction de l'eau
[Termes IGN] structure-from-motion
[Termes IGN] temps de pose
[Termes IGN] vision stéréoscopique
Abstract: (author) Recovering 3D geometry from cameras in underwater applications involves the Refractive Structure-from-Motion problem, where the non-linear distortion of light induced by a change of medium density invalidates the single viewpoint assumption. The pinhole-plus-distortion camera projection model suffers from a systematic geometric bias, since refractive distortion depends on object distance. This leads to inaccurate camera pose and 3D shape estimation. To account for refraction, it is possible to use the axial camera model or to explicitly consider one or multiple parallel refractive interfaces whose orientations and positions with respect to the camera can be calibrated. Although it has been demonstrated that the refractive camera model is well-suited for underwater imaging, Refractive Structure-from-Motion remains particularly difficult to use in practice when considering the seldom studied case of a camera with a flat refractive interface. Our method applies to the case of underwater imaging systems whose entrance lens is in direct contact with the external medium. By adopting the refractive camera model, we provide a succinct derivation and expression for the refractive fundamental matrix and use this as the basis for a novel two-view reconstruction method for underwater imaging. For validation we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate its practical application within laboratory settings and for medical applications in fluid-immersed endoscopy. We demonstrate that our approach outperforms the classic two-view Structure-from-Motion method relying on the pinhole-plus-distortion camera model.
Record number: A2020-508
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-019-01218-9
Online publication date: 18/11/2019
Online: https://doi.org/10.1007/s11263-019-01218-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96972
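The abstract above turns on a geometric fact: at a flat refractive interface, Snell's law bends each ray by an amount that depends on its incidence angle, so the camera no longer has a single effective viewpoint. A minimal sketch of that effect (illustrative only, not the paper's refractive fundamental matrix; the 30-degree ray and the refractive indices 1.0/1.33 are assumed example values):

```python
import math

def refract(d, normal, n1, n2):
    """Refract a unit direction d across a flat interface with unit
    normal pointing back toward the incoming medium (vector Snell's law).
    Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -(d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2])
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    return tuple(r * d[i] + (r * cos_i - cos_t) * normal[i] for i in range(3))

# A ray leaving the camera at 30 degrees off the flat-port normal bends
# toward the normal on entering water (n ~ 1.33); since the bend depends
# on the angle, rays no longer meet at one center of projection.
d = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
t = refract(d, (0.0, 0.0, -1.0), 1.0, 1.33)
angle_in_water = math.degrees(math.asin(t[0]))  # ~22.1 degrees, < 30
```

The depth dependence of this bending is exactly why the pinhole-plus-distortion model in the abstract incurs a systematic bias.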
in International journal of computer vision > vol 128 n° 5 (May 2020) . - pp 1101 - 1117 [article]

Efficient match pair selection for oblique UAV images based on adaptive vocabulary tree / San Jiang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
[article]
Title: Efficient match pair selection for oblique UAV images based on adaptive vocabulary tree
Document type: Article/Communication
Authors: San Jiang; Wanshou Jiang
Publication year: 2020
Pages: pp 61 - 75
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Photogrammétrie
[Termes IGN] analyse des correspondances
[Termes IGN] appariement d'images
[Termes IGN] image aérienne oblique
[Termes IGN] image captée par drone
[Termes IGN] photogrammétrie aérienne
[Termes IGN] seuillage d'image
[Termes IGN] structure-from-motion
Abstract: (author) The primary contribution of this paper is an efficient match pair selection method for oblique unmanned aerial vehicle (UAV) images. First, high overlap degrees and spatial resolutions cause image and feature redundancies in vocabulary tree building and image indexing. To cope with this issue, an image selection strategy and a feature selection strategy are designed to decrease the total number of features. Second, by analysing the distribution of the similarity scores, an adaptive threshold selection method is implemented to determine the number of candidate match pairs for each query image, avoiding the disadvantages of the fixed-number and fixed-proportion methods. Then, an algorithm, termed AVT-Expansion, is proposed for match pair selection and simplification, in which the initial match pairs are first selected by using the adaptive vocabulary tree (AVT). To simplify the initial match pairs, the AVT method is integrated with our previous MST-Expansion algorithm, which is used to extract a match graph by analysing the image topological connection network. Finally, the proposed method is verified using three UAV datasets captured with different oblique multi-camera systems. Experimental results demonstrate that the efficiency of the vocabulary tree building is improved, with speed-up ratios ranging from 14 to 16, and that high image retrieval precision values are obtained for the three datasets. For match pair selection of oblique UAV images, the proposed method is an efficient solution.
Record number: A2020-062
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.013
Online publication date: 15/01/2020
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.013
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94578
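The adaptive threshold idea in the abstract above, choosing a per-query number of candidate match pairs from the shape of the similarity-score distribution rather than a fixed number or proportion, can be sketched as follows. The mean-plus-k-sigma cut is a hypothetical stand-in for the authors' actual rule, and the scores are invented:

```python
def adaptive_candidates(scores, k=1.0):
    """Keep images whose similarity score to the query is unusually high
    relative to the whole score distribution (mean + k*std cut). The
    number of survivors thus adapts to each query, unlike a fixed-N or
    fixed-proportion rule."""
    n = len(scores)
    mean = sum(s for _, s in scores) / n
    std = (sum((s - mean) ** 2 for _, s in scores) / n) ** 0.5
    thresh = mean + k * std
    return [img for img, s in scores if s > thresh]

# Scores from a vocabulary-tree query: two true neighbours stand out
# clearly from the background of unrelated images.
scores = [("img1", 0.91), ("img2", 0.88), ("img3", 0.12),
          ("img4", 0.10), ("img5", 0.09), ("img6", 0.11)]
pairs = adaptive_candidates(scores)  # -> ["img1", "img2"]
```

A fixed-N rule (say, always take 4) would have dragged in two unrelated images here; the distribution-based cut does not.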
in ISPRS Journal of photogrammetry and remote sensing > vol 161 (March 2020) . - pp 61 - 75 [article]

Copies (3)
Barcode | Call number | Format | Location | Section | Availability
081-2020031 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2020033 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan
081-2020032 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan

Integration of remote sensing and GIS to extract plantation rows from a drone-based image point cloud digital surface model / Nadeem Fareed in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020)
[article]
Title: Integration of remote sensing and GIS to extract plantation rows from a drone-based image point cloud digital surface model
Document type: Article/Communication
Authors: Nadeem Fareed; Khushbakht Rehman
Publication year: 2020
Pages: 26 p.
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] agriculture de précision
[Termes IGN] données GNSS
[Termes IGN] données lidar
[Termes IGN] extraction automatique
[Termes IGN] extraction de la végétation
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à très haute résolution
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] modèle dynamique
[Termes IGN] modèle numérique de surface
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
[Termes IGN] système d'information géographique
[Termes IGN] télédétection
Abstract: (author) Automated feature extraction from drone-based image point clouds (DIPC) is of paramount importance in precision agriculture (PA). PA relies on mechanized row seedlings to attain maximum yield and best management practices. Therefore, automated plantation-row extraction is essential in crop harvesting, pest management, and plant growth-rate predictions. Most existing research relies on red, green, and blue (RGB) image-based solutions to extract plantation rows in test sites with minimal background noise; DIPC-based DSM row extraction has rarely been tested. In this work, an automated method is designed to extract plantation rows from a DIPC-based DSM. The chosen plantation compartments show three different levels of background noise in the UAV images; the methodology was therefore tested under different levels of background noise. The extraction results were quantified in terms of completeness, correctness, quality, and F1-score. The case study revealed the potential of the DIPC-based solution to extract plantation rows, with F1-scores of 0.94 for a compartment with minimal background noise, 0.91 for a highly noisy compartment, and 0.85 for a compartment where the DIPC was compromised. The evaluation suggests that DSM-based solutions are robust compared with RGB image-based solutions for extracting plantation rows. Additionally, DSM-based solutions can be further extended to assess plantation-row surface deformation caused by humans and machines, redefining the state of the art.
Record number: A2020-260
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9030151
Online publication date: 06/03/2020
Online: https://doi.org/10.3390/ijgi9030151
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95020
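In a DSM, plantation rows appear as ridges standing above the ground surface, so a height profile taken across the rows shows regularly spaced local maxima. A toy sketch of that observation (not the paper's pipeline; the profile values and the 0.3 m threshold are assumed):

```python
def row_peaks(profile, min_height):
    """Return indices of local maxima above min_height in a 1D height
    profile taken perpendicular to the planting direction. Plantation
    rows show up as regularly spaced ridges in a DSM cross-section."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] > min_height
                and profile[i] >= profile[i - 1]
                and profile[i] > profile[i + 1]):
            peaks.append(i)
    return peaks

# Toy cross-section: ground near 0 m, three crop ridges around 0.4-0.6 m.
profile = [0.0, 0.1, 0.5, 0.1, 0.0, 0.1, 0.6, 0.1, 0.0, 0.1, 0.4, 0.1, 0.0]
rows = row_peaks(profile, min_height=0.3)  # -> [2, 6, 10]
```

Because the signal is height rather than colour, this kind of detection degrades more gracefully under background noise than RGB thresholding, which is the abstract's central claim.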
in ISPRS International journal of geo-information > vol 9 n° 3 (March 2020) . - 26 p. [article]

Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering / Shangpeng Sun in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering
Document type: Article/Communication
Authors: Shangpeng Sun; Changying Li; Peng Wah Chee; et al.
Publication year: 2020
Pages: pp 195 - 207
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] cartographie 3D
[Termes IGN] classification basée sur les régions
[Termes IGN] distribution spatiale
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de la végétation
[Termes IGN] gestion de production
[Termes IGN] Gossypium (genre)
[Termes IGN] phénologie
[Termes IGN] rendement agricole
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
[Termes IGN] surveillance de la végétation
Abstract: (author) Three-dimensional high-throughput plant phenotyping techniques provide an opportunity to measure plant organ-level traits, which can be highly useful to plant breeders. The number and locations of cotton bolls, which are the fruit of cotton plants and an important component of fiber yield, are arguably among the most important phenotypic traits but are complex to quantify manually. Hence, there is a need for effective and efficient cotton boll phenotyping solutions to support breeding research and monitor the crop yield, leading to better production management systems. We developed a novel methodology for 3D cotton boll mapping within a plot in situ. Point clouds were reconstructed from multi-view images using the structure-from-motion algorithm. The method used a region-based classification algorithm that successfully accounted for noise due to sunlight. The developed density-based clustering method could estimate boll counts even when bolls were in direct contact with one another. By applying the method to point clouds from 30 plots of cotton plants, boll counts, boll volume, and position data were derived. The average accuracy of boll counting was up to 90%, and the R2 values between fiber yield and boll number, and between fiber yield and boll volume, were 0.87 and 0.66, respectively. The 3D boll spatial distribution could also be analyzed using this method. This method, which was low-cost and provided improved site-specific data on cotton bolls, can also be applied to other plant/fruit mapping analyses after some modification.
Record number: A2020-048
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.011
Online publication date: 25/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.011
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94561
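The boll-counting step described above relies on density-based clustering of segmented boll points. A minimal single-linkage stand-in for that idea (not the authors' algorithm; the point coordinates and the eps radius are assumed example values):

```python
def cluster_points(points, eps):
    """Group 3D points so that any two points closer than eps end up in
    the same cluster (single-linkage flood fill). Each resulting cluster
    stands in for one detected boll; a density-based method like the
    paper's refines this to split touching bolls."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        stack, cluster = [seed], [seed]
        while stack:
            i = stack.pop()
            near = [j for j in unvisited
                    if sum((points[i][k] - points[j][k]) ** 2
                           for k in range(3)) <= eps * eps]
            for j in near:
                unvisited.discard(j)
            stack.extend(near)
            cluster.extend(near)
        clusters.append(cluster)
    return clusters

# Two bolls roughly 10 cm apart, each a tight blob of points (metres).
pts = [(0.00, 0, 0), (0.01, 0, 0), (0.02, 0, 0),
       (0.10, 0, 0), (0.11, 0, 0)]
boll_count = len(cluster_points(pts, eps=0.03))  # -> 2
```

Once clusters are separated, per-cluster point counts and bounding volumes yield the boll number and volume statistics the abstract correlates with fiber yield.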
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020) . - pp 195 - 207 [article]

Copies (3)
Barcode | Call number | Format | Location | Section | Availability
081-2020021 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2020023 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan
081-2020022 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan

Automatic scale estimation of structure from motion based 3D models using laser scalers in underwater scenarios / Klemen Istenič in ISPRS Journal of photogrammetry and remote sensing, vol 159 (January 2020)
[article]
Title: Automatic scale estimation of structure from motion based 3D models using laser scalers in underwater scenarios
Document type: Article/Communication
Authors: Klemen Istenič; Nuno Gracias; Aurélien Arnaubec; et al.
Publication year: 2020
Pages: pp 13 - 25
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] estimation de pose
[Termes IGN] étalonnage
[Termes IGN] faisceau laser
[Termes IGN] image à haute résolution
[Termes IGN] image sous-marine
[Termes IGN] photogrammétrie sous-marine
[Termes IGN] Ransac (algorithme)
[Termes IGN] reconstruction 3D
[Termes IGN] structure-from-motion
Abstract: (author) Improvements in structure-from-motion techniques are enabling many scientific fields to benefit from the routine creation of detailed 3D models. However, for a large number of applications, only a single camera is available for the image acquisition, due to cost or space constraints in the survey platforms. Monocular structure-from-motion raises the issue of properly estimating the scale of the 3D models, in order to later use those models for metrology. The scale can be determined from the presence of visible objects of known dimensions, or from information on the magnitude of the camera motion provided by other sensors, such as GPS. This paper addresses the problem of accurately scaling 3D models created from monocular cameras in GPS-denied environments, such as in underwater applications. Motivated by the common availability of underwater laser scalers, we present two novel approaches which are suitable for different laser scaler configurations. A fully unconstrained method enables the use of arbitrary laser setups, while a partially constrained method reduces the need for calibration by only assuming parallelism of the laser beams and equidistance from the camera. The proposed methods have several advantages with respect to existing methods. By using the known geometry of the scene represented by the 3D model, along with some parameters of the laser scaler geometry, the need for laser alignment with the optical axis of the camera is eliminated. Furthermore, the extremely error-prone manual identification of image points on the 3D model, currently required in image-scaling methods, is dispensed with. The performance of the methods and their applicability was evaluated both on data generated from a realistic 3D model and on data collected during an oceanographic cruise in 2017.
Three separate laser configurations were tested, encompassing nearly all possible laser setups, to evaluate the effects of terrain roughness, noise, camera perspective angle, and camera-scene distance on the final scale estimates. In the real scenario, the computation of 6 independent model scale estimates using our fully unconstrained approach produced values with a standard deviation of 0.3%. By comparing the values to the only other method currently usable for this dataset, we showed that the consistency of scales obtained for individual lasers is much higher for our approach (0.6% compared with 4%).
Record number: A2020-010
Author affiliation: non-IGN
Theme: IMAGERIE/POSITIONNEMENT
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.10.007
Online publication date: 14/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.10.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94397
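The core of laser-scaler scaling is simple: a monocular SfM model is correct only up to an unknown scale, and the known separation of parallel laser beams pins that scale down. A simplified sketch of the idea (assuming the two spots lie on a surface patch roughly perpendicular to the beams; the coordinates and the 0.10 m separation are invented values, and this is not the paper's fully unconstrained method, which handles arbitrary setups):

```python
import math

def model_scale(spot_a, spot_b, beam_separation_m):
    """Scale factor that maps an up-to-scale SfM model to metres, given
    the 3D positions of two laser spots on the model surface and the
    known separation of the parallel laser beams."""
    d = math.dist(spot_a, spot_b)  # spot distance in model units
    return beam_separation_m / d

# Laser spots located on the unscaled model; beams are 0.10 m apart.
s = model_scale((1.20, 0.40, 3.10), (1.70, 0.40, 3.10), 0.10)
# Multiplying any model length by s expresses it in metres.
```

The paper's contribution is precisely to remove the simplifying assumptions made here, using the model geometry so that neither laser-camera alignment nor manual image-point identification is required.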
in ISPRS Journal of photogrammetry and remote sensing > vol 159 (January 2020) . - pp 13 - 25 [article]

Copies (3)
Barcode | Call number | Format | Location | Section | Availability
081-2020011 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2020013 | DEP-RECP | Journal | LASTIG | Dépôt en unité | Not for loan
081-2020012 | DEP-RECF | Journal | Nancy | Dépôt en unité | Not for loan

Other records in this category:
Fusion d'approches photométriques et géométriques pour la création de modèles 3D / Jean Mélou (2020)
New quantitative indices from 3D modeling by photogrammetry to monitor coral reef environments / Isabel Urbina-Barreto (2020)
Streambank topography: an accuracy assessment of UAV-based and traditional 3D reconstructions / Benjamin U. Meinen in International Journal of Remote Sensing IJRS, vol 41 n° 1 (01 - 08 January 2020)
Application of photogrammetry to generate quantitative geobody data in ephemeral fluvial systems / Charlotte L. Priddy in Photogrammetric record, vol 34 n° 168 (December 2019)
Innovative techniques of photogrammetry for 3D modeling / Vicenzo Barrile in Applied geomatics, vol 11 n° 4 (December 2019)
A low-cost open-source workflow to generate georeferenced 3D SfM photogrammetric models of rocky outcrops / Laurent Froideval in Photogrammetric record, vol 34 n° 168 (December 2019)
Pré-localisation des données pour la modélisation 3D de tunnels : développements et évaluations / Christophe Heinkelé in Revue Française de Photogrammétrie et de Télédétection, n° 221 (November 2019)
Enhanced 3D mapping with an RGB-D sensor via integration of depth measurements and image sequences / Bo Wu in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 9 (September 2019)
Analysis of free image-based modelling systems applied to support topographic measurements / José Miguel Caldera-Cordero in Survey review, vol 51 n° 367 (July 2019)
Robust structure from motion based on relative rotations and tie points / Xin Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 5 (May 2019)