Authority detail
Linked authorities:
Congress name:
ISPRS 2016, Commission 3, 23rd international congress
Congress start date:
12/07/2016
Congress end date:
19/07/2016
Congress city:
Prague
Congress country:
Czech Republic
Congress proceedings website:
Available documents (6)
Series title: 23rd ISPRS international congress, 3
Title: Commission III
Document type: Conference proceedings
Authors: Lena Halounova, scientific editor; Konrad Schindler, scientific editor; A. Limpouch, scientific editor; T. Pajdla, scientific editor; V. Safar, scientific editor; Helmut Mayer, scientific editor; Sander J. Oude Elberink, scientific editor; Clément Mallet, scientific editor; Franz Rottensteiner, scientific editor; Mathieu Brédif, scientific editor; Jan Skaloud, scientific editor; Uwe Stilla, scientific editor
Congress: ISPRS 2016, 23rd international congress (12-19 July 2016; Prague, Czech Republic), author
Publisher: International Society for Photogrammetry and Remote Sensing ISPRS
Publication year: 2016
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, no. 41-B3
Conference: ISPRS 2016, Commission 3, 23rd international congress, 12/07/2016 - 19/07/2016, Prague, Czech Republic, ISPRS OA Archives, Commission 3
Languages: English (eng)
Record number: 17375C
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Nature: Proceedings
nature-HAL: DirectOuvrColl/Actes
Online publication date: 09/06/2016
Online: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B3/index.html
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91850
Contains:
- Automatic detection of clouds and shadows using high resolution satellite image time series / Nicolas Champion (2016)
- Forest stand segmentation using airborne lidar data and very high resolution multispectral imagery / Clément Dechesne (2016)
- Evaluation of SIFT and SURF for vision based localization / Xiaozhi Qu (2016)
- Uncertainty propagation for terrestrial mobile laser scanner / Miloud Mezian (2016)
- The iQmulus urban showcase: automatic tree classification and identification in huge mobile mapping point clouds / Jan Böhm (2016)
Automatic detection of clouds and shadows using high resolution satellite image time series / Nicolas Champion (2016)
Title: Automatic detection of clouds and shadows using high resolution satellite image time series
Document type: Article/Communication
Authors: Nicolas Champion, author
Publisher: International Society for Photogrammetry and Remote Sensing ISPRS
Publication year: 2016
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, no. 41-B3
Projects: 1-No project /
Conference: ISPRS 2016, Commission 3, 23rd international congress, 12/07/2016 - 19/07/2016, Prague, Czech Republic, ISPRS OA Archives, Commission 3
Extent: pp 475 - 479
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] shadow detection
[IGN terms] cloud detection
[IGN terms] Landsat-8 image
[IGN terms] Pléiades-HR image
[IGN terms] orthoimage
[IGN terms] surface reflectance
[IGN terms] image sequence
[IGN terms] time series
Abstract: (author) Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are firstly extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and pixels of the input ortho-image are labelled as seeds if the difference in reflectance (in the blue channel) with the overlapping ortho-images is greater than a given threshold. Clouds are eventually delineated using a region-growing method based on radiometric and homogeneity criteria. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
Record number: C2016-038
Author affiliation: IGN (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.5194/isprs-archives-XLI-B3-475-2016
Online publication date: 09/06/2016
Online: http://dx.doi.org/10.5194/isprs-archives-XLI-B3-475-2016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91849
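The abstract above reduces the detection to two threshold tests: a blue-channel difference against overlapping ortho-images for cloud seeds, and a NIR difference against a per-pixel median "synthetic" image for shadows. The following is a minimal NumPy sketch of those two tests only, assuming co-registered reflectance stacks; the array layout, threshold values, aggregation choice and function names are illustrative assumptions, not IGN's implementation, and the region-growing refinement is omitted.

```python
# Minimal sketch of the two threshold tests described in the abstract above.
# Shapes, thresholds and function names are illustrative assumptions.
import numpy as np

def cloud_seeds(blue, overlapping_blues, threshold=0.25):
    """Label a pixel as a cloud seed when its blue reflectance exceeds that of
    at least one co-registered overlapping ortho-image by more than `threshold`
    (the aggregation over overlapping images is an illustrative choice)."""
    diff = blue[None, :, :] - np.stack(overlapping_blues)  # (n_images, H, W)
    return np.any(diff > threshold, axis=0)

def shadow_mask(nir, all_nir, cloud_masks, threshold=-0.1):
    """Label a pixel as shadow when its NIR reflectance falls below the
    per-pixel median of the time series by more than |threshold|.
    Cloudy pixels are excluded from the median, as described in the paper."""
    stack = np.stack(all_nir).astype(float)       # (n_images, H, W)
    stack[np.stack(cloud_masks)] = np.nan         # ignore pixels flagged as cloud
    synthetic = np.nanmedian(stack, axis=0)       # "synthetic" ortho-image
    return (nir - synthetic) < threshold
```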
Evaluation of SIFT and SURF for vision based localization / Xiaozhi Qu (2016)
Title: Evaluation of SIFT and SURF for vision based localization
Document type: Article/Communication
Authors: Xiaozhi Qu, author; Bahman Soheilian, author; Emmanuel Habets, author; Nicolas Paparoditis, author
Publisher: International Society for Photogrammetry and Remote Sensing ISPRS
Publication year: 2016
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, no. 41-B3
Conference: ISPRS 2016, Commission 3, 23rd international congress, 12/07/2016 - 19/07/2016, Prague, Czech Republic, ISPRS OA Archives, Commission 3
Extent: pp 685 - 692
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] local bundle adjustment
[IGN terms] feature extraction
[IGN terms] vision-based localization
[IGN terms] interest point
[IGN terms] SIFT (algorithm)
[IGN terms] SURF (algorithm)
Abstract: (author) Vision-based localization is widely investigated for autonomous navigation and robotics. One of the basic steps of vision-based localization is the extraction of interest points in images captured by the embedded camera. In this paper, the SIFT and SURF extractors were chosen to evaluate their performance in localization. Four street-view image sequences captured by a mobile mapping system were used for the evaluation, and both SIFT and SURF were tested at different image scales. In addition, the impact of the interest point distribution was also studied. We evaluated performance from four aspects: repeatability, precision, accuracy and runtime. The local bundle adjustment method was applied to refine the pose parameters and the 3D coordinates of tie points. According to the results of our experiments, SIFT was more reliable than SURF. Moreover, both the accuracy and the efficiency of localization can be improved if the distribution of feature points is well constrained for SIFT.
Record number: C2016-039
Author affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.5194/isprs-archives-XLI-B3-685-2016
Online publication date: 10/06/2016
Online: https://doi.org/10.5194/isprs-archives-XLI-B3-685-2016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91851
Digital documents
in open access
Evaluation of SIFT and SURF ... - publisher's PDF (Adobe Acrobat PDF)
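As a companion to the comparison described in the record above, here is an illustrative OpenCV snippet that extracts SIFT and SURF keypoints from a single image and times both detectors. It is not the authors' evaluation pipeline: the image path is hypothetical, and SURF is only available in opencv-contrib builds compiled with the non-free modules.

```python
# Illustrative SIFT vs. SURF keypoint extraction with OpenCV (not the paper's pipeline).
import time
import cv2

def extract(detector, gray):
    """Detect keypoints and descriptors, returning (keypoints, descriptors, seconds)."""
    t0 = time.perf_counter()
    kps, desc = detector.detectAndCompute(gray, None)
    return kps, desc, time.perf_counter() - t0

gray = cv2.imread("street_view_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

sift = cv2.SIFT_create()
kps_sift, _, t_sift = extract(sift, gray)
print(f"SIFT: {len(kps_sift)} keypoints in {t_sift:.3f}s")

try:
    # SURF is patented and requires an opencv-contrib build with non-free modules.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kps_surf, _, t_surf = extract(surf, gray)
    print(f"SURF: {len(kps_surf)} keypoints in {t_surf:.3f}s")
except AttributeError:
    print("SURF unavailable in this OpenCV build")
```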
Forest stand segmentation using airborne lidar data and very high resolution multispectral imagery / Clément Dechesne (2016)
Title: Forest stand segmentation using airborne lidar data and very high resolution multispectral imagery
Document type: Article/Communication
Authors: Clément Dechesne, author; Clément Mallet, author; Arnaud Le Bris, author; Valérie Gouet-Brunet, author; Alexandre Hervieu, author
Publisher: International Society for Photogrammetry and Remote Sensing ISPRS
Publication year: 2016
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, no. 41-B3
Projects: 1-No project /
Conference: ISPRS 2016, Commission 3, 23rd international congress, 12/07/2016 - 19/07/2016, Prague, Czech Republic, ISPRS OA Archives, Commission 3
Extent: pp 207 - 214
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] graph-cut algorithm
[IGN terms] supervised classification
[IGN terms] tree detection
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] very high resolution image
[IGN terms] multiband image
[IGN terms] forest inventory (techniques and methods)
[IGN terms] point cloud
Abstract: (author) Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed through visual analysis by human operators on very high resolution (VHR) optical images. This work is highly time-consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning (lidar) data and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database update is proposed. The multispectral images give access to the tree species, whereas the 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and combined with the pixel-based feature map in an energy-based framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land-cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching rates between 94% and 99%).
Record number: C2016-040
Author affiliation: LASTIG MATIS (2012-2019)
Theme: FOREST/IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.5194/isprs-archives-XLI-B3-207-2016
Online publication date: 09/06/2016
Online: https://doi.org/10.5194/isprs-archives-XLI-B3-207-2016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91852
Digital documents
in open access
Forest stand segmentation ... - publisher's PDF (Adobe Acrobat PDF)
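The regularization step in the abstract above (a classifier probability map smoothed by graph-cut energy minimization) can be illustrated in a much reduced binary setting with PyMaxflow; the paper itself uses QPBO with α-expansion over several species labels. The probability map, smoothness weight and array shapes below are illustrative assumptions, not the authors' implementation.

```python
# Reduced binary illustration of "probability map + graph-cut" smoothing with PyMaxflow.
# The paper uses QPBO with alpha-expansion over multiple labels; values here are illustrative.
import numpy as np
import maxflow

def regularize(prob, smoothness=2.0, eps=1e-6):
    """Return a binary segmentation of a probability map, smoothed by a graph cut."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(prob.shape)
    # Pairwise (Potts-style) term: constant penalty when 4-connected neighbours disagree.
    g.add_grid_edges(nodeids, smoothness)
    # Unary terms: negative log-likelihoods derived from the classifier probabilities.
    g.add_grid_tedges(nodeids, -np.log(1.0 - prob + eps), -np.log(prob + eps))
    g.maxflow()
    return g.get_grid_segments(nodeids)  # True/False segments of the minimum cut

# Hypothetical usage: `prob` stands in for the species probability map produced by
# the object-level supervised classification described in the abstract.
prob = np.random.rand(200, 200)
labels = regularize(prob)  # invert the mask if the opposite segment is the class of interest
```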
The iQmulus urban showcase: automatic tree classification and identification in huge mobile mapping point clouds / Jan Böhm (2016)
Title: The iQmulus urban showcase: automatic tree classification and identification in huge mobile mapping point clouds
Document type: Article/Communication
Authors: Jan Böhm, author; Mathieu Brédif, author; T. Gierlinger, author; M. Krämer, author; R.E. Lindenberg, author; K. Liu, author; F. Michel, author; B. Sirmacek, author
Publisher: International Society for Photogrammetry and Remote Sensing ISPRS
Publication year: 2016
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, no. 41-B3
Projects: IQmulus / Métral, Claudine
Conference: ISPRS 2016, Commission 3, 23rd international congress, 12/07/2016 - 19/07/2016, Prague, Czech Republic, ISPRS OA Archives, Commission 3
Extent: pp 301 - 307
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] principal component analysis
[IGN terms] urban tree
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] big data
[IGN terms] point cloud
[IGN terms] Spark
[IGN terms] Toulouse
[IGN terms] spatial data processing
Abstract: (author) Current 3D data capture, as implemented on airborne or mobile laser scanning systems for example, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study, the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in three directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
Record number: C2016-041
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.5194/isprs-archives-XLI-B3-301-2016
Online publication date: 09/06/2016
Online: https://doi.org/10.5194/isprs-archives-XLI-B3-301-2016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91853
Digital documents
in open access
The iQmulus urban showcase ... - publisher's PDF (Adobe Acrobat PDF)
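The PCA-driven local dimensionality analysis mentioned in the abstract above boils down to eigenvalue ratios of each point's neighbourhood covariance: points that scatter in three directions (vegetation-like) have three comparable eigenvalues. Below is a small NumPy/SciPy sketch of that per-point computation; the neighbourhood size, the scattering test and the data are illustrative assumptions, and the paper runs this at a far larger scale through a Spark implementation on the IQmulus platform.

```python
# Illustrative per-point local dimensionality features from neighbourhood PCA.
# Neighbourhood size (k), the scattering test and the data are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(points, k=20):
    """Return (linearity, planarity, scattering) per point from local PCA eigenvalues."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                 # k nearest neighbours per point
    feats = np.empty((len(points), 3))
    for i, neighbours in enumerate(idx):
        nbr = points[neighbours]
        cov = np.cov(nbr - nbr.mean(axis=0), rowvar=False)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        l1 = max(l1, 1e-12)                          # guard against degenerate neighbourhoods
        feats[i] = ((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1)
    return feats

# Hypothetical usage: keep points whose dominant behaviour is 3D scattering.
points = np.random.rand(1000, 3)                     # stand-in for a mobile mapping tile
lin, pla, sca = dimensionality_features(points).T
tree_candidates = points[sca > np.maximum(lin, pla)]
```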