Descriptor
Documents available in this category (9369)
Automated detection of arbitrarily shaped buildings in complex environments from monocular VHR optical satellite imagery / Ali Ozgun Ok in IEEE Transactions on geoscience and remote sensing, vol 51 n° 3 Tome 2 (March 2013)
[article]
Title: Automated detection of arbitrarily shaped buildings in complex environments from monocular VHR optical satellite imagery
Document type: Article/Communication
Authors: Ali Ozgun Ok; Caglar Seranas; Baris Yuksel
Publication year: 2013
Pagination: pp 1701 - 1717
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] détection de régions
[Termes IGN] détection du bâti
[Termes IGN] image à très haute résolution
[Termes IGN] image Geoeye
[Termes IGN] image Quickbird
[Termes IGN] information complexe
[Termes IGN] ombre
[Termes IGN] partition des données
Abstract: (Author) This paper introduces a new approach for the automated detection of buildings from monocular very high resolution (VHR) optical satellite images. First, we investigate shadow evidence to focus on building regions. To do that, we propose a new fuzzy landscape generation approach to model the directional spatial relationship between buildings and their shadows. Once all landscapes are collected, a pruning process is developed to eliminate the landscapes that may occur due to non-building objects. The final building regions are detected by the GrabCut partitioning approach. In this paper, the input requirements of GrabCut partitioning are automatically extracted from the previously determined shadow and landscape regions, so that the approach achieves efficient, fully automated detection of buildings. Extensive experiments performed on 20 test sites selected from a set of QuickBird and GeoEye-1 VHR images showed that the proposed approach accurately detects buildings with arbitrary shapes and sizes in complex environments. The tests also revealed that even under challenging environmental and illumination conditions, reasonable building detection performance could be achieved by the proposed approach.
Record number: A2013-135
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2012.2207905
Online: https://doi.org/10.1109/TGRS.2012.2207905
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32273
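The directional fuzzy landscape described in this abstract can be illustrated with a toy NumPy sketch. This is not the authors' formulation — the function name, the linear distance decay, and the scoring rule are all illustrative assumptions; it only shows the idea of a membership map that peaks on the expected building side of a shadow region.

```python
import numpy as np

def fuzzy_landscape(shadow_mask, direction, max_dist=5.0):
    """Toy directional fuzzy landscape (illustrative, not the paper's model).

    For each pixel, membership is the best score over all shadow pixels:
    angular alignment with `direction` (clipped to [0, 1]) times a linear
    distance decay that reaches 0 at `max_dist`.
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    h, w = shadow_mask.shape
    sy, sx = np.nonzero(shadow_mask)          # shadow pixel coordinates
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            vy, vx = y - sy, x - sx           # vectors shadow -> pixel
            dist = np.hypot(vy, vx)
            safe = np.where(dist > 0, dist, 1.0)
            align = np.where(dist > 0,
                             (vy * direction[0] + vx * direction[1]) / safe,
                             0.0)
            score = np.clip(align, 0.0, 1.0) * np.clip(1.0 - dist / max_dist,
                                                       0.0, 1.0)
            out[y, x] = score.max() if score.size else 0.0
    return out
```

With the sun direction known, `direction` would point from shadow pixels toward the expected building side; the paper then prunes spurious landscapes and feeds the result to GrabCut, steps omitted here.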
in IEEE Transactions on geoscience and remote sensing > vol 51 n° 3 Tome 2 (March 2013) . - pp 1701 - 1717 [article]
Copies (1): barcode 065-2013031B | call number RAB | support: journal | location: documentation centre | section: reserve L003 | available

Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb / Jeremy W. Crampton in Cartography and Geographic Information Science, vol 40 n° 2 (March 2013)
[article]
Title: Beyond the geotag: situating ‘big data’ and leveraging the potential of the geoweb
Document type: Article/Communication
Authors: Jeremy W. Crampton; Mark Graham; Ate Poorthuis; Taylor Shelton; Monica Stephens; et al.
Publication year: 2013
Pagination: pp 130 - 139
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Analyse spatiale
[Termes IGN] données localisées des bénévoles
[Termes IGN] données massives
[Termes IGN] géobalise
[Termes IGN] géoréférencement
[Termes IGN] GeoWeb
[Termes IGN] visualisation cartographique
Abstract: (Author) This article presents an overview and initial results of a geoweb analysis designed to provide the foundation for a continued discussion of the potential impacts of ‘big data’ for the practice of critical human geography. While Haklay's (2012) observation that social media content is generated by a small number of ‘outliers’ is correct, we explore alternative methods and conceptual frameworks that might allow one to overcome the limitations of previous analyses of user-generated geographic information. Though more illustrative than explanatory, the results of our analysis suggest a cautious approach toward the use of the geoweb and big data, one as mindful of their shortcomings as of their potential. More specifically, we propose five extensions to the typical practice of mapping georeferenced data that we call going ‘beyond the geotag’: (1) going beyond social media that is explicitly geographic; (2) going beyond spatialities of the ‘here and now’; (3) going beyond the proximate; (4) going beyond the human, to data produced by bots and automated systems; and (5) going beyond the geoweb itself, by leveraging these sources against ancillary data, such as news reports and census data. We see these extensions of existing methodologies as providing the potential for overcoming existing limitations on the analysis of the geoweb. The principal case study focuses on the widely reported riots following the University of Kentucky men's basketball team's victory in the 2012 NCAA championship and their manifestation within the geoweb. Drawing upon a database of archived Twitter activity – including all geotagged tweets since December 2011 – we analyze the geography of tweets that used a specific hashtag (#LexingtonPoliceScanner) in order to demonstrate the potential application of our methodological and conceptual program. By tracking the social, spatial, and temporal diffusion of this hashtag, we show how large databases of such spatially referenced internet content can be used in a more systematic way for critical social and spatial analysis.
Record number: A2013-747
Authors' affiliation: non-IGN
Theme: GEOMATIQUE/SOCIETE NUMERIQUE
Nature: Article
DOI: 10.1080/15230406.2013.777137
Online: https://doi.org/10.1080/15230406.2013.777137
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32883
in Cartography and Geographic Information Science > vol 40 n° 2 (March 2013) . - pp 130 - 139 [article]
Copies (1): barcode 032-2013021 | call number RAB | support: journal | location: documentation centre | section: reserve L003 | available

Comparison of forest attributes derived from two terrestrial lidar systems / Mark J. Ducey in Photogrammetric Engineering & Remote Sensing, PERS, vol 79 n° 3 (March 2013)
[article]
Title: Comparison of forest attributes derived from two terrestrial lidar systems
Document type: Article/Communication
Authors: Mark J. Ducey; Rasmus Astrup; et al.
Publication year: 2013
Pagination: pp 245 - 257
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] analyse comparative
[Termes IGN] attribut
[Termes IGN] canopée
[Termes IGN] caractérisation
[Termes IGN] Colombie-Britannique (Canada)
[Termes IGN] données lidar
[Termes IGN] forêt
[Termes IGN] instrumentation Leica
[Termes IGN] instrumentation Riegl
[Termes IGN] inventaire forestier (techniques et méthodes)
[Termes IGN] lasergrammétrie
[Termes IGN] semis de points
[Termes IGN] télémétrie laser terrestre
[Termes IGN] tronc
Abstract: (Author) Terrestrial lidar (TLS) is an emerging technology for deriving forest attributes, including conventional inventory and canopy characterizations. However, little is known about the influence of scanner specifications on derived forest parameters. We compared two TLS systems at two sites in British Columbia. Common scanning benchmarks and identical algorithms were used to obtain estimates of tree diameter, position, and canopy characteristics. Visualization of range images and point clouds showed clear differences, even though both scanners were relatively high-resolution instruments. These translated into quantifiable differences in impulse penetration, in the characterization of stems and crowns far from the scan location, and in gap fraction. Differences between scanners in estimates of effective plant area index were greater than differences between sites. Both scanners provided a detailed digital model of forest structure, and gross structural characterizations (including crown dimensions and position) were relatively robust, but comparison of canopy density metrics may require consideration of scanner attributes.
Record number: A2013-104
Authors' affiliation: non-IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.14358/PERS.79.3.245
Online: https://doi.org/10.14358/PERS.79.3.245
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32242
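The effective plant area index compared in this study is conventionally obtained by inverting the Beer–Lambert light-transmission law from the measured canopy gap fraction. The abstract does not give the authors' estimator, so the sketch below shows only the standard textbook inversion, with the extinction coefficient `k` as an assumed input.

```python
import numpy as np

def effective_pai(gap_fraction, k=0.5):
    """Effective plant area index from canopy gap fraction via the
    Beer-Lambert law: T = exp(-k * PAI)  =>  PAI = -ln(T) / k.

    `k` (extinction coefficient) depends on leaf angle distribution and
    view zenith angle; 0.5 is a common spherical-leaf-angle default.
    """
    gap_fraction = np.asarray(gap_fraction, dtype=float)
    return -np.log(gap_fraction) / k
```

Because TLS-derived gap fraction depends on beam divergence and range, two scanners can report different `gap_fraction` for the same stand, which is one way scanner attributes propagate into the index differences the paper reports.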
in Photogrammetric Engineering & Remote Sensing, PERS > vol 79 n° 3 (March 2013) . - pp 245 - 257 [article]

Correlation analysis of camera self-calibration in close range photogrammetry / Rongfu Tang in Photogrammetric record, vol 28 n° 141 (March - May 2013)
[article]
Title: Correlation analysis of camera self-calibration in close range photogrammetry
Document type: Article/Communication
Authors: Rongfu Tang; Dieter Fritsch
Publication year: 2013
Pagination: pp 86 - 95
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Acquisition d'image(s) et de donnée(s)
[Termes IGN] auto-étalonnage
[Termes IGN] corrélation
[Termes IGN] déformation d'image
[Termes IGN] étalonnage de chambre métrique
[Termes IGN] photogrammétrie métrologique
Abstract: (Author) While the Brown self-calibration model has been widely used in close range photogrammetry for over 40 years, the negative effects of the well-known high correlations between its parameters are not clear. This paper presents a novel view and study of these correlations, which are shown to be inherent in the Brown model due to its polynomial nature; they occur exactly as with additional parameters in aerial photogrammetry. Although the principal point is highly correlated with the decentring distortion parameters, it can be located precisely (to within a few pixels) in self-calibration with an appropriate image configuration. An alternative model of the in-plane distortion is proposed, which helps to reduce the correlation with the principal distance.
Record number: A2013-153
Authors' affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1111/phor.12009
Online publication date: 27/03/2013
Online: https://doi.org/10.1111/phor.12009
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32291
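The Brown model analysed in this paper combines a radial polynomial with decentring terms, and it is this polynomial structure that induces the parameter correlations. A minimal sketch, assuming the common k1/k2/p1/p2 coefficient convention in normalized image coordinates (the paper may use more terms and a different notation):

```python
def brown_distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply Brown(-Conrady) lens distortion to a normalized image point.

    Radial part: (1 + k1*r^2 + k2*r^4); decentring part: the p1/p2 terms.
    """
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Because the decentring terms share low-order powers of x and y with a principal-point shift, the two can partially absorb each other in a bundle adjustment — the kind of correlation the paper quantifies.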
in Photogrammetric record > vol 28 n° 141 (March - May 2013) . - pp 86 - 95 [article]
Copies (1): barcode 106-2013011 | call number RAB | support: journal | location: documentation centre | section: reserve L003 | available

Detection and 3D reconstruction of traffic signs from multiple view color images / Bahman Soheilian in ISPRS Journal of photogrammetry and remote sensing, vol 77 (March 2013)
[article]
Title: Detection and 3D reconstruction of traffic signs from multiple view color images
Document type: Article/Communication
Authors: Bahman Soheilian; Nicolas Paparoditis; Bruno Vallet
Publication year: 2013
Projects: CityVIP / Paparoditis, Nicolas
Pagination: pp 1 - 20
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement automatique
[Termes IGN] couleur (variable spectrale)
[Termes IGN] figure géométrique
[Termes IGN] géométrie épipolaire
[Termes IGN] programmation par contraintes
[Termes IGN] Ransac (algorithme)
[Termes IGN] reconstruction 3D
[Termes IGN] segmentation d'image
[Termes IGN] signalisation routière
Abstract: (Author) 3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. In order to reflect reality, the reconstruction process should meet both accuracy and precision requirements. To reach such a valid reconstruction from calibrated multi-view images, accurate and precise extraction of the signs in every individual view is a must. This paper first presents an automatic pipeline for identifying and extracting the silhouettes of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimal 3D silhouette for the detected signs. The first step, called detection, applies a color-based segmentation to generate ROIs (regions of interest) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral or a triangle to edge points. A ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove the perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouette in the image plane is represented by an ellipse, a quadrilateral or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach taking into account epipolar geometry as well as the similarity of the categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm integrating a priori knowledge about the 3D shapes of road signs as constraints. The algorithm was assessed on real and synthetic images and reached an average accuracy of 3.5 cm for position and 4.5° for orientation.
Record number: A2013-111
Authors' affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2012.11.009
Online publication date: 26/01/2013
Online: https://doi.org/10.1016/j.isprsjprs.2012.11.009
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32249
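The detection step fits an ellipse, a quadrilateral or a triangle to edge points and rejects ROIs that no shape fits precisely; the record's descriptors also list RANSAC. The sample–score–keep loop behind such robust fitting can be sketched on the simplest primitive, a line (the paper fits richer shapes; everything below is an illustrative stand-in, not the authors' code):

```python
import numpy as np

def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    """Generic RANSAC loop on 2D points, shown for a line primitive.

    Repeatedly sample 2 points, build the line through them, count points
    within `thresh` of it, and keep the model with the most inliers.
    """
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        dx, dy = q[0] - p[0], q[1] - p[1]
        norm = np.hypot(dx, dy)
        if norm == 0.0:
            continue  # degenerate sample (coincident points)
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0])) / norm
        inliers = int((dist < thresh).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (p, q), inliers
    return best_model, best_inliers
```

For an ellipse or triangle the sampling size and the point-to-model distance change, but the loop structure — and the rejection of ROIs whose best model leaves too few inliers — stays the same.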
in ISPRS Journal of photogrammetry and remote sensing > vol 77 (March 2013) . - pp 1 - 20 [article]
Copies (1): barcode 081-2013031 | call number RAB | support: journal | location: documentation centre | section: reserve L003 | available

Development of a Coordinate Transformation method for direct georeferencing in map projection frames / Haitao Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 77 (March 2013)
Extracting polygonal building footprints from digital surface models: A fully-automatic global optimization framework / Mathieu Brédif in ISPRS Journal of photogrammetry and remote sensing, vol 77 (March 2013)
Improving Cartosat-1 DEM accuracy using synthetic stereo pair and triplet / D. Giribabu in ISPRS Journal of photogrammetry and remote sensing, vol 77 (March 2013)
InSAR-derived coseismic deformation of the 2010 Southeastern Iran earthquake (M6.5) and its relationship with the tectonic background in the South of Lut Block / Tomokazu Kobayashi in Bulletin of the GeoSpatial Information authority of Japan, vol 60 (March 2013)
Land cover dependant error intermap IFSAR DTM: Lidar comparison and fusion potential / S. Coveney in Photogrammetric Engineering & Remote Sensing, PERS, vol 79 n° 3 (March 2013)
Photogrammetric techniques for the determination of spatio-temporal velocity fields at glaciar San Rafael, Chile / Hans-Gerd Maas in Photogrammetric Engineering & Remote Sensing, PERS, vol 79 n° 3 (March 2013)
Sampling piecewise convex unmixing and endmember extraction / Alina Zare in IEEE Transactions on geoscience and remote sensing, vol 51 n° 3 Tome 2 (March 2013)
Topological gradient connection analysis for feature detection / Chao-Yuan Lo in Photogrammetric record, vol 28 n° 141 (March - May 2013)
Virtual worlds for photogrammetric image-based simulation and learning / Eduardo J. Piatti in Photogrammetric record, vol 28 n° 141 (March - May 2013)
XXIInd [22nd] international congress of photogrammetry and remote sensing / Paul R. Newby in Photogrammetric record, vol 28 n° 141 (March - May 2013)
Automatic orientation and 3D modelling from markerless rock art imagery / J. Lerma in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013)
Combining terrestrial stereophotogrammetry, DGPS and GIS-based 3D voxel modelling in the volumetric recording of archaeological features / H. Orengo in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013)
Dual-Polarimetric signatures of vegetation – a case study Biebrza / Dariusz Ziolkowski in Geoinformation issues, vol 5 n° 1 (2013)
Filter design for GOCE gravity gradients / Zs. Polgár in Geocarto international, vol 28 n° 1-2 (February - May 2013)
From LiDAR data to forest representation on multi-scale / Freiderike Schwarzbach in Cartographic journal (the), vol 50 n° 1 (February 2013)
A graph-based classification method for hyperspectral images / J. Bai in IEEE Transactions on geoscience and remote sensing, vol 51 n° 2 (February 2013)
Ground filtering and vegetation mapping using multi-return terrestrial laser scanning / Francesco Pirotti in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013)
Model driven reconstruction of roofs from sparse LIDAR point clouds / A. Henn in ISPRS Journal of photogrammetry and remote sensing, vol 76 (February 2013)
A multi-scale approach to mapping canopy height / Gordon M. Green in Photogrammetric Engineering & Remote Sensing, PERS, vol 79 n° 2 (February 2013)