Descripteur
Documents disponibles dans cette catégorie (8408)
Alternative procedure to improve the positioning accuracy of orthomosaic images acquired with Agisoft Metashape and DJI P4 multispectral for crop growth observation / Toshihiro Sakamoto in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 5 (May 2022)
[article]
Titre : Alternative procedure to improve the positioning accuracy of orthomosaic images acquired with Agisoft Metashape and DJI P4 multispectral for crop growth observation
Type de document : Article/Communication
Auteurs : Toshihiro Sakamoto, Auteur ; Daisuke Ogawa, Auteur ; Satoko Hiura, Auteur ; Nobusuke Iwasaki, Auteur
Année de publication : 2022
Article en page(s) : pp 323 - 332
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] bande spectrale
[Termes IGN] blé (céréale)
[Termes IGN] chlorophylle
[Termes IGN] image à haute résolution
[Termes IGN] image captée par drone
[Termes IGN] indice de végétation
[Termes IGN] orthophotoplan numérique
[Termes IGN] point d'appui
[Termes IGN] précision du positionnement
[Termes IGN] rizière
[Termes IGN] structure-from-motion
Résumé : (Auteur) Vegetation indices (VIs), such as the green chlorophyll index and normalized difference vegetation index, are calculated from visible and near-infrared band images for plant diagnosis in crop breeding and field management. The DJI P4 Multispectral drone combined with the Agisoft Metashape Structure from Motion/Multi View Stereo software is among the most cost-effective equipment for creating high-resolution orthomosaic VI images. However, the manufacturer's procedure results in considerable location estimation inaccuracy (average error: 3.27–3.45 cm) and alignment errors between spectral bands (average error: 2.80–2.84 cm). We developed alternative processing procedures to overcome these issues, achieving higher positioning accuracy (average error: 1.32–1.38 cm) and better alignment accuracy between spectral bands (average error: 0.26–0.32 cm). The proposed procedure enables precise VI analysis, especially when using the green chlorophyll index for corn, and may help accelerate the application of remote sensing techniques to agriculture.
Numéro de notice : A2022-528
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.14358/PERS.21-00064R2
Date de publication en ligne : 01/05/2022
En ligne : https://doi.org/10.14358/PERS.21-00064R2
Format de la ressource électronique : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101379
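The two vegetation indices named in the abstract have standard definitions: NDVI = (NIR − Red) / (NIR + Red) and the green chlorophyll index CIgreen = NIR / Green − 1. A minimal sketch of computing them per pixel from band reflectances — the array values below are made up for illustration, and this is not the authors' processing pipeline:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def ci_green(nir, green):
    """Green chlorophyll index: NIR / Green - 1."""
    return np.asarray(nir, dtype=float) / np.asarray(green, dtype=float) - 1.0

# Hypothetical per-pixel reflectances for two vegetation-covered pixels
nir = np.array([0.50, 0.45])
red = np.array([0.10, 0.15])
green = np.array([0.20, 0.25])
print(ndvi(nir, red))       # higher values indicate denser green vegetation
print(ci_green(nir, green))
```

In practice these band arrays would come from the aligned orthomosaic band images; misalignment between bands of even a few centimetres mixes reflectances from different plants, which is why the alignment errors quoted above matter for VI accuracy.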
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 5 (May 2022) . - pp 323 - 332
An empirical study on the effects of temporal trends in spatial patterns on animated choropleth maps / Paweł Cybulski in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
[article]
Titre : An empirical study on the effects of temporal trends in spatial patterns on animated choropleth maps
Type de document : Article/Communication
Auteurs : Paweł Cybulski, Auteur
Année de publication : 2022
Article en page(s) : n° 273
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Termes IGN] analyse de groupement
[Termes IGN] analyse visuelle
[Termes IGN] carte choroplèthe
[Termes IGN] cartographie animée
[Termes IGN] lecture de carte
[Termes IGN] oculométrie
[Termes IGN] reconnaissance de formes
[Termes IGN] visualisation cartographique
[Vedettes matières IGN] Cartologie
Résumé : (auteur) Animated cartographic visualization incorporates the concept of geomedia presented in this Special Issue. The presented study aims to examine the effectiveness of spatial pattern and temporal trend recognition on animated choropleth maps. In a controlled laboratory experiment with participants and eye tracking, fifteen animated maps were used to show different spatial patterns and temporal trends. The participants' task was to correctly detect the patterns and trends on a choropleth map. The results show that effective spatial pattern and temporal trend recognition on a choropleth map is related to participants' visual behavior. Visual attention clustered in the central part of the choropleth map supports effective spatio-temporal relationship recognition. The larger the area covered by the fixation cluster, the higher the probability of correct temporal trend and spatial pattern recognition. However, animated choropleth maps are more suitable for presenting temporal trends than spatial patterns. The difficulty of correctly recognizing spatio-temporal relationships argues for techniques that support effective visual search, such as highlighting, cartographic redundancy, or interactive tools. For end-users, the study reveals the need for a specific visual strategy: focusing on the central part of the map is the most effective strategy for recognizing spatio-temporal relationships.
Numéro de notice : A2022-358
Affiliation des auteurs : non IGN
Thématique : GEOMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.3390/ijgi11050273
Date de publication en ligne : 20/04/2022
En ligne : https://doi.org/10.3390/ijgi11050273
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100571
in ISPRS International journal of geo-information > vol 11 n° 5 (May 2022) . - n° 273
City3D: Large-scale building reconstruction from airborne LiDAR point clouds / Jin Huang in Remote sensing, vol 14 n° 9 (May-1 2022)
[article]
Titre : City3D: Large-scale building reconstruction from airborne LiDAR point clouds
Type de document : Article/Communication
Auteurs : Jin Huang, Auteur ; Jantien E. Stoter, Auteur ; Ravi Peters, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 2254
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] empreinte
[Termes IGN] mur
[Termes IGN] polygonale
[Termes IGN] primitive géométrique
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] semis de points
[Termes IGN] toit
[Termes IGN] Triangular Regular Network
[Termes IGN] triangulation de Delaunay
Résumé : (auteur) We present a fully automatic approach for reconstructing compact 3D building models from large-scale airborne point clouds. A major challenge of urban reconstruction from airborne LiDAR point clouds is that the vertical walls are typically missing. Based on the observation that urban buildings typically consist of planar roofs connected to the ground by vertical walls, we propose an approach to infer the vertical walls directly from the data. With the planar segments of both roofs and walls, we hypothesize the faces of the building surface, and the final model is obtained with an extended hypothesis-and-selection-based polygonal surface reconstruction framework. Specifically, we introduce a new energy term to encourage roof preferences and two additional hard constraints into the optimization step to ensure correct topology and enhance detail recovery. Experiments on various large-scale airborne LiDAR point clouds demonstrate that the method is superior to state-of-the-art methods in terms of reconstruction accuracy and robustness. In addition, we have generated a new dataset with our method consisting of the point clouds and 3D models of 20k real-world buildings. We believe this dataset can stimulate research in urban reconstruction from airborne LiDAR point clouds and the use of 3D city models in urban applications.
Numéro de notice : A2022-387
Affiliation des auteurs : non IGN
Thématique : GEOMATIQUE/IMAGERIE
Nature : Article
DOI : 10.3390/rs14092254
Date de publication en ligne : 07/05/2022
En ligne : https://doi.org/10.3390/rs14092254
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100667
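The abstract's pipeline starts from planar segments of roofs and walls. As a minimal, hypothetical illustration of that one ingredient — fitting a plane to a point cluster by SVD and reading off its normal (a wall segment would show a near-horizontal normal, a roof segment a tilted or vertical one) — not the authors' City3D implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point set.
    Returns (centroid, unit normal); the normal is the right singular
    vector of the centred points with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Synthetic "roof" points lying exactly on the plane z = 0.5*x + 1
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(50, 2))
pts = np.c_[xy, 0.5 * xy[:, 0] + 1.0]
c, n = fit_plane(pts)
print(n)  # parallel (up to sign) to (-0.5, 0, 1) normalised
```

Grouping LiDAR returns into such segments, then selecting among candidate faces with an energy that rewards roof-like surfaces, is the hypothesis-and-selection idea the abstract describes.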
in Remote sensing > vol 14 n° 9 (May-1 2022) . - n° 2254
A context feature enhancement network for building extraction from high-resolution remote sensing imagery / Jinzhi Chen in Remote sensing, vol 14 n° 9 (May-1 2022)
[article]
Titre : A context feature enhancement network for building extraction from high-resolution remote sensing imagery
Type de document : Article/Communication
Auteurs : Jinzhi Chen, Auteur ; Dejun Zhang, Auteur ; Yiqi Wu, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 2276
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de contours
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] structure-from-motion
Résumé : (auteur) The complexity and diversity of buildings make it challenging to extract low-level and high-level features with strong feature representation using deep neural networks in building extraction tasks. Meanwhile, deep neural network-based methods have many network parameters, which take up a lot of memory and time in training and testing. We propose a novel fully convolutional neural network called the Context Feature Enhancement Network (CFENet) to address these issues. CFENet comprises three modules: the spatial fusion module, the focus enhancement module, and the feature decoder module. First, the spatial fusion module aggregates the spatial information of low-level features to obtain buildings' outline and edge information. Second, the focus enhancement module fully aggregates the semantic information of high-level features to filter the information of building-related attribute categories. Finally, the feature decoder module decodes the output of the above two modules to segment the buildings more accurately. In a series of experiments on the WHU Building Dataset and the Massachusetts Building Dataset, CFENet balances efficiency and accuracy better than the four methods we compared against, and achieves the best score on all five evaluation metrics: PA, PC, F1, IoU, and FWIoU. This indicates that CFENet can effectively enhance and fuse buildings' low-level and high-level features, improving building extraction accuracy.
Numéro de notice : A2022-385
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.3390/rs14092276
Date de publication en ligne : 09/05/2022
En ligne : https://doi.org/10.3390/rs14092276
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100663
in Remote sensing > vol 14 n° 9 (May-1 2022) . - n° 2276
A continuous change tracker model for remote sensing time series reconstruction / Yangjian Zhang in Remote sensing, vol 14 n° 9 (May-1 2022)
[article]
Titre : A continuous change tracker model for remote sensing time series reconstruction
Type de document : Article/Communication
Auteurs : Yangjian Zhang, Auteur ; Li Wang, Auteur ; Yuanhuizi He, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 2280
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme de filtrage
[Termes IGN] analyse harmonique
[Termes IGN] compression d'image
[Termes IGN] détection de changement
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] Leaf Area Index
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] phénologie
[Termes IGN] production primaire brute
[Termes IGN] reconstruction d'image
[Termes IGN] réflectance de surface
[Termes IGN] série temporelle
Résumé : (auteur) It is hard for current time series reconstruction methods to achieve both high-precision reconstruction and an explainable model mechanism. The goal of this paper is to improve reconstruction accuracy with a well-explained time series model, providing a new solution for high-precision time series reconstruction and related applications. We therefore developed a function-based model, the Continuous Change Tracker Model (CCTM), that achieves high precision in time series reconstruction by tracking the time series variation rate. To test the reconstruction effects, the model was applied to four types of datasets: normalized difference vegetation index (NDVI), gross primary productivity (GPP), leaf area index (LAI), and MODIS surface reflectance (MSR). Several new observations are as follows. First, the CCTM model is well explained and based on the second-order derivative theorem: it divides the yearly time series into four variation types — uniform, decelerated, accelerated, and short-periodical variations — and each variation type is represented by a designed function. Second, the CCTM model provides much better reconstruction results than the Harmonic model on the NDVI, GPP, MSR, and LAI datasets for seasonal segment reconstruction. The combined use of the Savitzky–Golay filter and the CCTM model is better than combinations of the Savitzky–Golay filter with other models. Third, the Harmonic model has the best trend-fitting ability on the yearly time series dataset, with the highest R-squared and the lowest RMSE among the four function-fitting models. However, with seasonal piecewise fitting, all four models achieve high accuracy, and the CCTM performs best. Fourth, the CCTM model can also be applied to time series image compression; two compression patterns, with 24 coefficients and 6 coefficients respectively, are proposed. The daily MSR dataset can achieve a compression ratio of 15 by using the 6-coefficient method. Finally, the CCTM model also has the potential to be applied to change detection, trend analysis, and phenology and seasonal characteristics extraction.
Numéro de notice : A2022-384
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.3390/rs14092280
Date de publication en ligne : 09/05/2022
En ligne : https://doi.org/10.3390/rs14092280
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100662
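The Harmonic model that the abstract uses as its baseline is an ordinary least-squares Fourier fit of an annual cycle. A minimal sketch of that baseline on a synthetic NDVI-like series — this is the classic harmonic fit the CCTM is compared against, not the CCTM itself, and the timestamps and signal below are invented for illustration:

```python
import numpy as np

def fit_harmonic(t, y, n_harmonics=2, period=365.0):
    """Least-squares harmonic fit: y ~ a0 + sum_k (a_k cos + b_k sin).
    Returns the coefficient vector and the fitted values."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols.append(np.cos(w))
        cols.append(np.sin(w))
    A = np.column_stack(cols)          # design matrix: 1, cos, sin, ...
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

t = np.arange(0.0, 365.0, 8.0)                      # 8-day composite dates
true = 0.4 + 0.3 * np.sin(2.0 * np.pi * t / 365.0)  # annual NDVI-like cycle
y = true + 0.01 * np.sin(t)                         # small high-frequency noise
coef, y_hat = fit_harmonic(t, y)
print(np.abs(y_hat - true).max())  # reconstruction error stays small
```

Storing only the handful of fitted coefficients instead of every observation is also the idea behind the coefficient-based compression mentioned in the abstract.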
in Remote sensing > vol 14 n° 9 (May-1 2022) . - n° 2280
A cost-effective algorithm for calibrating multiscale geographically weighted regression models / Bo Wu in International journal of geographical information science IJGIS, vol 36 n° 5 (May 2022)
Efficient convolutional neural architecture search for LiDAR DSM classification / Aili Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
Framework for automatic coral reef extraction using Sentinel-2 image time series / Qizhi Zhang in Marine geodesy, vol 45 n° 3 (May 2022)
Fusion of optical, radar and waveform LiDAR observations for land cover classification / Huiran Jin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
Human cognition based framework for detecting roads from remote sensing images / Naveen Chandra in Geocarto international, vol 37 n° 8 ([01/05/2022])
Impacts of spatiotemporal resolution and tiling on SLEUTH model calibration and forecasting for urban areas with unregulated growth patterns / Damilola Eyelade in International journal of geographical information science IJGIS, vol 36 n° 5 (May 2022)
Multi-modal temporal attention models for crop mapping from satellite time series / Vivien Sainte Fare Garnot in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
Plastic waste cleanup priorities to reduce marine pollution: A spatiotemporal analysis for Accra and Lagos with satellite data / Susmita Dasgupta in Science of the total environment, vol 839 (May 2022)
Revising cadastral data on land boundaries using deep learning in image-based mapping / Bujar Fetai in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
The role of blue green infrastructure in the urban thermal environment across seasons and local climate zones in East Africa / Xueqin Li in Sustainable Cities and Society, vol 80 (May 2022)
Unmixing-based spatiotemporal image fusion accounting for complex land cover changes / Xiaolu Jiang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
Unsupervised multi-view CNN for salient view selection and 3D interest point detection / Ran Song in International journal of computer vision, vol 130 n° 5 (May 2022)
Unveiling the complex canopy spatial structure of a Mediterranean old-growth beech (Fagus sylvatica L.) forest from UAV observations / Francesco Solano in Ecological indicators, vol 138 (May 2022)
Automated inventory of broadleaf tree plantations with UAS imagery / Aishwarya Chandrasekaran in Remote sensing, vol 14 n° 8 (April-2 2022)
Assessing surface drainage conditions at the street and neighborhood scale: A computer vision and flow direction method applied to lidar data / Cheng-Chun Lee in Computers, Environment and Urban Systems, vol 93 (April 2022)
Assessment of RTK quadcopter and structure-from-motion photogrammetry for fine-scale monitoring of coastal topographic complexity / Stéphane Bertin in Remote sensing, vol 14 n° 7 (April-1 2022)
Characterizing stream morphological features important for fish habitat using airborne laser scanning data / Spencer Dakin Kuiper in Remote sensing of environment, vol 272 (April 2022)
Coastal observation of sea surface tide and wave height using opportunity signal from Beidou GEO satellites: analysis and evaluation / Feng Wang in Journal of geodesy, vol 96 n° 4 (April 2022)
Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
Detecting land use and land cover change on Barbuda before and after the Hurricane Irma with respect to potential land grabbing: A combined volunteered geographic information and multi sensor approach / Andreas Rienow in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Determination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
Direct photogrammetry with multispectral imagery for UAV-based snow depth estimation / Kathrin Maier in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Exploring scientific literature by textual and image content using DRIFT / Ximena Pocco in Computers and graphics, vol 103 (April 2022)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Graph learning based on signal smoothness representation for homogeneous and heterogeneous change detection / David Alejandro Jimenez-Sierra in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
High-performance adaptive texture streaming and rendering of large 3D cities / Alex Zhang in The Visual Computer, vol 38 n° 4 (April 2022)
Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy / Norbert Haala in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Mining crowdsourced trajectory and geo-tagged data for spatial-semantic road map construction / Jincai Huang in Transactions in GIS, vol 26 n° 2 (April 2022)
Parcel-based summer maize mapping and phenology estimation combined using Sentinel-2 and time series Sentinel-1 data / Yanyan Wang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Research on machine intelligent perception of urban geographic location based on high resolution remote sensing images / Jun Chen in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 4 (April 2022)
Simulating future LUCC by coupling climate change and human effects based on multi-phase remote sensing data / Zihao Huang in Remote sensing, vol 14 n° 7 (April-1 2022)
Species level classification of Mediterranean sparse forests-maquis formations using Sentinel-2 imagery / Semiha Demirbaş Çağlayana in Geocarto international, vol 37 n° 6 ([01/04/2022])
The integration of multi-source remotely sensed data with hierarchically based classification approaches in support of the classification of wetlands / Aaron Judah in Canadian journal of remote sensing, vol 48 n° 2 (April 2022)
Uncertainty estimation for stereo matching based on evidential deep learning / Chen Wang in Pattern recognition, vol 124 (April 2022)
Urban land cover/use mapping and change detection analysis using multi-temporal Landsat OLI with Lidar-DEM and derived TPI / Clement E. Akumu in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 4 (April 2022)
Two-phase forest inventory using very-high-resolution laser scanning / Henrik J. Persson in Remote sensing of environment, vol 271 (March-2 2022)
A l'aide ! Je me suis perdu en zoomant / Guillaume Touya in Cartes & Géomatique, n° 247-248 (mars-juin 2022)
An approach to extracting digital elevation model for undulating and hilly terrain using de-noised stereo images of Cartosat-1 sensor / Litesh Bopche in Applied geomatics, vol 14 n° 1 (March 2022)
Automated 3D reconstruction of LoD2 and LoD1 models for All 10 million buildings of the Netherlands / Ravi Peters in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 3 (March 2022)
Automatic extraction of building geometries based on centroid clustering and contour analysis on oblique images taken by unmanned aerial vehicles / Leilei Zhang in International journal of geographical information science IJGIS, vol 36 n° 3 (March 2022)
Challenges related to the determination of altitudes of mountain peaks presented on cartographic sources / Katarzyna Chwedczuk in Geodetski vestnik, vol 66 n° 1 (March 2022)
Comparaison des images satellite et aériennes dans le domaine de la détection d’obstacles à la navigation aérienne et de leur mise à jour / Olivier de Joinville in XYZ, n° 170 (mars 2022)