ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980 -). vol 178. Published on: 01/08/2021
[issue or bulletin]
is an issue of ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980 -)
Holdings (3)
Barcode | Call number | Medium | Location | Section | Availability |
---|---|---|---|---|---|
081-2021081 | SL | Journal | Documentation centre | Journals in reading room | Available |
081-2021083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan |
081-2021082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan |
Contents
Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning / Xin Jiang in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning Document type: Article/Communication Authors: Xin Jiang, Author; Shijing Liang, Author; Xinyue He, Author; et al., Author Publication year: 2021 Pages: pp 36 - 50 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] risk mapping
[IGN terms] processing chain
[IGN terms] convolutional neural network classification
[IGN terms] Yangtze River (China)
[IGN terms] Google Earth Engine
[IGN terms] moiré radar image
[IGN terms] Sentinel SAR image
[IGN terms] flood
[IGN terms] digital surface model
[IGN terms] image segmentation
[IGN terms] superpixel
[IGN terms] hydrological monitoring
Abstract: (author) Synthetic aperture radar (SAR) has great potential for timely monitoring of flood information, as it penetrates clouds during flood events. Moreover, the proliferation of SAR satellites with high spatial and temporal resolution provides a tremendous opportunity to understand flood risk and respond to it quickly. However, traditional algorithms for extracting flood inundation from SAR often require manual parameter tuning or data annotation, which presents a challenge for the rapid automated mapping of large and complex flooded scenarios. To address this issue, we propose a segmentation algorithm for automatic, near-real-time flood mapping over vast areas and under all-weather conditions by integrating Sentinel-1 SAR imagery with an unsupervised machine learning approach named Felz-CNN. The algorithm consists of three phases: (i) super-pixel generation; (ii) convolutional neural network-based featurization; (iii) super-pixel aggregation. We evaluated the Felz-CNN algorithm by mapping flood inundation during the 2020 Yangtze River flood, covering a total study area of 1,140,300 km². When validated against fine-resolution Planet satellite imagery, the algorithm accurately identified flood extent with producer's and user's accuracies of 93% and 94%, respectively. These results indicate the usefulness of our unsupervised approach for flood mapping. We also overlaid the post-disaster inundation map with a 10-m resolution global land cover map (FROM-GLC10) to assess the damage to different land cover types. Of these types, cropland and residential settlements were the most severely affected, with inundated areas of 9,430.36 km² and 1,397.50 km², respectively, results that agree with statistics from relevant agencies. Compared with traditional supervised classification algorithms that require time-consuming data annotation, our unsupervised algorithm can be deployed directly on high-performance computing platforms such as Google Earth Engine and PIE-Engine to generate a large-scale map of flood-affected areas within minutes, without lengthy data downloading and processing. Importantly, this efficiency enables fast and effective monitoring of flood conditions to aid disaster governance and mitigation globally.
Record number: A2021-560 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.05.019 Online publication date: 09/06/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.05.019 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98118
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 36 - 50 [article]
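The Felz-CNN pipeline summarised above (super-pixel generation, CNN-based featurization, super-pixel aggregation) can be sketched with off-the-shelf tools. The following is a minimal illustrative sketch, not the authors' published implementation: it assumes a single dB-scaled Sentinel-1 backscatter array, uses a small random-weight encoder in place of the paper's network, and aggregates super-pixels with 2-class clustering plus a dark-water heuristic.

```python
# Illustrative Felz-CNN-style flood mapping sketch (not the authors' code).
# Assumptions: 2D Sentinel-1 backscatter in dB, random-weight CNN encoder,
# KMeans aggregation with "darker cluster = open water" heuristic.
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import felzenszwalb
from sklearn.cluster import KMeans

def map_flood(sar_db: np.ndarray) -> np.ndarray:
    """sar_db: 2D SAR backscatter image (dB). Returns a boolean water mask."""
    # (i) Super-pixel generation on the SAR intensity image.
    segments = felzenszwalb(sar_db, scale=100, sigma=0.8, min_size=64)

    # (ii) CNN-based featurization (architecture is an assumption).
    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    ).eval()
    with torch.no_grad():
        x = torch.from_numpy(sar_db).float()[None, None]   # (1, 1, H, W)
        feat = encoder(x)[0].numpy()                        # (32, H, W)

    # Mean-pool CNN features over each super-pixel.
    n_seg = segments.max() + 1
    pooled = np.zeros((n_seg, feat.shape[0]), dtype=np.float32)
    for s in range(n_seg):
        pooled[s] = feat[:, segments == s].mean(axis=1)

    # (iii) Super-pixel aggregation: two clusters, then pick the cluster with
    # the lower mean backscatter as water (smooth water appears dark in SAR).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pooled)
    mean_db = np.array([sar_db[segments == s].mean() for s in range(n_seg)])
    water_cluster = np.argmin([mean_db[labels == c].mean() for c in (0, 1)])
    return labels[segments] == water_cluster
```

On a platform such as Google Earth Engine the same steps would be expressed through its own API rather than this local NumPy workflow; the sketch only shows the structure of the three phases.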
Structure-aware indoor scene reconstruction via two levels of abstraction / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Structure-aware indoor scene reconstruction via two levels of abstraction Document type: Article/Communication Authors: Hao Fang, Author; Cihui Pan, Author; Hui Huang, Author Publication year: 2021 Pages: pp 155 - 170 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] Markov random field
[IGN terms] convolutional neural network classification
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] optical image
[IGN terms] mesh
[IGN terms] triangular mesh
[IGN terms] level of abstraction
[IGN terms] polygon
[IGN terms] 3D reconstruction
[IGN terms] object reconstruction
[IGN terms] indoor scene
Abstract: (author) In this paper, we propose a novel approach that reconstructs indoor scenes in a structure-aware manner and produces two meshes with different levels of abstraction. To be precise, we start from the raw triangular mesh of the indoor scene and decompose it into two parts: structure and non-structure objects. On the one hand, structure objects are significant permanent parts of the indoor environment, such as floors, ceilings and walls. In the proposed algorithm, structure objects are abstracted by planar primitives and assembled into a polygonal structure mesh. This step produces a compact, structure-aware, watertight model that decreases the complexity of the original mesh by three orders of magnitude. On the other hand, non-structure objects are movable objects in the indoor environment, such as furniture and interior decoration. Meshes of these objects are repaired and simplified according to their relationship to the structure primitives. Finally, the union of all non-structure meshes and the structure mesh comprises the scene mesh. Note that the structure mesh and the scene mesh preserve different levels of abstraction and can be used for different applications according to user preference. Our experiments on both lidar and RGB-D data, scanned from simple to large-scale indoor scenes, indicate that the proposed framework generates structure-aware results while being robust and scalable. It is also compared qualitatively and quantitatively against popular mesh approximation, floorplan generation and piecewise-planar surface reconstruction methods to demonstrate its performance.
Record number: A2021-561 Author affiliation: non-IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.06.007 Online publication date: 23/06/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.06.007 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98119
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 155 - 170 [article]
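The split into structure (planar, permanent) and non-structure (movable) parts can be illustrated with a simple iterative RANSAC plane-extraction loop. This is a hedged sketch, not the authors' mesh-based pipeline: it works on a point cloud with Open3D, and the thresholds, the z-up assumption for the floor/ceiling/wall labelling, and the stopping rule are all assumptions.

```python
# Sketch: extract planar structure primitives from an indoor point cloud by
# iterative RANSAC plane fitting; the residual points stand in for the
# "non-structure" objects (furniture, clutter). Thresholds are assumptions.
import open3d as o3d

def split_structure(pcd: o3d.geometry.PointCloud, max_planes: int = 10,
                    min_inliers: int = 5000):
    structure = []                       # list of (label, plane, inlier cloud)
    rest = pcd
    for _ in range(max_planes):
        plane, inliers = rest.segment_plane(distance_threshold=0.02,
                                            ransac_n=3,
                                            num_iterations=1000)
        if len(inliers) < min_inliers:   # remaining planes too small: stop
            break
        a, b, c, d = plane               # plane normal (a, b, c), z-up assumed
        kind = ("floor/ceiling" if abs(c) > 0.9
                else "wall" if abs(c) < 0.1 else "other")
        structure.append((kind, plane, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)
    return structure, rest

# Usage (file name assumed):
# pcd = o3d.io.read_point_cloud("indoor_scan.ply")
# primitives, clutter = split_structure(pcd)
```

In the paper the primitives are then assembled into a watertight polygonal structure mesh and the clutter meshes are repaired separately; that assembly step is not shown here.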
Mathematically optimized trajectory for terrestrial close-range photogrammetric 3D reconstruction of forest stands / Karel Kuželka in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Mathematically optimized trajectory for terrestrial close-range photogrammetric 3D reconstruction of forest stands Document type: Article/Communication Authors: Karel Kuželka, Author; Peter Surový, Author Publication year: 2021 Pages: pp 259 - 281 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Terrestrial photogrammetry
[IGN terms] automatic detection
[IGN terms] tree detection
[IGN terms] diameter at breast height
[IGN terms] forest inventory (techniques and methods)
[IGN terms] optimization (mathematics)
[IGN terms] forest stand
[IGN terms] travelling salesman problem
[IGN terms] 3D reconstruction
[IGN terms] point cloud
[IGN terms] image sequence
[IGN terms] structure-from-motion
[IGN terms] trajectory (non-space vehicle)
Abstract: (author) Terrestrial close-range photogrammetry offers a low-cost method of three-dimensional (3D) reconstruction of forest stands that provides automatically processable 3D data, which can be used to evaluate inventory parameters of forest stands and individual trees. However, fundamental methodological problems in image acquisition and processing remain. This study enhances the methodology of photogrammetric Structure-from-Motion reconstruction of forest stands by determining the best photographer's trajectory for image acquisition. The study comprises 1) mathematical optimization of the route in a square grid using integer programming, 2) evaluation of point clouds derived from sequences of real photographs simulating different trajectories, and 3) verification on real trajectories. In a forest research plot, we established a 1 m square grid of 625 (i.e., 25 × 25) photographic positions, and at each position we captured 16 photographs in uniformly spaced directions. We used the real tree positions and diameters, and recorded the coordinates of the photographic positions, including the orientation angles of the captured images. We then formulated an integer programming optimization model to find the most efficient trajectory that covers all sides of all trees with sufficient counts of images. Subsequently, we used the 10,000 captured images to produce image subsets simulating image sequences acquired during the photographer's movement along 84 different systematic trajectories of seven patterns based on either parallel lines or concentric orbits. The 3D point clouds derived from the simulated image sequences were evaluated for their suitability for automatic tree detection and estimation of diameters at breast height. The results of the integer programming model indicated that the optimal trajectory consists of parallel line segments if the camera points forward (in the travel direction), or of concentric orbits if the camera points to the side (perpendicular to the travel direction). With point clouds derived from the images of the simulated trajectories, the best diameter estimates for automatically detected trees were achieved with trajectories consisting of parallel lines in two perpendicular directions, where each line was traversed in both directions. For efficient image acquisition, yielding point clouds of reasonable quality from low counts of images, a trajectory consisting of concentric orbits, including the plot perimeter with the camera pointed towards the plot center, proved to be the best. The results of the simulated trajectories were verified with photogrammetric reconstructions of the forest stand based on real trajectories for six patterns. The mathematical optimization was consistent with the results of the experiment, which indicates that mathematical optimization may be a valid tool for planning trajectories for photogrammetric 3D reconstruction of scenes in general.
Record number: A2021-562 Author affiliation: non-IGN Theme: FOREST/IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.06.013 Online publication date: 02/07/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.06.013 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98122
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 259 - 281 [article]
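The coverage side of the integer programming model (every side of every tree seen in at least a minimum number of images) can be written as a small binary program. The sketch below uses PuLP and only illustrates the idea: the covers() visibility predicate, the coverage count k, and the omission of the route-ordering (travelling-salesman) constraints of the published model are all assumptions.

```python
# Minimal coverage-only binary program: pick as few camera poses as possible
# so that each tree side is covered by at least k images. Route ordering is
# deliberately left out; covers() is a placeholder visibility test.
import pulp

def select_poses(poses, tree_sides, covers, k=3):
    """poses: candidate camera poses; tree_sides: (tree, side) targets;
    covers(pose, side) -> bool (assumed given)."""
    prob = pulp.LpProblem("camera_coverage", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(poses))]

    # Objective: minimise the number of photographic positions.
    prob += pulp.lpSum(x)

    # Each tree side must be seen from at least k selected poses.
    # (If some side has fewer than k covering candidates, the model is infeasible.)
    for side in tree_sides:
        prob += pulp.lpSum(x[i] for i, p in enumerate(poses) if covers(p, side)) >= k

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [poses[i] for i in range(len(poses)) if x[i].value() > 0.5]
```

Turning the selected poses into an actual walking route is a separate (TSP-like) step, which is why the abstract lists the travelling salesman problem among its keywords.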
Automated tree-crown and height detection in a young forest plantation using mask region-based convolutional neural network (Mask R-CNN) / Zhenbang Hao in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Automated tree-crown and height detection in a young forest plantation using mask region-based convolutional neural network (Mask R-CNN) Document type: Article/Communication Authors: Zhenbang Hao, Author; Lili Lin, Author; Christopher J. Post, Author; et al., Author Publication year: 2021 Pages: pp 112 - 123 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] Abies (genus)
[IGN terms] Abies numidica
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] automatic detection
[IGN terms] tree height
[IGN terms] tree crown
[IGN terms] UAV-acquired image
[IGN terms] forest inventory (techniques and methods)
[IGN terms] canopy digital surface model
[IGN terms] forest plantation
Abstract: (author) Tree crown and height are primary tree measurements in forest inventory. Convolutional neural networks (CNNs) can be used in forest inventory; however, no prior study has developed a CNN model to detect tree crown and height simultaneously. This study is the first of its kind to explore training a mask region-based convolutional neural network (Mask R-CNN) for automatically and concurrently detecting discontinuous tree crowns and heights of Chinese fir (Cunninghamia lanceolata (Lamb) Hook) in a plantation. A DJI Phantom 4 Multispectral unmanned aerial vehicle (UAV) was used to obtain high-resolution images of the study site in Shunchang County, China. Tree crowns and heights of Chinese fir were manually delineated and derived from this UAV imagery. A portion of the ground-truthed tree height values was used as a test set, and the remaining measurements were used as the model training data. Six different band combinations and derivations of the UAV imagery were used to detect tree crown and height (multiband-DSM, RGB-DSM, NDVI-DSM, multiband-CHM, RGB-CHM, and NDVI-CHM). The Mask R-CNN model with the NDVI-CHM combination achieved superior performance: the accuracy of individual tree-crown detection for Chinese fir was considerable (F1 score = 84.68%), the intersection over union (IoU) of tree crown delineation was 91.27%, and tree height estimates were highly correlated with the height from UAV imagery (R² = 0.97, RMSE = 0.11 m, rRMSE = 4.35%) and field measurements (R² = 0.87, RMSE = 0.24 m, rRMSE = 9.67%). The results demonstrate that an input image with a CHM achieves higher accuracy of tree crown delineation and tree height assessment than an image with a DSM. The accuracy and efficiency of Mask R-CNN give it great potential to assist the application of remote sensing in forests.
Record number: A2021-563 Author affiliation: non-IGN Theme: FOREST/IMAGERY Nature: Article DOI: 10.1016/j.isprsjprs.2021.06.003 Online publication date: 18/06/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.06.003 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98128
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 112 - 123 [article]
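A Mask R-CNN with two classes (background and tree crown) can be assembled from torchvision following the standard fine-tuning recipe. This is a generic sketch rather than the authors' configuration; how the NDVI and CHM rasters are packed into the network's three input channels, and all hyperparameters, are assumptions.

```python
# Generic two-class Mask R-CNN setup (torchvision fine-tuning recipe), shown
# only to make the instance-segmentation step concrete; not the paper's code.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_tree_crown_maskrcnn(num_classes: int = 2):
    # Pretrained backbone; requires torchvision >= 0.13 for weights="DEFAULT".
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box head for background + tree-crown classification.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)

    # Replace the mask head so instance masks delineate individual crowns.
    in_ch = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_ch, 256, num_classes)
    return model
```

Training then follows the usual torchvision detection loop over UAV tiles with per-instance boxes, labels and masks; a per-tree height can afterwards be read from the CHM within each predicted crown mask (for example its maximum value), which is one plausible way to obtain crown and height concurrently.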
Mapping essential urban land use categories with open big data: Results for five metropolitan areas in the United States of America / Bin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Mapping essential urban land use categories with open big data: Results for five metropolitan areas in the United States of America Document type: Article/Communication Authors: Bin Chen, Author; Ying Tu, Author; Yimeng Song, Author; et al., Author Publication year: 2021 Pages: pp 203 - 218 General note: bibliography Language: English (eng) Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] learning algorithm
[IGN terms] land use map
[IGN terms] big data
[IGN terms] multi-source data
[IGN terms] United States
[IGN terms] feature extraction
[IGN terms] Sentinel MSI image
[IGN terms] Sentinel SAR image
[IGN terms] metropolis
[IGN terms] OpenStreetMap
[IGN terms] urban planning
[IGN terms] urban area
Abstract: (author) Urban land-use maps outlining the distribution, pattern, and composition of various land use types are critically important for urban planning, environmental management, disaster control, health protection, and biodiversity conservation. Recent advances in remote sensing and social sensing data and methods have shown great potential for mapping urban land use categories, but they are still constrained by mixed land uses, limited predictors, non-localized models, and often relatively low accuracies. To address these issues, we propose a robust and cost-effective framework for mapping urban land use categories using openly available multi-source geospatial "big data". With street blocks generated from OpenStreetMap (OSM) data as the minimum classification unit, we integrated an expansive set of multi-scale, spatially explicit information on land surface, vertical height, socio-economic attributes, social media, demography, and topography. We further applied automatic ensemble learning, which leverages a suite of machine learning algorithms, to derive optimal urban land use classification maps. The results of block-level urban land use classification in five metropolitan areas of the United States showed that the overall accuracies of major-class (Level-I) and minor-class (Level-II) classification could be as high as 91% and 86%, respectively. A multi-model comparison revealed that, for urban land use classification with high-dimensional features, multi-layer stacking ensemble models achieved better performance than base models such as random forest, extremely randomized trees, LightGBM, CatBoost, and neural networks. We found that, without very-high-resolution National Agriculture Imagery Program imagery, classification based on Sentinel-1, Sentinel-2, and other open big data features could still achieve plausible overall accuracies of 88% (Level-I) and 81% (Level-II). We also found that model transferability depended strongly on the heterogeneity of the characteristics of different regions. The methods and findings of this study systematically elucidate the role of data sources, classification methods, and feature transferability in block-level land use classification, which has important implications for mapping multi-scale essential urban land use categories.
Record number: A2021-564 Author affiliation: non-IGN Theme: IMAGERY/URBAN PLANNING Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.06.010 Online publication date: 25/06/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.06.010 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98129
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 203 - 218 [article]
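A block-level stacking ensemble in the spirit of the automatic ensemble learning described above can be sketched with scikit-learn alone. This is an illustrative single-layer version: the LightGBM and CatBoost base learners, the multi-layer stacking, and the actual feature set used by the authors (Sentinel-1/2 statistics, OSM and POI attributes, building height, demography) are not reproduced here.

```python
# Single-layer stacking ensemble over block-level features (illustration only;
# base-learner choice and hyperparameters are assumptions).
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_landuse_stacker():
    base = [
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=300, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(128,),
                                            max_iter=500, random_state=0))),
    ]
    # A logistic-regression meta-learner combines out-of-fold base predictions
    # (the paper stacks several layers; only one layer is shown here).
    return StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000),
                              stack_method="predict_proba", cv=5, n_jobs=-1)

# Usage on a block-level feature matrix X (n_blocks x n_features) with
# Level-I labels y (both assumed available):
# model = build_landuse_stacker()
# model.fit(X_train, y_train); print(model.score(X_test, y_test))
```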