Descripteur
Termes IGN > mathématiques > statistique mathématique > analyse de données
analyse de données
Synonyme(s) : analyse statistique ; analyse des données
Voir aussi
Documents disponibles dans cette catégorie (3733)
Comparison of PBIA and GEOBIA classification methods in classifying turbidity in reservoirs / Douglas Stefanello Facco in Geocarto international, vol 37 n° 16 ([15/08/2022])
[article]
Titre : Comparison of PBIA and GEOBIA classification methods in classifying turbidity in reservoirs Type de document : Article/Communication Auteurs : Douglas Stefanello Facco, Auteur ; Laurindo Antonio Guasselli, Auteur ; Luis Fernando Chimelo Ruiz, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 4762 - 4783 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] analyse d'image orientée objet
[Termes IGN] bande spectrale
[Termes IGN] Brésil
[Termes IGN] centrale hydroélectrique
[Termes IGN] classification bayesienne
[Termes IGN] classification dirigée
[Termes IGN] classification et arbre de régression
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] image Landsat-OLI
[Termes IGN] segmentation d'image
[Termes IGN] turbidité des eaux
Résumé : (auteur) Our goal is to compare the performance of the Classification and Regression Tree (CART), Naive Bayes (NB) and Random Forest (RF) algorithms for supervised image classification, under both Pixel-Based Image Analysis (PBIA) and Geographic Object-Based Image Analysis (GEOBIA) approaches, to classify turbidity in reservoirs. To do so, we use a Landsat 8 image, with bands and spectral indices as predictive parameters, together with classification algorithms based on PBIA and GEOBIA. The Brazilian Itaipu reservoir was adopted as a case study. Our results show that the RF classifier obtained the highest accuracy in both classification approaches, followed by CART and NB. The kappa (KA) and overall accuracy (OA) indices of the GEOBIA classifications were superior to those of the PBIA classifications across the algorithms. This study contributes an approach for quickly and accurately delineating turbidity spectral limits in reservoirs. Numéro de notice : A2022-668 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2021.1899302 Date de publication en ligne : 22/06/2021 En ligne : https://doi.org/10.1080/10106049.2021.1899302 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101519
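A minimal sketch (not the authors' code) of the comparison the abstract describes: the three supervised classifiers — CART, Naive Bayes and Random Forest — trained on per-pixel feature vectors standing in for Landsat 8 bands and spectral indices. The synthetic data, feature count and class labels are illustrative assumptions only; scikit-learn's `DecisionTreeClassifier` is used as the CART-style tree.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier  # CART-style tree
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
# Synthetic "pixels": 4 predictive parameters (e.g. green, red, NIR, an index)
X = rng.normal(size=(600, 4))
# Three illustrative turbidity classes (low / medium / high)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int) + (X[:, 1] > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, clf in [("CART", DecisionTreeClassifier(random_state=0)),
                  ("NB", GaussianNB()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # OA = overall accuracy, KA = kappa, the two indices named in the abstract
    results[name] = (accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))

for name, (oa, ka) in results.items():
    print(f"{name}: OA={oa:.3f} KA={ka:.3f}")
```

The same loop applies unchanged whether the feature vectors come from per-pixel values (PBIA) or from per-segment statistics after image segmentation (GEOBIA); only the rows of `X` differ.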
in Geocarto international > vol 37 n° 16 [15/08/2022] . - pp 4762 - 4783 [article]

3D building reconstruction from single street view images using deep learning / Hui En Pang in International journal of applied Earth observation and geoinformation, vol 112 (August 2022)
[article]
Titre : 3D building reconstruction from single street view images using deep learning Type de document : Article/Communication Auteurs : Hui En Pang, Auteur ; Filip Biljecki, Auteur Année de publication : 2022 Article en page(s) : n° 102859 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] empreinte
[Termes IGN] Helsinki
[Termes IGN] image Streetview
[Termes IGN] maillage
[Termes IGN] morphologie urbaine
[Termes IGN] précision géométrique (imagerie)
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
Résumé : (auteur) 3D building models are an established instance of geospatial information in the built environment, but their acquisition remains complex and topical. Approaches to reconstruct 3D building models often require existing building information (e.g. their footprints) and data such as point clouds, which are scarce and laborious to acquire, limiting their expansion. In parallel, street view imagery (SVI) has been gaining currency, driven by the rapid expansion in coverage and advances in computer vision (CV), but it has not been used much for generating 3D city models. Traditional approaches that can use SVI for reconstruction require multiple images, while in practice, often only a few street-level images provide an unobstructed view of a building. We develop the reconstruction of 3D building models from a single street view image using image-to-mesh reconstruction techniques modified from the CV domain. We consider three scenarios: (1) standalone single-view reconstruction; (2) reconstruction aided by a top view delineating the footprint; and (3) refinement of existing 3D models, i.e. we examine the use of SVI to enhance the level of detail of block (LoD1) models, which are common. The results suggest that trained models supporting (2) and (3) are able to reconstruct the overall geometry of a building, while the first scenario may derive the approximate mass of the building, useful to infer the urban form of cities. We evaluate the results by demonstrating their usefulness for volume estimation, with mean errors of less than 10% for the last two scenarios. As SVI is now available in most countries worldwide, including many regions that do not have existing footprint and/or 3D building data, our method can derive rapidly and cost-effectively the 3D urban form from SVI without requiring any existing building information.
Obtaining 3D building models in regions that hitherto did not have any may enable a number of 3D geospatial analyses locally for the first time. Numéro de notice : A2022-544 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1016/j.jag.2022.102859 Date de publication en ligne : 17/06/2022 En ligne : https://doi.org/10.1016/j.jag.2022.102859 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101160
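The abstract evaluates reconstructions by volume estimation. A standard way to compute the volume of a closed, consistently oriented triangle mesh (such as a reconstructed LoD building model) is the signed-tetrahedron sum; the sketch below is a generic illustration, not the paper's code, and the unit cube stands in for a building mesh.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume of a closed, consistently oriented triangle mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Sum of signed volumes of tetrahedra (origin, v0, v1, v2).
    return float(np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0)

# Unit cube: 8 vertices, 12 outward-oriented triangles.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([
    [0, 1, 3], [0, 3, 2],  # x = 0 face
    [4, 6, 7], [4, 7, 5],  # x = 1 face
    [0, 4, 5], [0, 5, 1],  # y = 0 face
    [2, 3, 7], [2, 7, 6],  # y = 1 face
    [0, 2, 6], [0, 6, 4],  # z = 0 face
    [1, 5, 7], [1, 7, 3],  # z = 1 face
])
print(mesh_volume(verts, faces))  # → 1.0
```

With ground-truth and reconstructed meshes, the relative volume error is then simply `abs(v_rec - v_gt) / v_gt`.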
in International journal of applied Earth observation and geoinformation > vol 112 (August 2022) . - n° 102859 [article]

3D semantic scene completion: A survey / Luis Roldão in International journal of computer vision, vol 130 n° 8 (August 2022)
[article]
Titre : 3D semantic scene completion: A survey Type de document : Article/Communication Auteurs : Luis Roldão, Auteur ; Raoul de Charette, Auteur ; Anne Verroust-Blondet, Auteur Année de publication : 2022 Article en page(s) : pp 1978 - 2005 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] effet de profondeur cinétique
[Termes IGN] image RVB
[Termes IGN] reconstruction d'image
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voxel
Résumé : (auteur) Semantic scene completion (SSC) aims to jointly estimate the complete geometry and semantics of a scene, assuming partial sparse input. In the last years, following the multiplication of large-scale 3D datasets, SSC has gained significant momentum in the research community because it holds unresolved challenges. Specifically, the difficulty of SSC lies in the ambiguous completion of large unobserved areas and in the weak supervision signal of the ground truth. This has led to a substantially increasing number of papers on the matter. This survey aims to identify, compare and analyze the existing techniques, providing a critical analysis of the SSC literature on both methods and datasets. Throughout the paper, we provide an in-depth analysis of the existing works covering all choices made by the authors while highlighting the remaining avenues of research. The SSC performance of the state of the art (SoA) on the most popular datasets is also evaluated and analyzed. Numéro de notice : A2022-593 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1007/s11263-021-01504-5 Date de publication en ligne : 06/06/2022 En ligne : http://dx.doi.org/10.1007/s11263-021-01504-5 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101296
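SSC methods typically consume a sparse voxelization of a partial point cloud and predict a dense semantic grid. A minimal voxelizer, illustrating only the input side of that pipeline; the grid size, voxel size and class labels below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def voxelize(points, labels, voxel_size=0.5, grid=(4, 4, 4)):
    """Return a dense semantic grid: 0 = empty, otherwise the point's label."""
    vol = np.zeros(grid, dtype=np.int64)
    idx = np.floor(points / voxel_size).astype(int)
    # Keep only points whose voxel index falls inside the grid.
    inside = np.all((idx >= 0) & (idx < np.array(grid)), axis=1)
    vol[tuple(idx[inside].T)] = labels[inside]
    return vol

pts = np.array([[0.1, 0.1, 0.1], [1.2, 0.3, 0.2], [9.0, 9.0, 9.0]])
lab = np.array([1, 2, 3])  # e.g. road, building, vegetation
vol = voxelize(pts, lab)
print(vol[0, 0, 0], vol[2, 0, 0], int((vol > 0).sum()))  # → 1 2 2
```

An SSC network then maps such a mostly-empty grid to a fully labelled one, which is where the weak-supervision difficulty discussed in the abstract arises: most target voxels are never observed.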
in International journal of computer vision > vol 130 n° 8 (August 2022) . - pp 1978 - 2005 [article]

An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images / Kwanghun Choi in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Titre : An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images Type de document : Article/Communication Auteurs : Kwanghun Choi, Auteur ; Wontaek LIM, Auteur ; Byungwoo Chang, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 165 - 180 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] arbre urbain
[Termes IGN] détection automatique
[Termes IGN] détection d'arbres
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] gestion forestière durable
[Termes IGN] image Streetview
[Termes IGN] inventaire de la végétation
[Termes IGN] segmentation sémantique
[Termes IGN] Séoul
Résumé : (auteur) Tree species and canopy structural profile ('tree profile') are among the most critical environmental factors in determining urban ecosystem services such as climate and air quality control from urban trees. To accurately characterize a tree profile, the tree diameter, height, crown width, and height to the lowest live branch must all be measured, which is an expensive and time-consuming procedure. Recent advances in artificial intelligence aid in efficiently and accurately measuring the aforementioned tree profile parameters. This can be particularly helpful if spatially extensive and accurate street-level images provided by Google ('streetview') or Kakao ('roadview') are utilized. We focused on street trees in Seoul, the capital city of South Korea, and suggest a novel approach to create a tree profile and inventory based on deep learning algorithms. We classified urban tree species using YOLO (You Only Look Once), one of the most popular deep learning object detection algorithms, which provides an uncomplicated method of creating datasets with custom classes. We further utilized a semantic segmentation algorithm and graphical analysis to estimate tree profile parameters by determining the relative location of the interface between tree and ground surface. We evaluated the performance of the model by comparing the estimated tree heights, diameters, and locations from the model with field measurements as ground truth. The results are promising and demonstrate the potential of the method for creating urban street tree profile inventories. In terms of tree species classification, the method achieved a mean average precision (mAP) of 0.564. When we used ideal tree images, the method also reported normalized root mean squared errors (NRMSE) for tree height, diameter at breast height (DBH), and camera-to-tree distance of 0.24, 0.44, and 0.41, respectively.
Numéro de notice : A2022-503 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.06.004 Date de publication en ligne : 22/06/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.06.004 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101001
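Once segmentation locates the top and base of a tree in an image, a pinhole-camera relation converts its pixel extent to a metric height given the camera-to-tree distance. This is a generic geometric sketch of that step, not the authors' implementation; the level-camera assumption and all numbers are illustrative.

```python
def tree_height_m(top_px: float, base_px: float,
                  distance_m: float, focal_px: float) -> float:
    """Approximate tree height from its vertical pixel extent.

    Assumes a level camera and a roughly fronto-parallel tree at a known
    distance: height = pixel_extent * distance / focal_length (in pixels).
    """
    return (base_px - top_px) * distance_m / focal_px

# A tree spanning 400 px, 10 m from a camera with a 1000 px focal length:
print(tree_height_m(top_px=100, base_px=500, distance_m=10.0, focal_px=1000.0))  # → 4.0
```

The same proportionality applied to the trunk's horizontal pixel width at breast height yields a DBH estimate, which is why the distance estimate feeds directly into the NRMSE figures reported above.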
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp 165 - 180 [article]
Exemplaires (3) :
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2022081 | SL | Revue | Centre de documentation | Revues en salle | Disponible
081-2022083 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2022082 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt

Deep learning feature representation for image matching under large viewpoint and viewing direction change / Lin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Titre : Deep learning feature representation for image matching under large viewpoint and viewing direction change Type de document : Article/Communication Auteurs : Lin Chen, Auteur ; Christian Heipke, Auteur Année de publication : 2022 Article en page(s) : pp 94 - 112 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement d'images
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image aérienne oblique
[Termes IGN] orientation d'image
[Termes IGN] reconnaissance de formes
[Termes IGN] réseau neuronal siamois
[Termes IGN] SIFT (algorithme)
Résumé : (auteur) Feature based image matching has been a research focus in photogrammetry and computer vision for decades, as it is the basis for many applications where multi-view geometry is needed. A typical feature based image matching algorithm contains five steps: feature detection, affine shape estimation, orientation assignment, description and descriptor matching. This paper contains innovative work in different steps of feature matching based on convolutional neural networks (CNN). For the affine shape estimation and orientation assignment, the main contribution of this paper is twofold. First, we define a canonical shape and orientation for each feature. As a consequence, instead of the usual Siamese CNN, only a single-branch CNN needs to be employed to learn the affine shape and orientation parameters, which turns the related tasks from supervised into self-supervised learning problems, removing the need for known matching relationships between features. Second, the affine shape and orientation are solved simultaneously. To the best of our knowledge, this is the first time these two modules are reported to have been successfully trained together. In addition, for the descriptor learning part, a new weak match finder is suggested to better explore the intra-variance of the appearance of matched features. For any input feature patch, a transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features; they are subsequently used in the standard descriptor learning framework. The proposed modules are integrated into an inference pipeline to form the proposed feature matching algorithm. The algorithm is evaluated on standard benchmarks and is used to solve for the parameters of image orientation of aerial oblique images.
It is shown that deep learning feature based image matching leads to more registered images, more reconstructed 3D points and a more stable block geometry than conventional methods. The code is available at https://github.com/Childhoo/Chen_Matcher.git. Numéro de notice : A2022-502 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.06.003 Date de publication en ligne : 14/06/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.06.003 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101000
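The last of the five steps the abstract lists — descriptor matching — is commonly done by mutual nearest-neighbour search over L2-normalised descriptors. The sketch below illustrates that generic step only, not the paper's learned networks; the random descriptors are a stand-in for CNN outputs.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Pairs (i, j) where a[i] and b[j] are each other's nearest neighbour."""
    # Cosine similarity, since rows are L2-normalised.
    sim = desc_a @ desc_b.T
    nn_ab = sim.argmax(axis=1)  # best match in b for each row of a
    nn_ba = sim.argmax(axis=0)  # best match in a for each row of b
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

rng = np.random.default_rng(1)
a = rng.normal(size=(5, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
b = np.roll(a, 2, axis=0)  # same descriptors, rows permuted
matches = mutual_nn_matches(a, b)
print(len(matches))  # → 5
```

The mutual-consistency check discards one-sided matches, which is one of the simplest ways to reduce outliers before solving for image orientation.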
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp 94 - 112 [article]
Exemplaires (3) :
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2022081 | SL | Revue | Centre de documentation | Revues en salle | Disponible
081-2022083 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2022082 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt

Estimating crop type and yield of small holder fields in Burkina Faso using multi-day Sentinel-2 / Akiko Elders in Remote Sensing Applications: Society and Environment, RSASE, vol 27 (August 2022)
Filtering airborne LIDAR data by using fully convolutional networks / Abdullah Varlik in Survey review, vol 55 n° 388 (January 2023)
Full-waveform classification and segmentation-based signal detection of single-wavelength bathymetric LiDAR / Xue Ji in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
Generating impact maps from bomb craters automatically detected in aerial wartime images using marked point processes / Christian Kruse in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
Hyperspectral unmixing using transformer network / Preetam Ghosh in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
Incorporation of digital elevation model, normalized difference vegetation index, and Landsat-8 data for land use land cover mapping / Jwan Al-Doski in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 8 (August 2022)
Mapping land-use intensity of grasslands in Germany with machine learning and Sentinel-2 time series / Maximilian Lange in Remote sensing of environment, vol 277 (August 2022)
Measuring COVID-19 vulnerability for Northeast Brazilian municipalities: Social, economic, and demographic factors based on multiple criteria and spatial analysis / Ciro José Jardim De Figueiredo in ISPRS International journal of geo-information, vol 11 n° 8 (August 2022)
A pipeline for automated processing of Corona KH-4 (1962-1972) stereo imagery / Sajid Ghuffar in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
STICC: a multivariate spatial clustering method for repeated geographic pattern discovery with consideration of spatial contiguity / Yuhao Kang in International journal of geographical information science IJGIS, vol 36 n° 8 (August 2022)