Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > analyse d'image orientée objet
Documents disponibles dans cette catégorie (86)
Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery / Maryam Hosseini in Computers, Environment and Urban Systems, vol 101 (April 2023)
[article]
Titre : Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery Type de document : Article/Communication Auteurs : Maryam Hosseini, Auteur ; Andres Sevtsuk, Auteur ; Fabio Miranda, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : n° 101950 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] détection d'objet
[Termes IGN] Etats-Unis
[Termes IGN] image aérienne
[Termes IGN] navigation pédestre
[Termes IGN] segmentation sémantique
[Termes IGN] système d'information géographique
[Termes IGN] trottoir
[Termes IGN] vision par ordinateur
Résumé : (auteur) While cities around the world are increasingly promoting streets and public spaces that prioritize pedestrians over vehicles, significant data gaps have made pedestrian mapping, analysis, and modeling challenging to carry out. Most cities, even in industrialized economies, still lack information about the location and connectivity of their sidewalks, making it difficult to implement research on pedestrian infrastructure and holding the technology industry back from developing accurate, location-based apps for pedestrians, wheelchair users, street vendors, and other sidewalk users. To address this gap, we have designed and implemented an end-to-end open-source tool — Tile2Net — for extracting sidewalk, crosswalk, and footpath polygons from orthorectified aerial imagery using semantic segmentation. The segmentation model, trained on aerial imagery from Cambridge, MA, Washington DC, and New York City, offers the first open-source scene classification model for pedestrian infrastructure from sub-meter resolution aerial tiles, which can be used to generate planimetric sidewalk data in North American cities. Tile2Net also generates pedestrian networks from the resulting polygons, which can be used to prepare datasets for pedestrian routing applications. The work offers a low-cost and scalable data collection methodology for systematically generating sidewalk network datasets, where orthorectified aerial imagery is available, contributing to overdue efforts to equalize data opportunities for pedestrians, particularly in cities that lack the resources necessary to collect such data using more conventional methods.
Numéro de notice : A2023-187 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1016/j.compenvurbsys.2023.101950 Date de publication en ligne : 22/02/2023 En ligne : https://doi.org/10.1016/j.compenvurbsys.2023.101950 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102961
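The abstract above describes a two-stage pipeline: semantic segmentation of aerial tiles into sidewalk polygons, then derivation of a pedestrian network from those polygons. A minimal sketch of the second stage, assuming a binary sidewalk mask as input (the function `mask_to_network` and the 3×3 mask below are invented for illustration and are not Tile2Net's actual API):

```python
# Toy sketch: derive a pedestrian "network" from a binary sidewalk mask,
# loosely mirroring the mask -> network step described in the abstract.
# All names here are illustrative; Tile2Net's real pipeline works on
# georeferenced polygons, not raw pixel grids.

def mask_to_network(mask):
    """Treat each foreground cell as a node; connect 4-neighbours as edges."""
    rows, cols = len(mask), len(mask[0])
    nodes = {(r, c) for r in range(rows) for c in range(cols) if mask[r][c]}
    edges = []
    for (r, c) in nodes:
        for nb in ((r + 1, c), (r, c + 1)):  # look right/down only: no duplicates
            if nb in nodes:
                edges.append(((r, c), nb))
    return nodes, edges

# A 3x3 mask with an L-shaped sidewalk strip.
mask = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
]
nodes, edges = mask_to_network(mask)
print(len(nodes), len(edges))  # 5 nodes, 4 edges
```

In a real setting the nodes would be polygon centerline vertices with geographic coordinates, ready for export to a routing graph.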
in Computers, Environment and Urban Systems > vol 101 (April 2023) . - n° 101950 [article]
Point cloud data processing optimization in spectral and spatial dimensions based on multispectral Lidar for urban single-wood extraction / Shuo Shi in ISPRS International journal of geo-information, vol 12 n° 3 (March 2023)
[article]
Titre : Point cloud data processing optimization in spectral and spatial dimensions based on multispectral Lidar for urban single-wood extraction Type de document : Article/Communication Auteurs : Shuo Shi, Auteur ; Xingtao Tang, Auteur ; Bowen Chen, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : n° 90 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] analyse spectrale
[Termes IGN] arbre urbain
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Houston (Texas)
[Termes IGN] interpolation
[Termes IGN] réflectance spectrale
[Termes IGN] segmentation
[Termes IGN] semis de points
Résumé : (auteur) Lidar can effectively obtain three-dimensional information on ground objects. In recent years, lidar has developed rapidly from single-wavelength to multispectral and hyperspectral systems. The multispectral airborne lidar Optech Titan is the first commercial system that can collect point cloud data on 1550, 1064, and 532 nm channels. This study proposes a method of point cloud segmentation in the preprocessed intensity interpolation process to solve the problem of inaccurate intensity at the boundary during point cloud interpolation. The entire experiment consists of three steps. First, a multispectral lidar point cloud is obtained using point cloud segmentation and intensity interpolation; the spatial dimension advantage of the multispectral point cloud is used to improve the accuracy of spectral information interpolation. Second, point clouds are divided into eight categories by constructing geometric information, spectral reflectance information, and spectral characteristics. Accuracy evaluation and contribution analysis are also conducted through point cloud truth value and classification results. Lastly, the spatial dimension information is enhanced by point cloud subsampling, which is used to reduce the error caused by airborne scanning in single-tree extraction of urban trees. Classification results showed that point cloud segmentation before intensity interpolation can effectively improve the interpolation and classification accuracies. The total classification accuracy of the data is improved by 3.7%. Compared with the single-tree extraction result without subsampling (377 trees), the urban tree extraction result proved the effectiveness of the proposed subsampling algorithm in improving accuracy. Accordingly, the problem of over-segmentation is solved, and the final single-tree extraction result (329 trees) is markedly consistent with the real situation of the region.
Numéro de notice : A2023-159 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.3390/ijgi12030090 Date de publication en ligne : 23/02/2023 En ligne : https://doi.org/10.3390/ijgi12030090 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102852
in ISPRS International journal of geo-information > vol 12 n° 3 (March 2023) . - n° 90 [article]
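The key idea in the record above is to segment the point cloud *before* interpolating intensity, so that values from one object do not bleed across a boundary into another. A minimal sketch of that constraint, assuming inverse-distance weighting and invented point tuples (this is not the paper's actual algorithm):

```python
# Hedged sketch of "segment first, then interpolate intensity":
# a point's missing channel intensity is interpolated only from neighbours
# carrying the SAME segment label, so intensities do not bleed across
# object boundaries. All names and data are illustrative.

def idw_intensity(target, neighbours, power=2.0):
    """Inverse-distance-weighted intensity from (x, y, intensity, segment)
    tuples that share the target's segment label."""
    tx, ty, tseg = target
    num = den = 0.0
    for x, y, inten, seg in neighbours:
        if seg != tseg:          # skip points across the segment boundary
            continue
        d2 = (x - tx) ** 2 + (y - ty) ** 2
        w = 1.0 / (d2 ** (power / 2) + 1e-12)
        num += w * inten
        den += w
    return num / den if den else None

pts = [(0.0, 0.0, 10.0, "roof"), (2.0, 0.0, 12.0, "roof"),
       (1.0, 0.1, 40.0, "tree")]  # the nearby tree return must not contaminate the roof
val = idw_intensity((1.0, 0.0, "roof"), pts)
print(round(val, 2))  # 11.0 — mean of the two equidistant roof returns
```

Without the segment check, the much closer tree return (intensity 40) would dominate the estimate, which is exactly the boundary artifact the paper's preprocessing avoids.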
Titre : Artificial intelligence oceanography Type de document : Monographie Auteurs : Xiaofeng Li, Éditeur scientifique ; Fan Wang, Éditeur scientifique Editeur : Springer Nature Année de publication : 2023 Importance : 346 p. Format : 16 x 24 cm ISBN/ISSN/EAN : 978-981-19637-5-9 Note générale : bibliographie Langues : Français (fre) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] algue
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] cyclone
[Termes IGN] détection d'objet
[Termes IGN] iceberg
[Termes IGN] intelligence artificielle
[Termes IGN] océanographie
[Termes IGN] température de surface de la mer
Résumé : (éditeur) This open access book invites readers to learn how to develop artificial intelligence (AI)-based algorithms to perform their research in oceanography. Various examples are exhibited to guide details of how to feed the big ocean data into the AI models to analyze and achieve optimized results. The number of scholars engaged in AI oceanography research will increase exponentially in the next decade. Therefore, this book will serve as a benchmark providing insights for scholars and graduate students interested in oceanography, computer science, and remote sensing.
Note de contenu :
1- Artificial Intelligence Foundation of smart ocean
2- Forecasting tropical instability waves based on artificial intelligence
3- Sea surface height anomaly prediction based on artificial intelligence
4- Satellite data-driven internal solitary wave forecast based on machine learning techniques
5- AI-based subsurface thermohaline structure retrieval from remote sensing observations
6- Ocean heat content retrieval from remote sensing data based on machine learning
7- Detecting tropical cyclogenesis using broad learning system from satellite passive microwave observations
8- Tropical cyclone monitoring based on geostationary satellite imagery
9- Reconstruction of pCO2 data in the Southern ocean based on feedforward neural network
10- Detection and analysis of mesoscale eddies based on deep learning
11- Deep convolutional neural networks-based coastal inundation mapping from SAR imagery: with one application case for Bangladesh, a UN-defined least developed country
12- Sea ice detection from SAR images based on deep fully convolutional networks
13- Detection and analysis of marine green algae based on artificial intelligence
14- Automatic waterline extraction of large-scale tidal flats from SAR images based on deep convolutional neural networks
15- Extracting ship’s size from SAR images by deep learning
16- Benthic organism detection, quantification and seamount biology detection based on deep learning
Numéro de notice : 24105 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Monographie DOI : 10.1007/978-981-19-6375-9 En ligne : https://link.springer.com/book/10.1007/978-981-19-6375-9 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103058
Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Titre : Cross-supervised learning for cloud detection Type de document : Article/Communication Auteurs : Kang Wu, Auteur ; Zunxiao Xu, Auteur ; Xinrong Lyu, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : n° 2147298 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] détection d'objet
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] nuage
Résumé : (auteur) We present a new learning paradigm, that is, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm is characterized by both supervised training and mutually supervised training, and is performed by two base networks. In addition to the individual supervised training for labeled data, the two base networks perform the mutually supervised training using prediction results provided by each other for unlabeled data. Specifically, we develop In-extensive Nets for implementing the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (that tend to cost expensive manual effort) required for training. Experimental results verify that In-extensive Nets perform well and have an obvious advantage in the situations where there are only a few labeled cloudy images available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets. Numéro de notice : A2023-190 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1080/15481603.2022.2147298 Date de publication en ligne : 03/01/2023 En ligne : https://doi.org/10.1080/15481603.2022.2147298 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
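The abstract describes each base network being trained with a supervised loss on labeled data plus a mutual loss against the other network's prediction on unlabeled data. A toy sketch of that loss composition with scalar placeholder models (`net_a`, `net_b`, and the weight `alpha` are invented; the paper's In-extensive Nets are full segmentation networks):

```python
# Minimal sketch of a cross-supervised loss composition: each of two
# networks gets a supervised loss on labelled data plus a mutual loss
# toward the OTHER network's prediction on unlabelled data.
# net_a / net_b are toy scalar callables, not the paper's models.

def mse(p, q):
    return (p - q) ** 2

def cross_supervised_losses(net_a, net_b, x_lab, y_lab, x_unlab, alpha=0.5):
    """Return the combined loss for each network (toy, scalar inputs)."""
    sup_a = mse(net_a(x_lab), y_lab)
    sup_b = mse(net_b(x_lab), y_lab)
    # Mutual supervision on unlabelled data: each net is pulled toward
    # the other's prediction, weighted by alpha.
    pa, pb = net_a(x_unlab), net_b(x_unlab)
    loss_a = sup_a + alpha * mse(pa, pb)
    loss_b = sup_b + alpha * mse(pb, pa)
    return loss_a, loss_b

net_a = lambda x: 0.8 * x   # underestimating "network"
net_b = lambda x: 1.2 * x   # overestimating "network"
la, lb = cross_supervised_losses(net_a, net_b, x_lab=1.0, y_lab=1.0,
                                 x_unlab=2.0, alpha=0.5)
print(round(la, 3), round(lb, 3))  # 0.36 0.36
```

Minimizing the mutual term drives the two models toward agreement on unlabeled images, which is what lets the method cut the number of labeled cloudy images needed.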
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]
Decision tree-based machine learning models for above-ground biomass estimation using multi-source remote sensing data and object-based image analysis / Haifa Tamiminia in Geocarto international, vol 38 n° inconnu ([01/01/2023])
[article]
Titre : Decision tree-based machine learning models for above-ground biomass estimation using multi-source remote sensing data and object-based image analysis Type de document : Article/Communication Auteurs : Haifa Tamiminia, Auteur ; Bahram Salehi, Auteur ; Masoud Mahdianpari, Auteur ; et al., Auteur Année de publication : 2023 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] analyse d'image orientée objet
[Termes IGN] biomasse aérienne
[Termes IGN] boosting adapté
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification pixellaire
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Extreme Gradient Machine
[Termes IGN] image ALOS-PALSAR
[Termes IGN] image Landsat
[Termes IGN] image Sentinel-MSI
[Termes IGN] image Sentinel-SAR
[Termes IGN] New York (Etats-Unis ; état)
[Termes IGN] réserve naturelle
Résumé : (auteur) Forest above-ground biomass (AGB) estimation provides valuable information about the carbon cycle. Thus, the overall goal of this paper is to present an approach to enhance the accuracy of the AGB estimation. The main objectives are to: 1) investigate the performance of remote sensing data sources, including airborne light detection and ranging (LiDAR), optical, SAR, and their combination to improve the AGB predictions, 2) examine the capability of tree-based machine learning models, and 3) compare the performance of pixel-based and object-based image analysis (OBIA). To investigate the performance of machine learning models, multiple tree-based algorithms were fitted to predictors derived from airborne LiDAR data, Landsat, Sentinel-2, Sentinel-1, and PALSAR-2/PALSAR SAR data collected within New York’s Adirondack Park. Combining remote sensing data from multiple sources improved the model accuracy (RMSE: 52.14 Mg ha−1 and R2: 0.49). There was no significant difference among gradient boosting machine (GBM), random forest (RF), and extreme gradient boosting (XGBoost) models. In addition, pixel-based and object-based models were compared using the airborne LiDAR-derived AGB raster as a training/testing sample. The OBIA provided the best results with the RMSE of 33.77 Mg ha−1 and R2 of 0.81 for the combination of optical and SAR data in the GBM model. Numéro de notice : A2022-331 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article DOI : 10.1080/10106049.2022.2071475 Date de publication en ligne : 27/04/2022 En ligne : https://doi.org/10.1080/10106049.2022.2071475 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100607
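The study above reports its accuracy as RMSE (in Mg ha−1) and R². As a reference for how those two metrics are computed, here is a self-contained sketch on invented AGB values (the numbers below are illustrative, not the paper's data):

```python
# The two accuracy metrics reported in the AGB study: root-mean-square
# error and the coefficient of determination R^2, on toy invented data.

def rmse(y_true, y_pred):
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [120.0, 80.0, 100.0, 60.0]   # observed AGB, Mg/ha (invented)
y_pred = [110.0, 90.0, 95.0, 65.0]    # model predictions (invented)
print(round(rmse(y_true, y_pred), 2), round(r2(y_true, y_pred), 3))  # 7.91 0.875
```

Lower RMSE and higher R² are better, which is why the paper's OBIA result (RMSE 33.77 Mg ha−1, R² 0.81) beats the multi-source pixel-based baseline (RMSE 52.14 Mg ha−1, R² 0.49).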
in Geocarto international > vol 38 n° inconnu [01/01/2023] [article]
Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image / Taposh Mollick in Remote Sensing Applications: Society and Environment, RSASE, vol 29 (January 2023)
Consistency assessment of multi-date PlanetScope imagery for seagrass percent cover mapping in different seagrass meadows / Pramaditya Wicaksono in Geocarto international, vol 37 n° 27 ([20/12/2022])
Hyperspectral imagery and urban areas: results of the HYEP project / Christiane Weber in Revue Française de Photogrammétrie et de Télédétection, n° 224 (2022)
A semi-automatic method for extraction of urban features by integrating aerial images and LIDAR data and comparing its performance in areas with different feature structures (case study: comparison of the method performance in Isfahan and Toronto) / Masoud Azad in Applied geomatics, vol 14 n° 4 (December 2022)
3D target detection using dual domain attention and SIFT operator in indoor scenes / Hanshuo Zhao in The Visual Computer, vol 38 n° 11 (November 2022)
Improving deep learning on point cloud by maximizing mutual information across layers / Di Wang in Pattern recognition, vol 131 (November 2022)
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
Machine learning and landslide studies: recent advances and applications / Faraz S. Tehrani in Natural Hazards, vol 114 n° 2 (November 2022)
Land use/land cover mapping from airborne hyperspectral images with machine learning algorithms and contextual information / Ozlem Akar in Geocarto international, vol 37 n° 22 ([10/10/2022])
A relation-augmented embedded graph attention network for remote sensing object detection / Shu Tian in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
Riparian ecosystems mapping at fine scale: a density approach based on multi-temporal UAV photogrammetric point clouds / Elena Belcore in Remote sensing in ecology and conservation, vol 8 n° 5 (October 2022)
Mapping individual abandoned houses across cities by integrating VHR remote sensing and street view imagery / Shengyuan Zou in International journal of applied Earth observation and geoinformation, vol 113 (September 2022)
Structured binary neural networks for image recognition / Bohan Zhuang in International journal of computer vision, vol 130 n° 9 (September 2022)
Comparison of PBIA and GEOBIA classification methods in classifying turbidity in reservoirs / Douglas Stefanello Facco in Geocarto international, vol 37 n° 16 ([15/08/2022])
Effective CBIR based on hybrid image features and multilevel approach / D. Latha in Multimedia tools and applications, vol 81 n° 20 (August 2022)
Tracking annual dynamics of mangrove forests in mangrove National Nature Reserves of China based on time series Sentinel-2 imagery during 2016–2020 / Rong Zhang in International journal of applied Earth observation and geoinformation, vol 112 (August 2022)
Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
Investigating the ability to identify new constructions in urban areas using images from unmanned aerial vehicles, Google Earth, and Sentinel-2 / Fahime Arabi Aliabad in Remote sensing, vol 14 n° 13 (July-1 2022)
Street-view imagery guided street furniture inventory from mobile laser scanning point clouds / Yuzhou Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Estimating feature extraction changes of Berkelah Forest, Malaysia from multisensor remote sensing data using an object-based technique / Syaza Rozali in Geocarto international, vol 37 n° 11 ([15/06/2022])