Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > analyse d'image orientée objet > détection d'objet
détection d'objet
Documents disponibles dans cette catégorie (219)
Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery / Maryam Hosseini in Computers, Environment and Urban Systems, vol 101 (April 2023)
[article]
Titre : Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery
Type de document : Article/Communication
Auteurs : Maryam Hosseini, Auteur ; Andres Sevtsuk, Auteur ; Fabio Miranda, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 101950
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] détection d'objet
[Termes IGN] Etats-Unis
[Termes IGN] image aérienne
[Termes IGN] navigation pédestre
[Termes IGN] segmentation sémantique
[Termes IGN] système d'information géographique
[Termes IGN] trottoir
[Termes IGN] vision par ordinateur
Résumé : (auteur) While cities around the world are increasingly promoting streets and public spaces that prioritize pedestrians over vehicles, significant data gaps have made pedestrian mapping, analysis, and modeling challenging to carry out. Most cities, even in industrialized economies, still lack information about the location and connectivity of their sidewalks, making it difficult to implement research on pedestrian infrastructure and holding the technology industry back from developing accurate, location-based apps for pedestrians, wheelchair users, street vendors, and other sidewalk users. To address this gap, we have designed and implemented an end-to-end open-source tool, Tile2Net, for extracting sidewalk, crosswalk, and footpath polygons from orthorectified aerial imagery using semantic segmentation. The segmentation model, trained on aerial imagery from Cambridge, MA, Washington DC, and New York City, offers the first open-source scene classification model for pedestrian infrastructure from sub-meter resolution aerial tiles, which can be used to generate planimetric sidewalk data in North American cities. Tile2Net also generates pedestrian networks from the resulting polygons, which can be used to prepare datasets for pedestrian routing applications. The work offers a low-cost and scalable data collection methodology for systematically generating sidewalk network datasets wherever orthorectified aerial imagery is available, contributing to overdue efforts to equalize data opportunities for pedestrians, particularly in cities that lack the resources necessary to collect such data using more conventional methods.
Numéro de notice : A2023-187
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1016/j.compenvurbsys.2023.101950
Date de publication en ligne : 22/02/2023
En ligne : https://doi.org/10.1016/j.compenvurbsys.2023.101950
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102961
in Computers, Environment and Urban Systems > vol 101 (April 2023) . - n° 101950 [article]
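As a rough illustration of the pipeline this abstract describes (a semantic-segmentation mask vectorized into georeferenced polygons, then linked into a pedestrian network), the Python sketch below assumes a hypothetical binary sidewalk mask and pixel geometry; it is not the Tile2Net implementation released by the authors.

```python
# Minimal sketch, not the Tile2Net code: vectorize a binary sidewalk mask
# (as produced by any semantic-segmentation model) into polygons and link
# touching polygons into a crude network. All inputs are hypothetical.
import numpy as np
import networkx as nx
from rasterio import features
from rasterio.transform import from_origin
from shapely.geometry import shape

def mask_to_network(mask, pixel_size, west, north):
    """mask: 2-D array, 1 = sidewalk pixel, 0 = background."""
    transform = from_origin(west, north, pixel_size, pixel_size)
    polygons = [
        shape(geom)
        for geom, value in features.shapes(mask.astype(np.uint8), transform=transform)
        if value == 1
    ]
    # One node per polygon centroid, an edge whenever two polygons touch;
    # a real pipeline would skeletonize the polygons into centerlines instead.
    graph = nx.Graph()
    for i, poly in enumerate(polygons):
        graph.add_node(i, x=poly.centroid.x, y=poly.centroid.y)
    for i, a in enumerate(polygons):
        for j in range(i + 1, len(polygons)):
            if a.intersects(polygons[j]):
                graph.add_edge(i, j)
    return polygons, graph
```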

Point cloud data processing optimization in spectral and spatial dimensions based on multispectral Lidar for urban single-wood extraction / Shuo Shi in ISPRS International journal of geo-information, vol 12 n° 3 (March 2023)
[article]
Titre : Point cloud data processing optimization in spectral and spatial dimensions based on multispectral Lidar for urban single-wood extraction
Type de document : Article/Communication
Auteurs : Shuo Shi, Auteur ; Xingtao Tang, Auteur ; Bowen Chen, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 90
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] analyse spectrale
[Termes IGN] arbre urbain
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Houston (Texas)
[Termes IGN] interpolation
[Termes IGN] réflectance spectrale
[Termes IGN] segmentation
[Termes IGN] semis de points
Résumé : (auteur) Lidar can effectively obtain three-dimensional information on ground objects. In recent years, lidar has developed rapidly from single-wavelength systems to multispectral and hyperspectral imaging. The multispectral airborne lidar Optech Titan is the first commercial system that can collect point cloud data on 1550, 1064, and 532 nm channels. This study proposes a point cloud segmentation step ahead of intensity interpolation to solve the problem of inaccurate intensity at object boundaries during point cloud interpolation. The experiment consists of three steps. First, a multispectral lidar point cloud is obtained using point cloud segmentation and intensity interpolation; the spatial dimension advantage of the multispectral point cloud is used to improve the accuracy of spectral information interpolation. Second, point clouds are divided into eight categories by constructing geometric information, spectral reflectance information, and spectral characteristics. Accuracy evaluation and contribution analysis are also conducted using point cloud ground truth and the classification results. Lastly, the spatial information is enhanced by point cloud subsampling, which is used to reduce the error caused by airborne scanning in single-tree extraction of urban trees. Classification results showed that point cloud segmentation before intensity interpolation can effectively improve the interpolation and classification accuracies. The total classification accuracy of the data is improved by 3.7%. Compared with the single-tree extraction result obtained without subsampling (377 trees), the urban tree extraction result demonstrates that the proposed subsampling algorithm improves accuracy. Accordingly, the problem of over-segmentation is solved, and the final single-tree extraction result (329 trees) is markedly more consistent with the real situation of the region.
Numéro de notice : A2023-159
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.3390/ijgi12030090
Date de publication en ligne : 23/02/2023
En ligne : https://doi.org/10.3390/ijgi12030090
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102852
in ISPRS International journal of geo-information > vol 12 n° 3 (March 2023) . - n° 90 [article]
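The key step in the abstract above is interpolating per-channel intensities only within previously segmented regions, so that values do not bleed across object boundaries. A minimal sketch of that idea, assuming hypothetical NumPy arrays for coordinates, intensities and segment labels (this is not the authors' implementation):

```python
# Minimal sketch, assuming hypothetical inputs: inverse-distance interpolation
# of missing channel intensities restricted to neighbours of the same segment.
import numpy as np
from scipy.spatial import cKDTree

def segment_aware_interpolation(xyz, intensity, segment_id, k=5):
    """xyz: (N, 3) point coordinates; intensity: (N,) with np.nan where the
    channel was not recorded; segment_id: (N,) labels from a prior segmentation."""
    out = intensity.copy()
    for seg in np.unique(segment_id):
        in_seg = segment_id == seg
        known = in_seg & ~np.isnan(intensity)
        missing = in_seg & np.isnan(intensity)
        if not known.any() or not missing.any():
            continue
        tree = cKDTree(xyz[known])
        dist, idx = tree.query(xyz[missing], k=min(k, int(known.sum())))
        if dist.ndim == 1:                      # k == 1 returns 1-D arrays
            dist, idx = dist[:, None], idx[:, None]
        weights = 1.0 / np.maximum(dist, 1e-6)  # inverse-distance weights
        values = intensity[known][idx]
        out[missing] = (weights * values).sum(axis=1) / weights.sum(axis=1)
    return out
```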

Titre : Artificial intelligence oceanography
Type de document : Monographie
Auteurs : Xiaofeng Li, Éditeur scientifique ; Fan Wang, Éditeur scientifique
Editeur : Springer Nature
Année de publication : 2023
Importance : 346 p.
Format : 16 x 24 cm
ISBN/ISSN/EAN : 978-981-19-6375-9
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] algue
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] cyclone
[Termes IGN] détection d'objet
[Termes IGN] iceberg
[Termes IGN] intelligence artificielle
[Termes IGN] océanographie
[Termes IGN] température de surface de la mer
Résumé : (éditeur) This open access book invites readers to learn how to develop artificial intelligence (AI)-based algorithms to perform their research in oceanography. Various examples are provided to show in detail how to feed big ocean data into AI models and obtain optimized results. The number of scholars engaged in AI oceanography research will increase exponentially in the next decade. Therefore, this book will serve as a benchmark providing insights for scholars and graduate students interested in oceanography, computer science, and remote sensing.
Note de contenu :
1- Artificial Intelligence Foundation of smart ocean
2- Forecasting tropical instability waves based on artificial intelligence
3- Sea surface height anomaly prediction based on artificial intelligence
4- Satellite data-driven internal solitary wave forecast based on machine learning techniques
5- AI-based subsurface thermohaline structure retrieval from remote sensing observations
6- Ocean heat content retrieval from remote sensing data based on machine learning
7- Detecting tropical cyclogenesis using broad learning system from satellite passive microwave observations
8- Tropical cyclone monitoring based on geostationary satellite imagery
9- Reconstruction of pCO2 data in the Southern ocean based on feedforward neural network
10- Detection and analysis of mesoscale eddies based on deep learning
11- Deep convolutional neural networks-based coastal inundation mapping from SAR imagery: with one application case for Bangladesh, a UN-defined least developed country
12- Sea ice detection from SAR images based on deep fully convolutional networks
13- Detection and analysis of marine green algae based on artificial intelligence
14- Automatic waterline extraction of large-scale tidal flats from SAR images based on deep convolutional neural networks
15- Extracting ship’s size from SAR images by deep learning
16- Benthic organism detection, quantification and seamount biology detection based on deep learning
Numéro de notice : 24105
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Monographie
DOI : 10.1007/978-981-19-6375-9
En ligne : https://link.springer.com/book/10.1007/978-981-19-6375-9
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103058

Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Titre : Cross-supervised learning for cloud detection
Type de document : Article/Communication
Auteurs : Kang Wu, Auteur ; Zunxiao Xu, Auteur ; Xinrong Lyu, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 2147298
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] détection d'objet
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] nuage
Résumé : (auteur) We present a new learning paradigm, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm is characterized by both supervised training and mutually supervised training, and is performed by two base networks. In addition to the individual supervised training on labeled data, the two base networks perform mutually supervised training using prediction results provided by each other for unlabeled data. Specifically, we develop In-extensive Nets for implementing the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (which tend to require expensive manual effort) needed for training. Experimental results verify that In-extensive Nets perform well and have an obvious advantage in situations where only a few labeled cloudy images are available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets.
Numéro de notice : A2023-190
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1080/15481603.2022.2147298
Date de publication en ligne : 03/01/2023
En ligne : https://doi.org/10.1080/15481603.2022.2147298
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]
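The paradigm described above (each network trained on the labeled images and additionally supervised by the other network's predictions on unlabeled images) can be sketched as a single training step. The models, optimizers and loss weight below are hypothetical placeholders, not the In-extensive Nets released by the authors:

```python
# Minimal sketch of one cross-supervised training step for two networks.
# net_a / net_b, their optimizers and mutual_weight are hypothetical.
import torch
import torch.nn.functional as F

def cross_supervised_step(net_a, net_b, opt_a, opt_b,
                          x_lab, y_lab, x_unlab, mutual_weight=0.5):
    pred_a_lab, pred_b_lab = net_a(x_lab), net_b(x_lab)
    pred_a_un, pred_b_un = net_a(x_unlab), net_b(x_unlab)

    # Each network's unlabeled prediction becomes a pseudo-label for the other;
    # argmax + detach stop gradients from flowing back into the "teacher".
    pseudo_a = pred_a_un.argmax(dim=1).detach()
    pseudo_b = pred_b_un.argmax(dim=1).detach()

    loss_a = F.cross_entropy(pred_a_lab, y_lab) \
        + mutual_weight * F.cross_entropy(pred_a_un, pseudo_b)
    loss_b = F.cross_entropy(pred_b_lab, y_lab) \
        + mutual_weight * F.cross_entropy(pred_b_un, pseudo_a)

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```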

Hyperspectral imagery and urban areas: results of the HYEP project / Christiane Weber in Revue Française de Photogrammétrie et de Télédétection, n° 224 (2022)
[article]
Titre : Hyperspectral imagery and urban areas: results of the HYEP project
Type de document : Article/Communication
Auteurs : Christiane Weber, Auteur ; Xavier Briottet, Auteur ; Thomas Houet, Auteur ; Sébastien Gadal, Auteur ; Rahim Aguejdad, Auteur ; Yannick Deville, Auteur ; Mauro Dalla Mura, Auteur ; Clément Mallet, Auteur ; Arnaud Le Bris, Auteur ; et al., Auteur
Année de publication : 2022
Projets : HYEP / Weber, Christiane
Article en page(s) : pp 75 - 92
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse comparative
[Termes IGN] détection d'objet
[Termes IGN] fusion d'images
[Termes IGN] image hyperspectrale
[Termes IGN] Lituanie
[Termes IGN] milieu urbain
[Termes IGN] panneau photovoltaïque
[Termes IGN] surface imperméable
[Termes IGN] Toulouse
Résumé : (Auteur) The HYEP project (ANR HYEP 14-CE22-0016-01 Hyperspectral imagery for Environmental urban Planning - Mobility and Urban Systems Programme - 2014) confirmed the interest of a global approach to the urban environment by remote sensing, and in particular by hyperspectral imaging (HI). The interest of hyperspectral images lies in the range of information provided over wavelengths from 0.4 to 4 μm; they thus provide access to spectral quantities of interest and to chemical or biophysical parameters of the surface. HYEP's objective was to specify this and to propose a panel of methods and treatments taking into account the characteristics of other existing sensors in order to compare their performance. The developments carried out were applied and evaluated on hyperspectral airborne images acquired in Toulouse and Kaunas (Lithuania), also used to simulate spaceborne systems: Sentinel-2, Hypxim/Biodiversity and Pleiades. Among the bottlenecks identified were those related to improving the spatial capabilities of the sensors and handling spatial scale changes; these were partially overcome by fusion and sharpening approaches, which proved successful. After a description of our hyperspectral data set acquired over Toulouse, an analysis is conducted on several existing and accessible spectral databases. Then, the chosen methods are presented. They include extraction, fusion and classification methods, which are then applied to our dataset synthesized at different spatial resolutions to evaluate the benefits and the complementarity of hyperspectral imagery in comparison with other traditional sensors. Some specific applications of interest for urban planners are investigated: impervious surface mapping, vegetation species cartography and detection of solar panels. Finally, discussion and perspectives are presented.
Numéro de notice : A2022-941
Affiliation des auteurs : UGE-LASTIG+Ext (2020- )
Autre URL associée : Hal
Thématique : IMAGERIE/URBANISME
Nature : Article
nature-HAL : ArtAvecCL-RevueNat
DOI : 10.52638/rfpt.2022.589
Date de publication en ligne : 22/12/2022
En ligne : https://dx.doi.org/10.52638/rfpt.2022.589
Format de la ressource électronique : URL Article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102831

in Revue Française de Photogrammétrie et de Télédétection > n° 224 (2022) . - pp 75 - 92 [article]
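Among the applications listed above (impervious surfaces, vegetation species, solar panels), per-pixel classification of the hyperspectral cube is the usual baseline against which richer fusion and sharpening methods are compared. A minimal sketch of such a baseline with a random forest, assuming a hypothetical (bands, rows, cols) cube and a (rows, cols) label raster; this is not the HYEP processing chain:

```python
# Minimal sketch, hypothetical inputs: per-pixel random-forest classification
# of a hyperspectral cube, a simple baseline for urban land-cover mapping.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_cube(cube, labels, n_trees=200):
    """cube: (bands, rows, cols) reflectance; labels: (rows, cols), 0 = unlabeled."""
    bands, rows, cols = cube.shape
    pixels = cube.reshape(bands, -1).T          # one spectrum per row
    flat = labels.ravel()
    train = flat > 0                            # keep only labeled pixels for training
    model = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    model.fit(pixels[train], flat[train])
    return model.predict(pixels).reshape(rows, cols)
```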

A semi-automatic method for extraction of urban features by integrating aerial images and LIDAR data and comparing its performance in areas with different feature structures (case study: comparison of the method performance in Isfahan and Toronto) / Masoud Azad in Applied geomatics, vol 14 n° 4 (December 2022)
3D target detection using dual domain attention and SIFT operator in indoor scenes / Hanshuo Zhao in The Visual Computer, vol 38 n° 11 (November 2022)
Improving deep learning on point cloud by maximizing mutual information across layers / Di Wang in Pattern recognition, vol 131 (November 2022)
A relation-augmented embedded graph attention network for remote sensing object detection / Shu Tian in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
Riparian ecosystems mapping at fine scale: a density approach based on multi-temporal UAV photogrammetric point clouds / Elena Belcore in Remote sensing in ecology and conservation, vol 8 n° 5 (October 2022)
Mapping individual abandoned houses across cities by integrating VHR remote sensing and street view imagery / Shengyuan Zou in International journal of applied Earth observation and geoinformation, vol 113 (September 2022)
Structured binary neural networks for image recognition / Bohan Zhuang in International journal of computer vision, vol 130 n° 9 (September 2022)
Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (juillet 2022)
Street-view imagery guided street furniture inventory from mobile laser scanning point clouds / Yuzhou Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Adversarial defenses for object detectors based on Gabor convolutional layers / Abdollah Amirkhani in The Visual Computer, vol 38 n° 6 (June 2022)