Descriptor
IGN Terms > sciences naturelles > physique > traitement d'image > analyse d'image numérique > extraction de traits caractéristiques
extraction de traits caractéristiques
Synonym(s): extraction des caractéristiques; extraction de primitive
Documents available in this category (690)
3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation / Heyang Thomas Li in The Visual Computer, vol 38 n° 5 (May 2022)
[article]
Title: 3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation
Document type: Article/Communication
Authors: Heyang Thomas Li; Zachary Todd; Nikolas Bielski; et al.
Publication year: 2022
Pages: pp 1759 - 1774
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] chaîne de traitement
[Termes IGN] classification orientée objet
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] espace image
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] route
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] signalisation routière
Abstract: (author) The classification and extraction of road markings and lanes are of critical importance to infrastructure assessment, planning and road safety. We present a pipeline for the accurate segmentation and extraction of rural road surface objects in 3D lidar point clouds, as well as a method to extract geometric parameters of tar seal. To reduce the computational resources needed, the point clouds are aggregated into a 2D image space and then transformed using affine transformations. The Mask R-CNN algorithm is applied to the transformed image space to localize, segment and classify the road objects. The segmentation results for road surfaces and markings can then be used for geometric parameter estimation, such as road width estimation, and they show that our proposed transformations improve the ability of the existing Mask R-CNN to segment needle-type objects.
Record number: A2022-376
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s00371-021-02103-8
Online publication date: 28/06/2021
Online: https://doi.org/10.1007/s00371-021-02103-8
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100627
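The projection step summarized in the abstract (aggregating the 3D point cloud into a 2D image space, then applying an affine transformation before running Mask R-CNN) can be sketched as follows. This is an illustrative reconstruction under assumed conventions (0.1 m grid cells, per-cell intensity averaging), not the authors' actual projection operator:

```python
import numpy as np

def project_to_image_space(points, cell_size=0.1):
    """Aggregate a 3D lidar point cloud (N x 4 array: x, y, z, intensity)
    into a 2D intensity image by gridding x/y and averaging intensity per cell."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = ((xy - origin) / cell_size).astype(int)         # per-point cell indices
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    np.add.at(acc, (idx[:, 1], idx[:, 0]), points[:, 3])  # sum intensities per cell
    np.add.at(cnt, (idx[:, 1], idx[:, 0]), 1)             # count points per cell
    filled = cnt > 0
    acc[filled] /= cnt[filled]                            # mean intensity per cell
    return acc

def affine_transform_coords(coords, A, t):
    """Apply an affine map y = A @ x + t to an M x 2 array of pixel coordinates."""
    return coords @ A.T + t
```

The gridding collapses the z dimension, which is what makes a 2D instance-segmentation network such as Mask R-CNN applicable to lidar data at a reduced computational cost.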
in The Visual Computer > vol 38 n° 5 (May 2022) . - pp 1759 - 1774
[article]

A context feature enhancement network for building extraction from high-resolution remote sensing imagery / Jinzhi Chen in Remote sensing, vol 14 n° 9 (May-1 2022)
[article]
Title: A context feature enhancement network for building extraction from high-resolution remote sensing imagery
Document type: Article/Communication
Authors: Jinzhi Chen; Dejun Zhang; Yiqi Wu; et al.
Publication year: 2022
Pages: n° 2276
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de contours
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] structure-from-motion
Abstract: (author) The complexity and diversity of buildings make it challenging to extract low-level and high-level features with strong feature representation using deep neural networks in building extraction tasks. Meanwhile, deep neural network-based methods have many network parameters, which consume a lot of memory and time in training and testing. We propose a novel fully convolutional neural network, the Context Feature Enhancement Network (CFENet), to address these issues. CFENet comprises three modules: the spatial fusion module, the focus enhancement module, and the feature decoder module. First, the spatial fusion module aggregates the spatial information of low-level features to obtain building outline and edge information. Second, the focus enhancement module fully aggregates the semantic information of high-level features to filter the information of building-related attribute categories. Finally, the feature decoder module decodes the output of the two preceding modules to segment the buildings more accurately. In a series of experiments on the WHU Building Dataset and the Massachusetts Building Dataset, CFENet balances efficiency and accuracy better than the four methods we compared against, achieving the best scores on all five evaluation metrics: PA, PC, F1, IoU, and FWIoU. This indicates that CFENet can effectively enhance and fuse low-level and high-level building features, improving building extraction accuracy.
Record number: A2022-385
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/rs14092276
Online publication date: 09/05/2022
Online: https://doi.org/10.3390/rs14092276
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100663
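The five evaluation metrics cited in the abstract (PA, PC, F1, IoU, FWIoU) are all standard derivations from the segmentation confusion matrix; a minimal sketch for pixel accuracy, per-class IoU, and frequency-weighted IoU:

```python
import numpy as np

def segmentation_metrics(conf):
    """Derive PA, per-class IoU, and FWIoU from a C x C confusion matrix,
    where conf[i, j] counts pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    pa = tp.sum() / conf.sum()                        # pixel accuracy (PA)
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp  # |true ∪ pred| per class
    iou = tp / union                                  # intersection over union
    freq = conf.sum(axis=1) / conf.sum()              # class pixel frequency
    fwiou = (freq * iou).sum()                        # frequency-weighted IoU
    return pa, iou, fwiou
```

FWIoU down-weights rare classes, which is why papers on building extraction typically report it alongside plain IoU.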
in Remote sensing > vol 14 n° 9 (May-1 2022) . - n° 2276
[article]

A continuous change tracker model for remote sensing time series reconstruction / Yangjian Zhang in Remote sensing, vol 14 n° 9 (May-1 2022)
[article]
Title: A continuous change tracker model for remote sensing time series reconstruction
Document type: Article/Communication
Authors: Yangjian Zhang; Li Wang; Yuanhuizi He; et al.
Publication year: 2022
Pages: n° 2280
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme de filtrage
[Termes IGN] analyse harmonique
[Termes IGN] compression d'image
[Termes IGN] détection de changement
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] Leaf Area Index
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] phénologie
[Termes IGN] production primaire brute
[Termes IGN] reconstruction d'image
[Termes IGN] réflectance de surface
[Termes IGN] série temporelle
Abstract: (author) It is hard for current time series reconstruction methods to balance high-precision reconstruction with an explainable model mechanism. The goal of this paper is to improve reconstruction accuracy with a well-explained time series model. We therefore developed a function-based model, the Continuous Change Tracker Model (CCTM), which achieves high precision in time series reconstruction by tracking the time series variation rate, providing a new solution for high-precision reconstruction and related applications. To test the reconstruction effects, the model was applied to four types of datasets: normalized difference vegetation index (NDVI), gross primary productivity (GPP), leaf area index (LAI), and MODIS surface reflectance (MSR). Several observations follow. First, the CCTM model is well explained and based on the second-order derivative theorem: it divides the yearly time series into four variation types, uniform, decelerated, accelerated, and short-periodical variations, each represented by a designed function. Second, the CCTM model provides much better reconstruction results than the harmonic model on the NDVI, GPP, MSR, and LAI datasets for seasonal segment reconstruction, and the combined use of the Savitzky-Golay filter and the CCTM model outperforms combinations of the Savitzky-Golay filter with other models. Third, the harmonic model has the best trend-fitting ability on the yearly time series dataset, with the highest R-squared and the lowest RMSE among the four function-fitting models; with seasonal piecewise fitting, however, all four models achieve high accuracy, and the CCTM performs best. Fourth, the CCTM model can also be applied to time series image compression; two compression patterns, with 24 and 6 coefficients respectively, are proposed, and the daily MSR dataset achieves a compression ratio of 15 with the 6-coefficient method. Finally, the CCTM model also has potential for change detection, trend analysis, and extraction of phenology and seasonal characteristics.
Record number: A2022-384
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/rs14092280
Online publication date: 09/05/2022
Online: https://doi.org/10.3390/rs14092280
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100662
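The harmonic model used as the baseline in the abstract can be sketched as an ordinary least-squares fit of a trend plus sine/cosine pairs. This is a minimal illustration of that baseline only, not the CCTM itself, whose piecewise variation-type functions are specific to the paper:

```python
import numpy as np

def fit_harmonic(t, y, period=365.0, n_harmonics=2):
    """Least-squares fit of a harmonic model
    y(t) ~ a0 + a1*t + sum_k [b_k*cos(2*pi*k*t/T) + c_k*sin(2*pi*k*t/T)].
    Returns the coefficients and the reconstructed series."""
    cols = [np.ones_like(t), t]                   # offset and linear trend
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]            # k-th harmonic pair
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    return coef, X @ coef
```

With n_harmonics=2 the fit uses six coefficients, matching the scale of the 6-coefficient compression pattern the abstract mentions for function-based models.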
in Remote sensing > vol 14 n° 9 (May-1 2022) . - n° 2280
[article]

Efficient convolutional neural architecture search for LiDAR DSM classification / Aili Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
[article]
Title: Efficient convolutional neural architecture search for LiDAR DSM classification
Document type: Article/Communication
Authors: Aili Wang; Dong Xue; Haibin Wu; et al.
Publication year: 2022
Pages: n° 5703317
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] modèle de transfert radiatif
[Termes IGN] modèle numérique de surface
[Termes IGN] précision de la classification
[Termes IGN] semis de points
Abstract: (author) Light detection and ranging (LiDAR) data provide rich elevation information and therefore play an irreplaceable role in ground object classification. Recently, convolutional neural networks (CNNs) have shown excellent performance in LiDAR digital surface model (DSM) classification. However, the architecture of a CNN model relies heavily on manual design, which imposes great limitations. In addition, different sensors capture LiDAR datasets with different properties, so the model should be designed to suit each dataset, further increasing the workload of architecture design. This article therefore proposes a method for the automatic design of LiDAR DSM classification models. First, an attention mechanism is introduced into the search space to improve the feature extraction capability of the model, and a gradient-based search strategy is used to obtain the optimal architecture from this search space. Second, a learning rate adjustment strategy is proposed to reduce the time spent in the search and evaluation stages and to improve the classification accuracy of the model. Finally, a regularization scheme is introduced to enhance the robustness of the model and avoid overfitting. Experimental results on three public LiDAR datasets (Bayview Park, Recology, and Houston) obtained from different sensors show that the proposed neural architecture search method achieves impressive classification performance compared with several state-of-the-art classification methods and improves classification accuracy under the condition of limited training samples.
Record number: A2022-408
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3171520
Online publication date: 02/05/2022
Online: https://doi.org/10.1109/TGRS.2022.3171520
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100742
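The gradient-based search strategy mentioned in the abstract follows the differentiable-NAS idea of relaxing the discrete choice among candidate operations into a softmax-weighted mixture, so the architecture weights become trainable by gradient descent. A minimal sketch of that relaxation (illustrative only; the authors' actual search space and attention-augmented operations are not reproduced here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixed_op(x, alphas, ops):
    """Continuous relaxation of an architecture choice: the edge output is the
    softmax(alphas)-weighted sum of every candidate operation applied to x,
    so the architecture weights alphas can be optimized by gradient descent."""
    w = softmax(alphas)
    return sum(wi * op(x) for wi, op in zip(w, ops))

def discretize(alphas, op_names):
    """After the search, keep the operation with the largest architecture weight."""
    return op_names[int(np.argmax(alphas))]
```

During search every candidate operation contributes to the output; the final architecture is read off by discretizing each edge to its highest-weight operation.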
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 5 (May 2022) . - n° 5703317
[article]

Framework for automatic coral reef extraction using Sentinel-2 image time series / Qizhi Zhang in Marine geodesy, vol 45 n° 3 (May 2022)
[article]
Title: Framework for automatic coral reef extraction using Sentinel-2 image time series
Document type: Article/Communication
Authors: Qizhi Zhang; Jian Zhang; Liang Cheng; et al.
Publication year: 2022
Pages: pp 195 - 231
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] Chine
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] filtrage de points
[Termes IGN] filtrage spatiotemporel
[Termes IGN] image Sentinel-MSI
[Termes IGN] mesure de similitude
[Termes IGN] nébulosité
[Termes IGN] récif corallien
[Termes IGN] série temporelle
Abstract: (author) Using supervised or unsupervised classification on a single image to extract coral reef extent results in missing data and erroneous extractions. To improve the accuracy of coral reef extraction, this study proposes a novel technical framework for automatic coral reef extraction based on an image filtering strategy and spatiotemporal similarity measurements of pixel-level Sentinel-2 image time series. The method was applied to the Anda Reef, Daxian Reef, and Nanhua Reef, China, using 1464 Sentinel-2 images obtained from 2015-2020. Sentinel-2 images were automatically selected considering space, time, cloud cover, and image entropy after atmospheric correction. Under a binary classification measurement standard using digitized coral reef results from the Sentinel-2 images as the true value, the time series established with the modified normalized difference water index demonstrated high robustness and accuracy. Analyzing the time series curves of coral reef and deep water verified that the spatiotemporal similarity measurement of this framework can stably extract coral reef boundaries.
Record number: A2022-353
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1080/01490419.2022.2051648
Online publication date: 28/03/2022
Online: https://doi.org/10.1080/01490419.2022.2051648
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100550
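The two building blocks of this framework named in the abstract, the modified normalized difference water index (MNDWI) and a pixel-level time-series similarity measurement, can be sketched as follows. The Pearson correlation below is a stand-in assumption for illustration; the paper's exact spatiotemporal similarity measure is not reproduced in the abstract:

```python
import numpy as np

def mndwi(green, swir):
    """Modified normalized difference water index: (green - SWIR) / (green + SWIR);
    the small epsilon guards against division by zero over dark pixels."""
    return (green - swir) / (green + swir + 1e-12)

def series_similarity(a, b):
    """Pearson correlation between two pixel-level time series:
    z-normalize each series, then average the elementwise product."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())
```

A reef-boundary pixel's MNDWI series correlates strongly with neighboring reef pixels and weakly with deep-water pixels, which is the intuition the framework's similarity measurement exploits.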
in Marine geodesy > vol 45 n° 3 (May 2022) . - pp 195 - 231
[article]

Other documents in this category:
Fusion of optical, radar and waveform LiDAR observations for land cover classification / Huiran Jin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
Human cognition based framework for detecting roads from remote sensing images / Naveen Chandra in Geocarto international, vol 37 n° 8 ([01/05/2022])
Revising cadastral data on land boundaries using deep learning in image-based mapping / Bujar Fetai in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
Determination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
Exploring scientific literature by textual and image content using DRIFT / Ximena Pocco in Computers and graphics, vol 103 (April 2022)
Mining crowdsourced trajectory and geo-tagged data for spatial-semantic road map construction / Jincai Huang in Transactions in GIS, vol 26 n° 2 (April 2022)
Species level classification of Mediterranean sparse forests-maquis formations using Sentinel-2 imagery / Semiha Demirbaş Çağlayana in Geocarto international, vol 37 n° 6 ([01/04/2022])
Automatic extraction of building geometries based on centroid clustering and contour analysis on oblique images taken by unmanned aerial vehicles / Leilei Zhang in International journal of geographical information science IJGIS, vol 36 n° 3 (March 2022)
Comparaison des images satellite et aériennes dans le domaine de la détection d'obstacles à la navigation aérienne et de leur mise à jour / Olivier de Joinville in XYZ, n° 170 (mars 2022)
Extraction from high-resolution remote sensing images based on multi-scale segmentation and case-based reasoning / Jun Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 3 (March 2022)
Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests / Chong Zhang in Remote sensing, vol 14 n° 4 (February-2 2022)
Building footprint extraction in Yangon city from monocular optical satellite image using deep learning / Hein Thura Aung in Geocarto international, vol 37 n° 3 ([01/02/2022])
A combination of convolutional and graph neural networks for regularized road surface extraction / Jingjing Yan in IEEE Transactions on geoscience and remote sensing, vol 60 n° 2 (February 2022)
PCEDNet: a lightweight neural network for fast and interactive edge detection in 3D point clouds / Chems-Eddine Himeur in ACM Transactions on Graphics, TOG, Vol 41 n° 1 (February 2022)
Siamese Adversarial Network for image classification of heavy mineral grains / Huizhen Hao in Computers & geosciences, vol 159 (February 2022)
Three-Dimensional point cloud analysis for building seismic damage information / Fan Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 2 (February 2022)
Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi / Yafei Jing in Remote sensing, vol 14 n° 2 (January-2 2022)
Attributs de texture extraits d'images multispectrales acquises en conditions d'éclairage non contrôlées : application à l'agriculture de précision / Anis Amziane (2022)
Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)
Deep image translation with an affinity-based change prior for unsupervised multimodal change detection / Luigi Tommaso Luppino in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
Development of object detectors for satellite images by deep learning / Alissa Kouraeva (2022)
Effective triplet mining improves training of multi-scale pooled CNN for image retrieval / Federico Vaccaro in Machine Vision and Applications, vol 33 n° 1 (January 2022)
Histograms of oriented mosaic gradients for snapshot spectral image description / Lulu Chen in ISPRS Journal of photogrammetry and remote sensing, vol 183 (January 2022)
Multi-view urban scene classification with a complementary-information learning model / Wanxuan Geng in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 1 (January 2022)
Efficient occluded road extraction from high-resolution remote sensing imagery / Dejun Feng in Remote sensing, vol 13 n° 24 (December-2 2021)
Automatic registration of mobile mapping system Lidar points and panoramic-image sequences by relative orientation model / Ningning Zhu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 12 (December 2021)
Building detection with convolutional networks trained with transfer learning / Simon Šanca in Geodetski vestnik, vol 65 n° 4 (December 2021 - February 2022)
DiResNet: Direction-aware residual network for road extraction in VHR remote sensing images / Lei Ding in IEEE Transactions on geoscience and remote sensing, vol 59 n° 12 (December 2021)
Flexible Gabor-based superpixel-level unsupervised LDA for hyperspectral image classification / Sen Jia in IEEE Transactions on geoscience and remote sensing, vol 59 n° 12 (December 2021)
MSegnet, a practical network for building detection from high spatial resolution images / Bo Yu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 12 (December 2021)
Multi-model estimation of forest canopy closure by using red edge bands based on Sentinel-2 images / Yiying Hua in Forests, vol 12 n° 12 (December 2021)
OBIA-based extraction of artificial terrace damages in the Loess plateau of China from UAV photogrammetry / Xuan Fang in ISPRS International journal of geo-information, vol 10 n° 12 (December 2021)
Particle swarm optimization based water index (PSOWI) for mapping the water extents from satellite images / Mohammad Hossein Gamshadzaei in Geocarto international, vol 36 n° 20 ([01/12/2021])
Semi-automatic reconstruction of object lines using a smartphone's dual camera / Mohammed Aldelgawy in Photogrammetric record, Vol 36 n° 176 (December 2021)
The use of Otsu algorithm and multi-temporal airborne LiDAR data to detect building changes in urban space / Renato César Dos santos in Applied geomatics, vol 13 n° 4 (December 2021)
Footprint size design of large-footprint full-waveform LiDAR for forest and topography applications: A theoretical study / Xuebo Yang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 11 (November 2021)