Descriptor
IGN descriptor terms > natural sciences > physics > image processing > digital image analysis > feature extraction
feature extraction (French descriptor: extraction de traits caractéristiques). Synonym(s): extraction des caractéristiques, extraction de primitive.


Automated street tree inventory using mobile LiDAR point clouds based on Hough transform and active contours / Amir Hossein Safaie in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: Automated street tree inventory using mobile LiDAR point clouds based on Hough transform and active contours
Document type: Article/Communication
Authors: Amir Hossein Safaie, Author; Heidar Rastiveis, Author; Alireza Shams, Author; et al., Author
Publication year: 2021
Pages: pp 19 - 34
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Lasergrammetry
[IGN descriptor terms] remarkable tree
[IGN descriptor terms] urban tree
[IGN descriptor terms] tree detection
[IGN descriptor terms] edge detection
[IGN descriptor terms] lidar data
[IGN descriptor terms] 3D geolocated data
[IGN descriptor terms] forest inventory (techniques and methods)
[IGN descriptor terms] road safety
[IGN descriptor terms] point cloud
[IGN descriptor terms] tessellation
[IGN descriptor terms] Hough transform
Abstract: (author) Trees are important road-side objects, and their geometric information plays an essential role in road studies and safety analyses. This paper proposes an efficient method for the automated creation of a road-side tree inventory from Mobile Terrestrial Lidar System (MTLS) point clouds. In the proposed method, ground points are filtered out during preprocessing to reduce processing time. Next, tree trunks are detected by applying a Hough Transform (HT) algorithm to several raster images generated from the point clouds. An approximate area of each tree's foliage is then initialised with a Voronoi Tessellation (VT) algorithm, and the accurate foliage boundary is identified by applying Active Contour (AC) models. By extracting the points within this foliage boundary, the geometric characteristics of each tree are obtained. The method was evaluated on two sample point clouds from different MTLS systems, and the algorithm correctly extracted all of the trees from both datasets. Additionally, comparison of the calculated parameters with manually observed measures showed that the accuracy of the obtained geometric parameters was promising.
Record number: A2021-206
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.026
Online publication date: 14/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.026
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97183
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 19 - 34
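The trunk-detection step summarised in this record's abstract (rasterising the cloud and running a circular Hough transform) can be pictured with a short sketch. This is a minimal, hypothetical illustration rather than the authors' implementation: the array `points`, the 5 cm cell size, the breast-height slice and the candidate radii are all assumptions made for the example.

```python
# Hypothetical sketch of Hough-based trunk detection on a rasterised
# point-cloud slice; parameters are illustrative, not taken from the paper.
import numpy as np
from skimage.transform import hough_circle, hough_circle_peaks

def detect_trunks(points, cell=0.05, z_min=1.2, z_max=1.4, radii_m=(0.1, 0.2, 0.3)):
    """Return approximate (x, y) trunk centres from a ground-filtered cloud."""
    # keep only a thin horizontal slice around breast height
    sl = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    # rasterise the slice into a binary occupancy image (rows = x, columns = y)
    x_edges = np.arange(sl[:, 0].min(), sl[:, 0].max() + cell, cell)
    y_edges = np.arange(sl[:, 1].min(), sl[:, 1].max() + cell, cell)
    img, _, _ = np.histogram2d(sl[:, 0], sl[:, 1], bins=(x_edges, y_edges))
    img = (img > 0).astype(float)
    # circular Hough transform over candidate trunk radii converted to pixels
    radii_px = np.round(np.asarray(radii_m) / cell).astype(int)
    accum = hough_circle(img, radii_px)
    _, cols, rows, _ = hough_circle_peaks(accum, radii_px, total_num_peaks=20)
    # map pixel indices back to ground coordinates (row index -> x, column -> y)
    return np.c_[x_edges[rows], y_edges[cols]]
```

The foliage step described in the abstract could then start from these centres, for instance with scipy.spatial.Voronoi to seed each crown region and skimage.segmentation.active_contour to refine its boundary.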
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021041 | SL | Journal | Documentation centre | Journals room | Available
081-2021043 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery
Document type: Article/Communication
Authors: Lucas Prado Osco, Author; Mauro Dos Santos de Arruda, Author; Diogo Nunes Gonçalves, Author; et al., Author
Publication year: 2021
Pages: pp 1 - 17
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] agricultural map
[IGN descriptor terms] Citrus sinensis
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] counting
[IGN descriptor terms] crops
[IGN descriptor terms] object detection
[IGN descriptor terms] vegetation extraction
[IGN descriptor terms] sustainable management
[IGN descriptor terms] UAV imagery
[IGN descriptor terms] maize (cereal)
[IGN descriptor terms] agricultural yield
Abstract: (author) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAV) is a cost-effective option for capturing images covering croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in one step. Thus, developing an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images remains an important demand to support the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e. recently planted and mature plants) and in (b) a citrus orchard (Citrus sinensis Pera). The two datasets characterize different plant-density scenarios, in different locations, with different types of crops, and from different sensors and dates. This scheme was used to prove the robustness of the proposed approach, allowing a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where the information obtained within the plantation-row is passed to the plant detection branch and fed back to the row branch; both branches are then refined by a Multi-Stage Refinement method. In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to the results from other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE equal to 1.409 citrus trees per patch, an MRE of 0.0615, a precision of 0.922, a recall of 0.911, and an F-measure of 0.965. For citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems.
Record number: A2021-205
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.024
Online publication date: 13/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.024
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 1 - 17
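As a reading aid for the two-branch idea described in the abstract, the sketch below shows a deliberately simplified PyTorch module. It is an assumption made for illustration, not the authors' architecture or their Multi-Stage Refinement: a shared backbone feeds a plantation-row head, and the predicted row map is concatenated back into the features used by the plant-counting head.

```python
# Simplified two-branch sketch: a row-confidence head whose output is fed
# into the plant-confidence head; layer sizes are placeholders.
import torch
import torch.nn as nn

class TwoBranchCounter(nn.Module):
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.row_head = nn.Conv2d(feat, 1, 1)        # plantation-row confidence map
        self.plant_head = nn.Conv2d(feat + 1, 1, 1)  # plant confidence map, sees the row map

    def forward(self, x):
        f = self.backbone(x)
        row_map = torch.sigmoid(self.row_head(f))
        # the row prediction is concatenated with the shared features ("retro-feed")
        plant_map = torch.sigmoid(self.plant_head(torch.cat([f, row_map], dim=1)))
        return plant_map, row_map

# plants would then be counted as local maxima of plant_map above a threshold
```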
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021041 | SL | Journal | Documentation centre | Journals room | Available
081-2021043 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Extraction of sea ice cover by Sentinel-1 SAR based on support vector machine with unsupervised generation of training data / Xiao-Ming Li in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title: Extraction of sea ice cover by Sentinel-1 SAR based on support vector machine with unsupervised generation of training data
Document type: Article/Communication
Authors: Xiao-Ming Li, Author; Yan Sun, Author; Qiang Zhang, Author
Publication year: 2021
Pages: pp 3040 - 3053
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN descriptor terms] Arctic Ocean
[IGN descriptor terms] unsupervised classification
[IGN descriptor terms] support vector machine classification
[IGN descriptor terms] training data
[IGN descriptor terms] entropy
[IGN descriptor terms] feature extraction
[IGN descriptor terms] sea ice
[IGN descriptor terms] speckled radar image
[IGN descriptor terms] Sentinel SAR image
[IGN descriptor terms] co-occurrence matrix
[IGN descriptor terms] grey level (image)
[IGN descriptor terms] cross-polarization
[IGN descriptor terms] backscatter
[IGN descriptor terms] texture
Abstract: (author) In this article, we focus on developing a novel method to extract sea ice cover (i.e., discrimination/classification of sea ice and open water) from Sentinel-1 (S1) cross-polarization [vertical-horizontal (VH) or horizontal-vertical (HV)] data in extra-wide (EW) swath mode, based on the support vector machine (SVM) method. The classification basis includes the S1 radar backscatter and texture features, which are calculated from S1 data using the gray level co-occurrence matrix (GLCM). Unlike previous methods, where appropriate samples are manually selected to train the SVM to classify sea ice and open water, we propose a method for unsupervised generation of the training samples based on two GLCM texture features, entropy and homogeneity, that have contrasting characteristics on sea ice and open water. We thereby eliminate most of the uncertainty in selecting training samples for machine learning and achieve automatic classification of sea ice and open water from S1 EW data. Comparisons based on a few cases show good agreement between the synthetic aperture radar (SAR)-derived sea ice cover obtained with the proposed method and visual inspection, with an accuracy of approximately 90%–95%. In addition, compared with the Ice Mapping System (IMS) sea ice cover analysis data over 728 S1 EW images, the accuracy of the sea ice cover extracted from S1 data is more than 80%.
Record number: A2021-284
Authors' affiliation: non IGN
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3007789
Online publication date: 20/07/2020
Online: https://doi.org/10.1109/TGRS.2020.3007789
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97392
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 3040 - 3053
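The unsupervised sample-generation idea in the abstract can be pictured with the short sketch below. It is a hypothetical illustration, not the authors' code: `sigma0` is assumed to be a VH backscatter image scaled to 8-bit grey levels, and the entropy/homogeneity thresholds (and their directions) are placeholders standing in for the contrast the paper exploits.

```python
# Hypothetical GLCM + SVM sketch; thresholds and the ice/water directions
# are placeholders, not values from the paper.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def patch_features(patch, levels=32):
    """Mean backscatter, GLCM homogeneity and GLCM entropy for one patch."""
    q = (patch / 256.0 * levels).astype(np.uint8)   # quantise the grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    p = glcm[:, :, 0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy is not in graycoprops
    return np.array([patch.mean(), homogeneity, entropy])

def auto_train_svm(sigma0, size=64):
    """Label confident ice/water patches from texture alone, then fit an SVM."""
    feats = np.array([
        patch_features(sigma0[i:i + size, j:j + size])
        for i in range(0, sigma0.shape[0] - size, size)
        for j in range(0, sigma0.shape[1] - size, size)
    ])
    # unsupervised labelling from the contrasting texture behaviour
    water = (feats[:, 2] < 2.0) & (feats[:, 1] > 0.7)
    ice = (feats[:, 2] > 4.0) & (feats[:, 1] < 0.4)
    X = np.vstack([feats[water], feats[ice]])
    y = np.r_[np.zeros(water.sum()), np.ones(ice.sum())]
    return SVC(kernel="rbf").fit(X, y)   # classify the remaining patches with .predict
```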
A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection / Xi Wu in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection
Document type: Article/Communication
Authors: Xi Wu, Author; Zhenwei Shi, Author; Zhengxia Zou, Author
Publication year: 2021
Pages: pp 87 - 104
General note: bibliography
Language(s): English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] altitude
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] cloud detection
[IGN descriptor terms] feature extraction
[IGN descriptor terms] data fusion
[IGN descriptor terms] Gaofen image
[IGN descriptor terms] geographic information
[IGN descriptor terms] latitude
[IGN descriptor terms] longitude
[IGN descriptor terms] statistical model
[IGN descriptor terms] snow
[IGN descriptor terms] Normalized Difference Snow Index
Abstract: (author) Geographic information such as altitude, latitude, and longitude are common but fundamental meta-records in remote sensing image products. In this paper, it is shown that this group of records provides important priors for cloud and snow detection in remote sensing imagery. The intuition comes from common geographical knowledge that is important but often overlooked. For example, snow is less likely to exist in low-latitude or low-altitude areas, and clouds in different geographic regions may have various visual appearances. Previous cloud and snow detection methods simply ignore such information and perform detection solely on the image data (band reflectance). Because they neglect these priors, most of these methods struggle to obtain satisfactory performance in complex scenarios (e.g., cloud-snow coexistence). In this paper, a novel neural network called "Geographic Information-driven Network (GeoInfoNet)" is proposed for cloud and snow detection. In addition to the image data, the model integrates geographic information at both the training and detection phases. A "geographic information encoder" is specially designed, which encodes the altitude, latitude, and longitude of the imagery into a set of auxiliary maps and then feeds them to the detection network. The proposed network can be trained in an end-to-end fashion, with dense robust features extracted and fused. A new dataset called "Levir_CS" for cloud and snow detection is built, which contains 4,168 Gaofen-1 satellite images and the corresponding geographical records, and is over 20× larger than other datasets in this field. On "Levir_CS", experiments show that the method achieves 90.74% intersection over union for cloud and 78.26% intersection over union for snow. It outperforms other state-of-the-art cloud and snow detection methods by a large margin. Feature visualizations also show that the method learns important priors that are consistent with common geographic knowledge. The proposed dataset and the code of GeoInfoNet are available at https://github.com/permanentCH5/GeoInfoNet.
Record number: A2021-209
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.023
Online publication date: 22/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.023
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97187
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 87 - 104
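The "geographic information encoder" described above can be illustrated with a minimal sketch. This is an assumption-laden reading of the abstract rather than the released GeoInfoNet code (which is linked in the record): the scene's altitude, latitude and longitude are broadcast into per-pixel auxiliary maps and stacked with the image bands before they enter the detection network.

```python
# Minimal sketch: scalar geographic records turned into auxiliary bands.
import torch

def encode_geo_info(image, altitude_m, lat_deg, lon_deg):
    """image: tensor of shape (B, C, H, W); the scalars describe the scene footprint."""
    b, _, h, w = image.shape
    # normalise the records to roughly [-1, 1] so they behave like extra bands
    geo = torch.tensor([altitude_m / 9000.0, lat_deg / 90.0, lon_deg / 180.0])
    aux = geo.view(1, 3, 1, 1).expand(b, 3, h, w).to(image)
    return torch.cat([image, aux], dim=1)   # (B, C + 3, H, W), fed to the CNN
```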
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021041 | SL | Journal | Documentation centre | Journals room | Available
081-2021043 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
[article]
Title: L'oeil de l'espace
Document type: Article/Communication
Authors: Anonymous, Author
Publication year: 2021
Pages: pp 45 - 45
Language(s): French (fre)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] image acquisition
[IGN descriptor terms] change detection
[IGN descriptor terms] building detection
[IGN descriptor terms] geolocated data
[IGN descriptor terms] land law
[IGN descriptor terms] aerial image
Abstract: (author) Nothing escapes remote sensing any more. You may be able to hide behind a fence, but not from the sky or from space.
Record number: A2021-325
Theme: IMAGERIE/URBANISME
Nature: Article
nature-HAL: ArtSansCL
DOI: none
Online publication date: 07/04/2021
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97483
in Géomètre > n° 2190 (April 2021) . - pp 45 - 45
Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
The delineation of tea gardens from high resolution digital orthoimages using mean-shift and supervised machine learning methods / Akhtar Jamil in Geocarto international, vol 36 n° 7 ([01/04/2021])
Tree extraction and estimation of walnut structure parameters using airborne LiDAR data / Javier Estornell in International journal of applied Earth observation and geoinformation, vol 96 (April 2021)
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
Basin-scale high-resolution extraction of drainage networks using 10-m Sentinel-2 imagery / Zifeng Wang in Remote sensing of environment, Vol 255 (March 2021)
Characterizing urban land changes of 30 global megacities using nighttime light time series stacks / Qiming Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Feature detection and description for image matching: from hand-crafted design to deep learning / Lin Chen in Geo-spatial Information Science, vol 24 n° 1 (March 2021)
Learning from GPS trajectories of floating car for CNN-based urban road extraction with high-resolution satellite imagery / Ju Zhang in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Passive radar imaging of ship targets with GNSS signals of opportunity / Debora Pastina in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Saline-soil deformation extraction based on an improved time-series InSAR approach / Wei Xiang in ISPRS International journal of geo-information, vol 10 n° 3 (March 2021)
Automatic filtering and 2D modeling of airborne laser scanning building point cloud / Fayez Tarsha-Kurdi in Transactions in GIS, Vol 25 n° 1 (February 2021)
Curved buildings reconstruction from airborne LiDAR data by matching and deforming geometric primitives / Jingwei Song in IEEE Transactions on geoscience and remote sensing, vol 59 n° 2 (February 2021)
SAR image speckle reduction based on nonconvex hybrid total variation model / Yuli Sun in IEEE Transactions on geoscience and remote sensing, vol 59 n° 2 (February 2021)
Building extraction from Lidar data using statistical methods / Haval Abdul-Jabbar Sadeq in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 1 (January 2021)
Connecting images through time and sources: Introducing low-data, heterogeneous instance retrieval / Dimitri Gominski (2021)
Extraction of street pole-like objects based on plane filtering from mobile LiDAR data / Jingming Tu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
FuNet: A novel road extraction network with fusion of location data and remote sensing imagery / Kai Zhou in ISPRS International journal of geo-information, vol 10 n° 1 (January 2021)
Image matching from handcrafted to deep features: A survey / Jiayi Ma in International journal of computer vision, vol 29 n° 1 (January 2021)
LANet: Local attention embedding to improve the semantic segmentation of remote sensing images / Lei Ding in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
Relation-constrained 3D reconstruction of buildings in metropolitan areas from photogrammetric point clouds / Yuan Li in Remote sensing, vol 13 n° 1 (January 2021)