Descripteur
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > extraction de traits caractéristiques
extraction de traits caractéristiques
Synonyme(s) : extraction des caractéristiques ; extraction de primitive
Documents disponibles dans cette catégorie (653)
A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery Type de document : Article/Communication Auteurs : Lucas Prado Osco, Auteur ; Mauro Dos Santos de Arruda, Auteur ; Diogo Nunes Gonçalves, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 1 - 17 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] carte agricole
[Termes IGN] Citrus sinensis
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] comptage
[Termes IGN] cultures
[Termes IGN] détection d'objet
[Termes IGN] extraction de la végétation
[Termes IGN] gestion durable
[Termes IGN] image captée par drone
[Termes IGN] maïs (céréale)
[Termes IGN] rendement agricole
Résumé : (auteur) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAV) is nowadays a cost-effective option for capturing images covering croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in a single step. Thus, developing an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images remains an important need for supporting the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants in highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e., recently planted and mature plants) and (b) a citrus orchard (Citrus sinensis Pera). The two datasets characterize different plant-density scenarios, at different locations, with different types of crops, and from different sensors and dates. This scheme was used to prove the robustness of the proposed approach and allows a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where information obtained within the plantation-row branch is passed to the plant-detection branch and fed back to the row branch; both are then refined by a Multi-Stage Refinement method.
In the corn plantation datasets (with both growth phases – young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856, and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to the results from other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated with the same task and dataset. For the plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task in the citrus orchard dataset. It returned an MAE equal to 1.409 citrus-trees per patch, MRE of 0.0615, precision of 0.922, recall of 0.911, and F-measure of 0.965. For the citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems. Numéro de notice : A2021-205 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2021.01.024 Date de publication en ligne : 13/02/2021 En ligne : https://doi.org/10.1016/j.isprsjprs.2021.01.024 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
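The abstract reports per-patch counting metrics (MAE, MRE) alongside detection metrics (precision, recall, F-measure). As a minimal sketch of how such metrics are typically computed — the function names are illustrative, not the authors' code:

```python
import numpy as np

def counting_metrics(pred_counts, true_counts):
    """Per-patch counting errors, as reported in plant-counting studies."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    mae = np.mean(np.abs(pred - true))           # mean absolute error
    mre = np.mean(np.abs(pred - true) / true)    # mean relative error
    return mae, mre

def f_measure(precision, recall):
    """Harmonic mean of detection precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

Note that the paper's F-measure (0.876) is an aggregate over image patches, so it need not equal the harmonic mean of the overall precision and recall values quoted.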
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 1 - 17 [article]
Exemplaires (3)
Code-barres    Cote      Support   Localisation              Section           Disponibilité
081-2021041    SL        Revue     Centre de documentation   Revues en salle   Disponible
081-2021043    DEP-RECP  Revue     LASTIG                    Dépôt en unité    Exclu du prêt
081-2021042    DEP-RECF  Revue     Nancy                     Dépôt en unité    Exclu du prêt

Extraction of sea ice cover by Sentinel-1 SAR based on support vector machine with unsupervised generation of training data / Xiao-Ming Li in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Titre : Extraction of sea ice cover by Sentinel-1 SAR based on support vector machine with unsupervised generation of training data Type de document : Article/Communication Auteurs : Xiao-Ming Li, Auteur ; Yan Sun, Auteur ; Qiang Zhang, Auteur Année de publication : 2021 Article en page(s) : pp 3040 - 3053 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] Arctique, océan
[Termes IGN] classification non dirigée
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] entropie
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] glace de mer
[Termes IGN] image radar moirée
[Termes IGN] image Sentinel-SAR
[Termes IGN] matrice de co-occurrence
[Termes IGN] niveau de gris (image)
[Termes IGN] polarisation croisée
[Termes IGN] rétrodiffusion
[Termes IGN] texture d'image
Résumé : (auteur) In this article, we focus on developing a novel method to extract sea ice cover (i.e., discrimination/classification of sea ice and open water) using Sentinel-1 (S1) cross-polarization [vertical–horizontal (VH) or horizontal–vertical (HV)] data in extra-wide (EW) swath mode, based on the support vector machine (SVM) method. The classification basis includes the S1 radar backscatter and texture features, which are calculated from S1 data using the gray level co-occurrence matrix (GLCM). In contrast to previous methods, where appropriate samples are manually selected to train the SVM to classify sea ice and open water, we propose a method for unsupervised generation of the training samples based on two GLCM texture features, i.e., entropy and homogeneity, which have contrasting characteristics on sea ice and open water. This eliminates most of the uncertainty in selecting training samples for machine learning and achieves automatic classification of sea ice and open water from S1 EW data. Comparisons based on a few cases show good agreement between the synthetic aperture radar (SAR)-derived sea ice cover obtained with the proposed method and visual inspection, with an accuracy of approximately 90%–95%. In addition, compared with the Ice Mapping System (IMS) sea ice cover analysis data over 728 S1 EW images, the accuracy of the extracted sea ice cover is more than 80%. Numéro de notice : A2021-284 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2020.3007789 Date de publication en ligne : 20/07/2020 En ligne : https://doi.org/10.1109/TGRS.2020.3007789 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97392
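The abstract's core idea — auto-labeling training samples from GLCM entropy and homogeneity, then fitting an SVM — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the single-offset GLCM, the percentile thresholds, and all function names are my assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def glcm_features(patch, levels=16):
    """Entropy and homogeneity of a single-offset (horizontal) GLCM."""
    hi = max(float(patch.max()), 1.0)
    q = (patch / hi * (levels - 1)).astype(int)       # quantize gray levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    nz = p[p > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))
    i, j = np.indices((levels, levels))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return entropy, homogeneity

def train_unsupervised(patches, lo_pct=30, hi_pct=70):
    """Auto-label the most confident patches by entropy, then fit an SVM.

    Low-entropy patches are taken as open water (0), high-entropy ones as
    sea ice (1); percentile thresholds are a heuristic, not the paper's rule."""
    feats = np.array([glcm_features(p) for p in patches])
    ent = feats[:, 0]
    lo, hi = np.percentile(ent, [lo_pct, hi_pct])
    idx = np.where((ent <= lo) | (ent >= hi))[0]
    labels = (ent[idx] >= hi).astype(int)
    return SVC(kernel="rbf", gamma="scale").fit(feats[idx], labels)
```

Smooth (water-like) patches concentrate the GLCM in few cells, giving low entropy and high homogeneity; textured (ice-like) patches spread it out, so the two classes separate without any manual labeling.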
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 3040 - 3053 [article]

A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection / Xi Wu in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection Type de document : Article/Communication Auteurs : Xi Wu, Auteur ; Zhenwei Shi, Auteur ; Zhengxia Zou, Auteur Année de publication : 2021 Article en page(s) : pp 87 - 104 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] altitude
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection des nuages
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion de données
[Termes IGN] image Gaofen
[Termes IGN] information géographique
[Termes IGN] latitude
[Termes IGN] longitude
[Termes IGN] modèle statistique
[Termes IGN] neige
[Termes IGN] Normalized Difference Snow Index
Résumé : (auteur) Geographic information such as the altitude, latitude, and longitude are common but fundamental meta-records in remote sensing image products. In this paper, it is shown that such a group of records provides important priors for cloud and snow detection in remote sensing imagery. The intuition comes from common geographical knowledge, much of which is important but often overlooked. For example, it is generally known that snow is less likely to exist in low-latitude or low-altitude areas, and clouds in different geographic regions may have different visual appearances. Previous cloud and snow detection methods simply ignore such information and perform detection solely based on the image data (band reflectance). Due to the neglect of such priors, most of these methods struggle to obtain satisfactory performance in complex scenarios (e.g., cloud-snow coexistence). In this paper, a novel neural network called "Geographic Information-driven Network (GeoInfoNet)" is proposed for cloud and snow detection. In addition to the image data, the model integrates the geographic information at both the training and detection phases. A "geographic information encoder" is specially designed, which encodes the altitude, latitude, and longitude of the imagery into a set of auxiliary maps and then feeds them to the detection network. The proposed network can be trained in an end-to-end fashion, with dense robust features extracted and fused. A new dataset called "Levir_CS" for cloud and snow detection is built, which contains 4,168 Gaofen-1 satellite images and corresponding geographical records, and is over 20× larger than other datasets in this field. On "Levir_CS", experiments show that the method achieves 90.74% intersection over union for cloud and 78.26% intersection over union for snow. It outperforms other state-of-the-art cloud and snow detection methods by a large margin.
Feature visualizations also show that the method learns important priors that are consistent with common geographical knowledge. The proposed dataset and the code of GeoInfoNet are available at https://github.com/permanentCH5/GeoInfoNet. Numéro de notice : A2021-209 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2021.01.023 Date de publication en ligne : 22/02/2021 En ligne : https://doi.org/10.1016/j.isprsjprs.2021.01.023 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97187
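The "geographic information encoder" described above broadcasts scene-level records into per-pixel auxiliary maps that are fused with the image bands. A minimal numpy sketch of that idea — the normalization ranges and function names are illustrative assumptions, not the paper's design:

```python
import numpy as np

def encode_geo_maps(h, w, altitude_m, lat_deg, lon_deg):
    """Broadcast scene-level geographic records into per-pixel auxiliary maps.

    Normalization ranges (0-9000 m altitude, +/-90 deg latitude,
    +/-180 deg longitude) are illustrative choices only."""
    alt = np.full((h, w), altitude_m / 9000.0)
    lat = np.full((h, w), lat_deg / 90.0)
    lon = np.full((h, w), lon_deg / 180.0)
    return np.stack([alt, lat, lon])              # shape (3, H, W)

def fuse_with_bands(bands, geo_maps):
    """Concatenate auxiliary maps with image bands along the channel axis,
    so a detection network sees geography alongside band reflectance."""
    return np.concatenate([bands, geo_maps], axis=0)
```

In the actual network the fused tensor would feed a CNN; here the point is only that geography becomes extra input channels rather than a separate record.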
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 87 - 104 [article]
Exemplaires (3)
Code-barres    Cote      Support   Localisation              Section           Disponibilité
081-2021041    SL        Revue     Centre de documentation   Revues en salle   Disponible
081-2021043    DEP-RECP  Revue     LASTIG                    Dépôt en unité    Exclu du prêt
081-2021042    DEP-RECF  Revue     Nancy                     Dépôt en unité    Exclu du prêt

A novel class-specific object-based method for urban change detection using high-resolution remote sensing imagery / Ting Bai in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
[article]
Titre : A novel class-specific object-based method for urban change detection using high-resolution remote sensing imagery Type de document : Article/Communication Auteurs : Ting Bai, Auteur ; Kaimin Sun, Auteur ; Wenzhuo Li, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 249-262 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] changement d'occupation du sol
[Termes IGN] classe d'objets
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] détection de changement
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] milieu urbain
[Termes IGN] segmentation multi-échelle
Résumé : (Auteur) A single-scale object-based change-detection classifier can distinguish only global changes in land cover, not the more granular and local changes in urban areas. To overcome this issue, a novel class-specific object-based change-detection method is proposed. This method includes three steps: class-specific scale selection, class-specific classifier selection, and land cover change detection. The first step combines multi-resolution segmentation and a random forest to select the optimal scale for each change type in land cover. The second step links multi-scale hierarchical sampling with a classifier such as random forest, support vector machine, gradient-boosting decision tree, or Adaboost; the algorithm automatically selects the optimal classifier for each change type in land cover. The final step employs the optimal classifier to detect binary changes and from-to changes for each change type in land cover. To validate the proposed method, we applied it to two high-resolution data sets in urban areas and compared the change-detection results of our proposed method with those of principal component analysis k-means, object-based change vector analysis, and support vector machine. The experimental results show that our proposed method is more accurate than the other methods. The proposed method can address the high levels of complexity found in urban areas, although it requires historical land cover maps as auxiliary data. Numéro de notice : A2021-332 Affiliation des auteurs : non IGN Thématique : IMAGERIE/URBANISME Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.14358/PERS.87.4.249 Date de publication en ligne : 01/04/2021 En ligne : https://doi.org/10.14358/PERS.87.4.249 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97528
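The second step described in the abstract — automatically selecting the best classifier per change type among random forest, SVM, GBDT, and AdaBoost — can be sketched with scikit-learn. The candidate list matches the abstract, but the selection criterion (mean cross-validated accuracy) and all hyperparameters are my assumptions, not the paper's:

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Candidate classifiers named in the abstract; hyperparameters illustrative.
CANDIDATES = {
    "random_forest": lambda: RandomForestClassifier(n_estimators=50, random_state=0),
    "svm": lambda: SVC(random_state=0),
    "gbdt": lambda: GradientBoostingClassifier(random_state=0),
    "adaboost": lambda: AdaBoostClassifier(random_state=0),
}

def select_classifier(X, y, cv=3):
    """For one change type, pick the candidate with the best mean
    cross-validated accuracy, then refit it on all samples."""
    scores = {name: cross_val_score(make(), X, y, cv=cv).mean()
              for name, make in CANDIDATES.items()}
    best = max(scores, key=scores.get)
    return best, CANDIDATES[best]().fit(X, y)
```

Run once per change type in land cover, each with its own training samples, so different change types can end up with different classifiers.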
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 4 (April 2021) . - pp 249-262 [article]
Exemplaires (1)
Code-barres    Cote   Support   Localisation              Section           Disponibilité
105-2021041    SL     Revue     Centre de documentation   Revues en salle   Disponible

L'oeil de l'espace / Anonyme in Géomètre, n° 2190 (avril 2021)
[article]
Titre : L'oeil de l'espace Type de document : Article/Communication Auteurs : Anonyme, Auteur Année de publication : 2021 Article en page(s) : pp 45 - 45 Langues : Français (fre) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] acquisition d'images
[Termes IGN] détection de changement
[Termes IGN] détection du bâti
[Termes IGN] données localisées
[Termes IGN] droit foncier
[Termes IGN] image aérienne
Résumé : (Auteur) Plus rien n'échappe à la télédétection. S'il est envisageable de se cacher derrière une clôture, ce n'est plus possible depuis le ciel ou l'espace. Numéro de notice : A2021-325 Affiliation des auteurs : non IGN Thématique : IMAGERIE/URBANISME Nature : Article nature-HAL : ArtSansCL DOI : sans Date de publication en ligne : 07/04/2021 Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97483
in Géomètre > n° 2190 (avril 2021) . - pp 45 - 45 [article]
Exemplaires (1)
Code-barres    Cote   Support   Localisation              Section           Disponibilité
063-2021041    RAB    Revue     Centre de documentation   En réserve L003   Disponible

Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
A skyline-based approach for mobile augmented reality / Mehdi Ayadi in The Visual Computer, vol 37 n° 4 (April 2021)
Tree extraction and estimation of walnut structure parameters using airborne LiDAR data / Javier Estornell in International journal of applied Earth observation and geoinformation, vol 96 (April 2021)
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
Basin-scale high-resolution extraction of drainage networks using 10-m Sentinel-2 imagery / Zifeng Wang in Remote sensing of environment, Vol 255 (March 2021)
Characterizing urban land changes of 30 global megacities using nighttime light time series stacks / Qiming Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Feature detection and description for image matching: from hand-crafted design to deep learning / Lin Chen in Geo-spatial Information Science, vol 24 n° 1 (March 2021)
Learning from GPS trajectories of floating car for CNN-based urban road extraction with high-resolution satellite imagery / Ju Zhang in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
Passive radar imaging of ship targets with GNSS signals of opportunity / Debora Pastina in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)