Descriptor
Documents available in this category (426)
A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery
Document type: Article/Communication
Authors: Lucas Prado Osco; Mauro Dos Santos de Arruda; Diogo Nunes Gonçalves; et al.
Publication year: 2021
Pages: pp 1-17
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] agricultural map
[IGN terms] Citrus sinensis
[IGN terms] convolutional neural network classification
[IGN terms] counting
[IGN terms] crops
[IGN terms] object detection
[IGN terms] vegetation extraction
[IGN terms] UAV-acquired imagery
[IGN terms] sustainable management
[IGN terms] maize (cereal)
[IGN terms] agricultural yield
Abstract: (author) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAV) is currently a cost-effective option for capturing images of croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in a single step. Thus, an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images remains an important need for supporting the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e., recently planted and mature plants) and (b) a citrus orchard (Citrus sinensis Pera). The two datasets characterize different plant-density scenarios, in different locations, with different types of crops, and from different sensors and dates. This scheme was used to demonstrate the robustness of the proposed approach and to allow a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where information obtained within the plantation-row branch is passed to the plant-detection branch and fed back to the row branch; both are then refined by a Multi-Stage Refinement method.
In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to those of other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model on a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE equal to 1.409 citrus trees per patch, an MRE of 0.0615, a precision of 0.922, a recall of 0.911, and an F-measure of 0.965. For citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems.
Record number: A2021-205
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.024
Online publication date: 13/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.024
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
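Plant-counting approaches like the one summarized above typically reduce, at inference time, to finding local maxima in a per-pixel confidence map. The sketch below is only an illustration of that final peak-counting step, not the authors' code; `detect_peaks` and the toy map are invented for the example.

```python
import numpy as np

def detect_peaks(conf_map, threshold=0.5, radius=1):
    """Return (row, col) coordinates of local maxima above `threshold`.

    A pixel counts as one detected plant if it is >= every neighbour
    within `radius` and its confidence exceeds the threshold.
    """
    peaks = []
    h, w = conf_map.shape
    for r in range(h):
        for c in range(w):
            v = conf_map[r, c]
            if v < threshold:
                continue
            window = conf_map[max(r - radius, 0):r + radius + 1,
                              max(c - radius, 0):c + radius + 1]
            if v >= window.max():
                peaks.append((r, c))
    return peaks

# Toy confidence map with two clear "plants".
cm = np.zeros((8, 8))
cm[2, 2] = 0.9
cm[5, 6] = 0.8
print(len(detect_peaks(cm)))  # 2
```

In practice a CNN would produce `conf_map`, and a non-maximum-suppression radius would be tuned to the expected plant spacing.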
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021). - pp 1-17 [article]
Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2021041 | SL | Journal | Centre de documentation | Journals room | Available
081-2021043 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery / Seth Goodman in Transactions in GIS, Vol 25 n° 2 (April 2021)
[article]
Title: A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery
Document type: Article/Communication
Authors: Seth Goodman; Ariel BenYishay; Daniel Runfola
Publication year: 2021
Pages: pp 674-691
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] conflict
[IGN terms] Landsat-8 imagery
[IGN terms] implementation (computing)
[IGN terms] Nigeria
[IGN terms] prediction
[IGN terms] convolutional neural network
Abstract: (author) Convolutional neural networks (CNNs) trained with satellite imagery have been successfully used to generate measures of development indicators, such as poverty, in developing nations. This article explores a CNN-based approach leveraging Landsat 8 imagery to predict locations of conflict-related deaths. Using Nigeria as a case study, we use the Armed Conflict Location & Event Data (ACLED) dataset to identify locations of conflict events that did or did not result in a death. Imagery for each location is used as an input to train a CNN to distinguish fatal from non-fatal events. Using 2014 imagery, we are able to predict the result of conflict events in the following year (2015) with 80% accuracy. While our approach does not replace the need for causal studies into the drivers of conflict death, it provides a low-cost solution to prediction that requires only publicly available imagery to implement. Findings suggest that the information contained in moderate-resolution imagery can be used to predict the likelihood of a death due to conflict at a given location in Nigeria in the following year, and that CNN-based methods of estimating development-related indicators may be effective in applications beyond those explored in the literature.
Record number: A2021-361
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1111/tgis.12661
Online publication date: 13/07/2020
Online: https://doi.org/10.1111/tgis.12661
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97625
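The dataset-construction step described in this abstract — pairing each ACLED event location with an image chip and a fatal/non-fatal label — can be sketched as follows. `make_examples` and the stub patch loader are hypothetical names invented for illustration, not from the paper.

```python
def make_examples(events, load_patch):
    """Build (image_patch, binary_label) training pairs.

    events: iterable of dicts with 'lat', 'lon', 'fatalities'.
    load_patch: callable (lat, lon) -> image chip, e.g. a Landsat 8 cutout.
    """
    examples = []
    for e in events:
        patch = load_patch(e["lat"], e["lon"])
        label = 1 if e["fatalities"] > 0 else 0  # fatal vs non-fatal event
        examples.append((patch, label))
    return examples

# Two toy event records; the loader is stubbed out.
events = [{"lat": 9.1, "lon": 7.4, "fatalities": 3},
          {"lat": 6.5, "lon": 3.4, "fatalities": 0}]
pairs = make_examples(events, load_patch=lambda lat, lon: None)
print([lbl for _, lbl in pairs])  # [1, 0]
```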
in Transactions in GIS > Vol 25 n° 2 (April 2021). - pp 674-691 [article]
Detecting ground deformation in the built environment using sparse satellite InSAR data with a convolutional neural network / Nantheera Anantrasirichai in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title: Detecting ground deformation in the built environment using sparse satellite InSAR data with a convolutional neural network
Document type: Article/Communication
Authors: Nantheera Anantrasirichai; Juliet Biggs; Krisztina Kelevitz; et al.
Publication year: 2021
Pages: pp 2940-2950
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] machine learning
[IGN terms] built environment
[IGN terms] convolutional neural network classification
[IGN terms] covariance
[IGN terms] crustal deformation
[IGN terms] training data (machine learning)
[IGN terms] atmospheric effect
[IGN terms] moiré radar image
[IGN terms] synthetic aperture radar interferometry
[IGN terms] spatial interpolation
[IGN terms] matrix
[IGN terms] optimization (mathematics)
[IGN terms] sparse representation
[IGN terms] United Kingdom
[IGN terms] urban area
Abstract: (author) The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services. However, simple analysis techniques like thresholding cannot detect and classify complex deformation signals reliably, making it a challenge to provide usable information to a broad range of non-expert stakeholders. Here, we explore the applicability of deep learning approaches by adapting a pretrained convolutional neural network (CNN) to detect deformation in a national-scale velocity field. For our proof of concept, we focus on the U.K., where previously identified deformation is associated with coal mining, groundwater withdrawal, landslides, and tunneling. The sparsity of measurement points and the presence of spike noise make this a challenging application for deep learning networks, which involve calculations of the spatial convolution between images. Moreover, insufficient ground truth data exist to construct a balanced training data set, and the deformation signals are slower and more localized than in previous applications. We propose three enhancement methods to tackle these problems: 1) spatial interpolation with modified matrix completion; 2) a synthetic training data set based on the characteristics of the real U.K. velocity map; and 3) enhanced overwrapping techniques. Using velocity maps spanning 2015–2019, our framework detects several areas of coal-mining subsidence, uplift due to dewatering, slate quarries, landslides, and tunnel engineering works. The results demonstrate the potential applicability of the proposed framework to the development of automated ground motion analysis systems.
Record number: A2021-283
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s12518-020-00323-6
Online publication date: 31/08/2020
Online: https://doi.org/10.1007/s12518-020-00323-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97391
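To give a feel for the first enhancement mentioned in the abstract — filling a sparse velocity field before applying spatial convolutions — here is a deliberately simplified numpy stand-in: iterative neighbour averaging rather than the paper's modified matrix completion, with all names and the toy grid invented for this sketch.

```python
import numpy as np

def fill_sparse(grid, mask, iters=50):
    """Fill missing cells (mask == False) by repeatedly averaging
    already-known 4-neighbours. A crude stand-in for the paper's
    modified matrix completion; known cells are never altered.
    """
    filled = np.where(mask, grid, 0.0)
    known = mask.copy()
    for _ in range(iters):
        padded = np.pad(filled, 1)          # zero-pad the values
        kpad = np.pad(known, 1)             # pad the known-mask with False
        neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        neigh_cnt = (kpad[:-2, 1:-1].astype(int) + kpad[2:, 1:-1] +
                     kpad[1:-1, :-2] + kpad[1:-1, 2:])
        update = ~known & (neigh_cnt > 0)   # unknown cells with known neighbours
        filled[update] = neigh_sum[update] / neigh_cnt[update]
        known = known | update
    return filled

# 2x2 toy velocity field with two known and two missing measurements.
vel = np.array([[1.0, 0.0], [0.0, 3.0]])
mask = np.array([[True, False], [False, True]])
out = fill_sparse(vel, mask)
print(out[0, 1])  # average of the known neighbours 1.0 and 3.0 -> 2.0
```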
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021). - pp 2940-2950 [article]
Extraction of sea ice cover by Sentinel-1 SAR based on support vector machine with unsupervised generation of training data / Xiao-Ming Li in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Title: Extraction of sea ice cover by Sentinel-1 SAR based on support vector machine with unsupervised generation of training data
Document type: Article/Communication
Authors: Xiao-Ming Li; Yan Sun; Qiang Zhang
Publication year: 2021
Pages: pp 3040-3053
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] Arctic Ocean
[IGN terms] unsupervised classification
[IGN terms] support vector machine classification
[IGN terms] training data (machine learning)
[IGN terms] entropy
[IGN terms] feature extraction
[IGN terms] sea ice
[IGN terms] moiré radar image
[IGN terms] Sentinel SAR imagery
[IGN terms] co-occurrence matrix
[IGN terms] gray level (image)
[IGN terms] cross-polarization
[IGN terms] backscatter
[IGN terms] image texture
Abstract: (author) In this article, we focus on developing a novel method to extract sea ice cover (i.e., discrimination/classification of sea ice and open water) using Sentinel-1 (S1) cross-polarization [vertical-horizontal (VH) or horizontal-vertical (HV)] data in extra-wide (EW) swath mode, based on the support vector machine (SVM) method. The classification basis includes the S1 radar backscatter and texture features, which are calculated from S1 data using the gray level co-occurrence matrix (GLCM). In contrast to previous methods, where appropriate samples are manually selected to train the SVM to classify sea ice and open water, we propose a method for unsupervised generation of training samples based on two GLCM texture features, entropy and homogeneity, that have contrasting characteristics on sea ice and open water. This eliminates most of the uncertainty of selecting training samples in machine learning and achieves automatic classification of sea ice and open water from S1 EW data. Comparisons based on several cases show good agreement between the synthetic aperture radar (SAR)-derived sea ice cover obtained with the proposed method and visual inspection, with an accuracy of approximately 90%-95%. In addition, compared with the analyzed Ice Mapping System (IMS) sea ice cover data, based on 728 S1 EW images, the accuracy of the sea ice cover extracted from S1 data is more than 80%.
Record number: A2021-284
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3007789
Online publication date: 20/07/2020
Online: https://doi.org/10.1109/TGRS.2020.3007789
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97392
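The unsupervised sample-generation idea above rests on GLCM entropy and homogeneity behaving differently over smooth open water and textured sea ice. The following compact numpy sketch illustrates that contrast; the quantization level count, the single horizontal offset, and the toy patches are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def glcm_features(patch, levels=8):
    """Entropy and homogeneity of a horizontal-offset GLCM (distance 1).

    A compact stand-in for a full GLCM texture computation: quantize the
    patch, count horizontally adjacent level pairs, normalize, then
    derive the two texture statistics.
    """
    q = (patch * levels / (patch.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    i, j = np.indices(p.shape)
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return entropy, homogeneity

# Toy patches: open water is smooth; sea ice is strongly textured.
rng = np.random.default_rng(0)
water = np.full((16, 16), 0.1) + rng.normal(0, 0.005, (16, 16))
ice = rng.uniform(0.0, 1.0, (16, 16))
e_w, _ = glcm_features(water)
e_i, _ = glcm_features(ice)
print(e_w < e_i)  # True: smooth water has lower GLCM entropy
```

Thresholding such statistics over an image yields high-confidence ice and water samples, which can then train an SVM without manual labeling — the essence of the scheme the abstract describes.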
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021). - pp 3040-3053 [article]
A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection / Xi Wu in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Title: A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection
Document type: Article/Communication
Authors: Xi Wu; Zhenwei Shi; Zhengxia Zou
Publication year: 2021
Pages: pp 87-104
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] altitude
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] cloud detection
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] Gaofen imagery
[IGN terms] geographic information
[IGN terms] latitude
[IGN terms] longitude
[IGN terms] statistical model
[IGN terms] snow
[IGN terms] Normalized Difference Snow Index
Abstract: (author) Geographic information such as altitude, latitude, and longitude are common but fundamental meta-records in remote sensing image products. In this paper, it is shown that such records provide important priors for cloud and snow detection in remote sensing imagery. The intuition comes from common geographical knowledge, much of which is important but often overlooked. For example, snow is less likely to exist in low-latitude or low-altitude areas, and clouds in different geographic regions may have various visual appearances. Previous cloud and snow detection methods simply ignore such information and perform detection solely based on the image data (band reflectance). Because they neglect these priors, most of these methods struggle to obtain satisfactory performance in complex scenarios (e.g., cloud-snow coexistence). In this paper, a novel neural network called the "Geographic Information-driven Network (GeoInfoNet)" is proposed for cloud and snow detection. In addition to the image data, the model integrates geographic information at both the training and detection phases. A "geographic information encoder" is specially designed, which encodes the altitude, latitude, and longitude of the imagery into a set of auxiliary maps and then feeds them to the detection network. The proposed network can be trained in an end-to-end fashion, with dense robust features extracted and fused. A new dataset for cloud and snow detection called "Levir_CS" is built, which contains 4,168 Gaofen-1 satellite images with corresponding geographical records and is over 20× larger than other datasets in this field. On "Levir_CS", experiments show that the method achieves 90.74% intersection over union for cloud and 78.26% intersection over union for snow. It outperforms other state-of-the-art cloud and snow detection methods by a large margin.
Feature visualizations also show that the method learns important priors that accord with common geographic knowledge. The proposed dataset and the code of GeoInfoNet are available at https://github.com/permanentCH5/GeoInfoNet.
Record number: A2021-209
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.01.023
Online publication date: 22/02/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.01.023
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97187
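The abstract describes an encoder that turns altitude, latitude, and longitude into auxiliary maps fed alongside the image bands. A minimal sketch of that idea, assuming constant per-scene values and invented normalisation constants (the real GeoInfoNet encoder is more elaborate):

```python
import numpy as np

def encode_geo(image, altitude, latitude, longitude):
    """Stack constant-valued altitude/latitude/longitude planes onto an
    H x W x C image, mimicking the auxiliary-map idea. The divisors are
    illustrative normalisation assumptions, not the paper's values.
    """
    h, w, _ = image.shape
    planes = [np.full((h, w), altitude / 9000.0),   # rough max elevation (m)
              np.full((h, w), latitude / 90.0),
              np.full((h, w), longitude / 180.0)]
    return np.concatenate([image, np.stack(planes, axis=-1)], axis=-1)

# A 4x4 three-band toy scene tagged with its geographic meta-records.
img = np.zeros((4, 4, 3))
x = encode_geo(img, altitude=4500.0, latitude=45.0, longitude=-90.0)
print(x.shape)  # (4, 4, 6): three bands plus three geographic planes
```

A detection network then convolves over all six channels, letting it condition its cloud/snow decision on where the scene was acquired.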
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021). - pp 87-104 [article]
Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2021041 | SL | Journal | Centre de documentation | Journals room | Available
081-2021043 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Other documents in this category:
- Graph convolutional networks by architecture search for PolSAR image classification / Hongying Liu in Remote sensing, vol 13 n° 7 (April-1 2021)
- Machine learning and geodesy: A survey / Jemil Butt in Journal of applied geodesy, vol 15 n° 2 (April 2021)
- Parsing of urban facades from 3D point clouds based on a novel multi-view domain / Wei Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
- Precipitable water vapor fusion based on a generalized regression neural network / Bao Zhang in Journal of geodesy, vol 95 n° 4 (April 2021)
- Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
- Scene classification of remotely sensed images via densely connected convolutional neural networks and an ensemble classifier / Qimin Cheng in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
- A shape transformation-based dataset augmentation framework for pedestrian detection / Zhe Chen in International journal of computer vision, vol 129 n° 4 (April 2021)
- Unsupervised pansharpening based on self-attention mechanism / Ying Qu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
- Urban heat island formation in greater Cairo: Spatio-temporal analysis of daytime and nighttime land surface temperatures along the urban–rural gradient / Darshana Athukorala in Remote sensing, vol 13 n° 7 (April-1 2021)
- Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
- Complémentarité des images optiques Sentinel-2 avec les images radar Sentinel-1 et ALOS-PALSAR-2 pour la cartographie de la couverture végétale : application à une aire protégée et ses environs au Nord-Ouest du Maroc via trois algorithmes d'apprentissage automatique / Siham Acharki in Revue Française de Photogrammétrie et de Télédétection, n° 223 (mars - décembre 2021)
- Analysis of plot-level volume increment models developed from machine learning methods applied to an uneven-aged mixed forest / Seyedeh Kosar Hamidi in Annals of Forest Science, vol 78 n° 1 (March 2021)
- Application of a multi-layer artificial neural network in a 3-D global electron density model using the long-term observations of COSMIC, Fengyun-3C, and Digisonde / Li Wang in Space weather, vol 19 n° 3 (March 2021)
- Detection of subpixel targets on hyperspectral remote sensing imagery based on background endmember extraction / Xiaorui Song in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
- Dynamic human body reconstruction and motion tracking with low-cost depth cameras / Kangkan Wang in The Visual Computer, vol 37 n° 3 (March 2021)
- Feature detection and description for image matching: from hand-crafted design to deep learning / Lin Chen in Geo-spatial Information Science, vol 24 n° 1 (March 2021)
- A graph-based semi-supervised approach to classification learning in digital geographies / Pengyuan Liu in Computers, Environment and Urban Systems, vol 86 (March 2021)
- Graph convolutional autoencoder model for the shape coding and cognition of buildings in maps / Xiongfeng Yan in International journal of geographical information science IJGIS, vol 35 n° 3 (March 2021)
- Learning from GPS trajectories of floating car for CNN-based urban road extraction with high-resolution satellite imagery / Ju Zhang in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 3 (March 2021)
- Lightweight convolutional neural network-based pedestrian detection and re-identification in multiple scenarios / Xiao Ke in Machine Vision and Applications, vol 32 n° 2 (March 2021)