Descripteur
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal > classification par réseau neuronal convolutif
Documents disponibles dans cette catégorie (299)
A BiLSTM-CNN model for predicting users’ next locations based on geotagged social media / Yi Bao in International journal of geographical information science IJGIS, vol 35 n° 4 (April 2021)
[article]
Titre : A BiLSTM-CNN model for predicting users’ next locations based on geotagged social media Type de document : Article/Communication Auteurs : Yi Bao, Auteur ; Zhou Huang, Auteur ; Linna Li, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 639 - 660 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Analyse spatiale
[Termes IGN] analyse de groupement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données spatiotemporelles
[Termes IGN] géopositionnement
[Termes IGN] graphe
[Termes IGN] modèle de simulation
[Termes IGN] point d'intérêt
[Termes IGN] réseau social
[Termes IGN] service fondé sur la position
[Termes IGN] utilisateur
[Termes IGN] Wuhan (Chine)
Résumé : (auteur) Location prediction based on spatio-temporal footprints in social media is instrumental to various applications, such as travel behavior studies, crowd detection, traffic control, and location-based service recommendation. In this study, we propose a model that uses geotags of social media to predict the potential area containing users’ next locations. In the model, we use the HiSpatialCluster algorithm to identify clustering areas (CAs) from check-in points. A CA is the basic spatial unit for predicting the potential area containing a user’s next location. Then, we use LINE (Large-scale Information Network Embedding) to obtain the representation vector of each CA. Finally, we apply a BiLSTM-CNN (Bidirectional Long Short-Term Memory-Convolutional Neural Network) for location prediction. The results show that the proposed ensemble model outperforms a single LSTM or CNN model. In a case study that identifies 100 CAs from Weibo check-ins collected in Wuhan, China, the Top-5 predicted areas containing next locations reach 80% accuracy. This high accuracy is of great value for recommendation and prediction at the areal-unit level. Numéro de notice : A2021-268 Affiliation des auteurs : non IGN Thématique : GEOMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/13658816.2020.1808896 Date de publication en ligne : 26/08/2020 En ligne : https://doi.org/10.1080/13658816.2020.1808896 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97324
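The pipeline described in the abstract (cluster check-ins into CAs, rank candidate CAs, score Top-k accuracy) can be sketched in miniature. This is an illustrative stand-in, not the authors' code: coarse grid cells replace HiSpatialCluster, and a visit-frequency baseline replaces the BiLSTM-CNN; all function names are hypothetical.

```python
from collections import Counter

def grid_cells(checkins, cell=0.01):
    """Map (lon, lat) check-ins to coarse grid cells -- a crude stand-in
    for the HiSpatialCluster step that derives clustering areas (CAs)."""
    return [(int(lon / cell), int(lat / cell)) for lon, lat in checkins]

def rank_next_cas(history):
    """Frequency baseline: rank CAs by how often the user visited them."""
    return [ca for ca, _ in Counter(history).most_common()]

def top_k_accuracy(true_next, ranked, k=5):
    """Share of test cases whose true next CA is within the top-k ranking,
    i.e. the metric behind the reported Top-5 accuracy."""
    hits = sum(t in r[:k] for t, r in zip(true_next, ranked))
    return hits / len(true_next)
```

A learned model would replace `rank_next_cas` with per-step predictions, but the Top-k evaluation stays the same.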
in International journal of geographical information science IJGIS > vol 35 n° 4 (April 2021) . - pp 639 - 660 [article]
A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery Type de document : Article/Communication Auteurs : Lucas Prado Osco, Auteur ; Mauro Dos Santos de Arruda, Auteur ; Diogo Nunes Gonçalves, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 1 - 17 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] carte agricole
[Termes IGN] Citrus sinensis
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] comptage
[Termes IGN] cultures
[Termes IGN] détection d'objet
[Termes IGN] extraction de la végétation
[Termes IGN] gestion durable
[Termes IGN] image captée par drone
[Termes IGN] maïs (céréale)
[Termes IGN] rendement agricole
Résumé : (auteur) Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAVs) is a cost-effective option for capturing images covering croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in one step. Thus, developing an architecture capable of simultaneously extracting individual plants and plantation-rows from UAV images remains an important demand to support the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation-rows while counting their plants, considering highly-dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e. recently planted and mature plants) and in (b) a Citrus orchard (Citrus sinensis Pera). The two datasets characterize different plant-density scenarios, in different locations, with different types of crops, and from different sensors and dates. This scheme was used to prove the robustness of the proposed approach, allowing a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where the information obtained within the plantation-row is updated in the plant detection branch and fed back to the row branch; both are then refined by a Multi-Stage Refinement method.
In the corn plantation datasets (with both growth phases – young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856, and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to the results from other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated with the same task and dataset. For the plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task in the citrus orchard dataset. It returned an MAE equal to 1.409 citrus-trees per patch, MRE of 0.0615, precision of 0.922, recall of 0.911, and F-measure of 0.965. For the citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems. Numéro de notice : A2021-205 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2021.01.024 Date de publication en ligne : 13/02/2021 En ligne : https://doi.org/10.1016/j.isprsjprs.2021.01.024 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97171
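The evaluation metrics quoted above (MAE, MRE, precision/recall, F-measure) are standard counting and detection metrics and can be computed as below. Note that the abstract's rounded, averaged values will not reproduce exactly from the quoted precision/recall pairs; this sketch only shows the metric definitions, not the authors' evaluation code.

```python
def counting_metrics(predicted, actual):
    """Per-patch plant-count errors: mean absolute error (MAE) and
    mean relative error (MRE), relative to the true count per patch."""
    n = len(actual)
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    mre = sum(abs(p - a) / a for p, a in zip(predicted, actual)) / n
    return mae, mre

def f_measure(precision, recall):
    """F-measure (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```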
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 1 - 17 [article]
Exemplaires (3)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2021041 | SL | Revue | Centre de documentation | Revues en salle | Disponible
081-2021043 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2021042 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt
Detecting ground deformation in the built environment using sparse satellite InSAR data with a convolutional neural network / Nantheera Anantrasirichai in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
[article]
Titre : Detecting ground deformation in the built environment using sparse satellite InSAR data with a convolutional neural network Type de document : Article/Communication Auteurs : Nantheera Anantrasirichai, Auteur ; Juliet Biggs, Auteur ; Krisztina Kelevitz, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 2940 - 2950 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] apprentissage automatique
[Termes IGN] bâti
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] covariance
[Termes IGN] déformation de la croute terrestre
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] effet atmosphérique
[Termes IGN] image radar moirée
[Termes IGN] interférométrie par radar à antenne synthétique
[Termes IGN] interpolation spatiale
[Termes IGN] matrice
[Termes IGN] optimisation (mathématiques)
[Termes IGN] représentation parcimonieuse
[Termes IGN] Royaume-Uni
[Termes IGN] zone urbaine
Résumé : (auteur) The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services. However, simple analysis techniques like thresholding cannot detect and classify complex deformation signals reliably, making it a challenge to provide usable information to a broad range of nonexpert stakeholders. Here, we explore the applicability of deep learning approaches by adapting a pretrained convolutional neural network (CNN) to detect deformation in a national-scale velocity field. For our proof of concept, we focus on the U.K., where previously identified deformation is associated with coal mining, groundwater withdrawal, landslides, and tunneling. The sparsity of measurement points and the presence of spike noise make this a challenging application for deep learning networks, which involve calculations of the spatial convolution between images. Moreover, insufficient ground truth data exist to construct a balanced training data set, and the deformation signals are slower and more localized than in previous applications. We propose three enhancement methods to tackle these problems: 1) spatial interpolation with modified matrix completion; 2) a synthetic training data set based on the characteristics of the real U.K. velocity map; and 3) enhanced overwrapping techniques. Using velocity maps spanning 2015–2019, our framework detects several areas of coal mining subsidence, uplift due to dewatering, slate quarries, landslides, and tunnel engineering works. The results demonstrate the potential applicability of the proposed framework to the development of automated ground motion analysis systems.
Numéro de notice : A2021-283 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1007/s12518-020-00323-6 Date de publication en ligne : 31/08/2020 En ligne : https://doi.org/10.1007/s12518-020-00323-6 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97391
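The first enhancement named in the abstract, spatial interpolation of the sparse velocity field, can be illustrated with a simple scheme. The sketch below uses inverse-distance weighting as a stand-in for the paper's modified matrix completion (which it is not); the function name and grid layout are assumptions for illustration.

```python
def idw_interpolate(known, grid_w, grid_h, power=2):
    """Fill a dense velocity grid from sparse InSAR measurement points by
    inverse-distance weighting -- a simple stand-in for the paper's
    'spatial interpolation with modified matrix completion' step.

    known: dict mapping (x, y) pixel coordinates to measured velocities.
    Returns a grid_h x grid_w list of lists (row-major, grid[y][x])."""
    grid = [[0.0] * grid_w for _ in range(grid_h)]
    for y in range(grid_h):
        for x in range(grid_w):
            num = den = 0.0
            exact = None
            for (px, py), v in known.items():
                d2 = (px - x) ** 2 + (py - y) ** 2
                if d2 == 0:
                    exact = v  # keep measured pixels untouched
                    break
                w = 1.0 / d2 ** (power / 2)
                num += w * v
                den += w
            grid[y][x] = exact if exact is not None else num / den
    return grid
```

A densified grid like this is what makes the spatial convolutions of a CNN meaningful despite the sparse measurement points.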
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 4 (April 2021) . - pp 2940 - 2950 [article]
A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection / Xi Wu in ISPRS Journal of photogrammetry and remote sensing, vol 174 (April 2021)
[article]
Titre : A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection Type de document : Article/Communication Auteurs : Xi Wu, Auteur ; Zhenwei Shi, Auteur ; Zhengxia Zou, Auteur Année de publication : 2021 Article en page(s) : pp 87 - 104 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] altitude
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection des nuages
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion de données
[Termes IGN] image Gaofen
[Termes IGN] information géographique
[Termes IGN] latitude
[Termes IGN] longitude
[Termes IGN] modèle statistique
[Termes IGN] neige
[Termes IGN] Normalized Difference Snow Index
Résumé : (auteur) Geographic information such as altitude, latitude, and longitude are common but fundamental meta-records in remote sensing image products. In this paper, it is shown that such records provide important priors for cloud and snow detection in remote sensing imagery. The intuition comes from common geographical knowledge, much of which is important but often overlooked. For example, it is generally known that snow is less likely to exist in low-latitude or low-altitude areas, and clouds in different geographic regions may have various visual appearances. Previous cloud and snow detection methods simply ignore such information and perform detection solely based on the image data (band reflectance). Due to the neglect of such priors, most of these methods struggle to obtain satisfactory performance in complex scenarios (e.g., cloud-snow coexistence). In this paper, a novel neural network called “Geographic Information-driven Network (GeoInfoNet)” is proposed for cloud and snow detection. In addition to the image data, the model integrates geographic information at both the training and detection phases. A “geographic information encoder” is specially designed, which encodes the altitude, latitude, and longitude of the imagery into a set of auxiliary maps and then feeds them to the detection network. The proposed network can be trained in an end-to-end fashion, with dense robust features extracted and fused. A new dataset called “Levir_CS” for cloud and snow detection is built, which contains 4,168 Gaofen-1 satellite images and corresponding geographical records, and is over 20× larger than other datasets in this field. On “Levir_CS”, experiments show that the method achieves 90.74% intersection over union for cloud and 78.26% intersection over union for snow. It outperforms other state-of-the-art cloud and snow detection methods by a large margin. Feature visualizations also show that the method learns important priors that agree with common sense. The proposed dataset and the code of GeoInfoNet are available at https://github.com/permanentCH5/GeoInfoNet. Numéro de notice : A2021-209 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2021.01.023 Date de publication en ligne : 22/02/2021 En ligne : https://doi.org/10.1016/j.isprsjprs.2021.01.023 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97187
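The core idea of feeding geographic meta-records to the network alongside the spectral bands can be sketched as below. This is a naive illustration, not GeoInfoNet's actual encoder (which is learned end-to-end); the normalization ranges and function name are assumptions.

```python
def encode_geo_channels(bands, altitude, latitude, longitude):
    """Append per-pixel auxiliary maps for altitude, latitude, and
    longitude to the spectral bands, mimicking the role of GeoInfoNet's
    'geographic information encoder' (the real encoder is learned).

    bands: list of channels, each an h x w list of lists of reflectances."""
    h, w = len(bands[0]), len(bands[0][0])
    # Normalize the scalars to [0, 1] with fixed, assumed ranges.
    aux = [
        [[min(max(altitude / 9000.0, 0.0), 1.0)] * w for _ in range(h)],
        [[(latitude + 90.0) / 180.0] * w for _ in range(h)],
        [[(longitude + 180.0) / 360.0] * w for _ in range(h)],
    ]
    return bands + aux
```

A detection network consuming this stack can then learn, e.g., that the snow class is implausible when the altitude and latitude channels are both low.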
in ISPRS Journal of photogrammetry and remote sensing > vol 174 (April 2021) . - pp 87 - 104 [article]
Graph convolutional networks by architecture search for PolSAR image classification / Hongying Liu in Remote sensing, vol 13 n° 7 (April-1 2021)
[article]
Titre : Graph convolutional networks by architecture search for PolSAR image classification Type de document : Article/Communication Auteurs : Hongying Liu, Auteur ; Derong Xu, Auteur ; Tianwen Zhu, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : n° 1404 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] apprentissage profond
[Termes IGN] bande L
[Termes IGN] classification par nuées dynamiques
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] classification semi-dirigée
[Termes IGN] échantillon
[Termes IGN] graphe
[Termes IGN] image AIRSAR
[Termes IGN] image radar moirée
[Termes IGN] noeud
[Termes IGN] polarimétrie radar
[Termes IGN] réseau neuronal de graphes
Résumé : (auteur) Classification of polarimetric synthetic aperture radar (PolSAR) images has achieved good results thanks to the excellent fitting ability of neural networks with a large number of training samples. However, the performance of most convolutional neural networks (CNNs) degrades dramatically when only a few labeled training samples are available. As one well-known class of semi-supervised learning methods, graph convolutional networks (GCNs) have recently gained much attention for addressing the classification problem with only a few labeled samples. As the number of layers in the network grows, the parameters increase dramatically, and it is challenging to determine an optimal architecture manually. In this paper, we propose a neural architecture search-based GCN (ASGCN) for the classification of PolSAR images. We construct a novel graph whose nodes combine both the physical features and the spatial relations between pixels or samples to represent the image. We then build a new search space whose components are empirically selected from several graph neural networks, and apply differentiable architecture search to construct our ASGCN. Moreover, to address the training of large-scale images, we present a new weighted mini-batch algorithm that reduces memory consumption and ensures a balanced sample distribution, and we analyze and compare it with other similar training strategies. Experiments on several real-world PolSAR datasets show that our method improves the overall accuracy by as much as 3.76% over state-of-the-art methods.
Numéro de notice : A2021-350 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.3390/rs13071404 Date de publication en ligne : 06/04/2021 En ligne : https://doi.org/10.3390/rs13071404 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97600
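The "weighted mini-batch" idea mentioned in the abstract, drawing batches so that the class distribution stays balanced, can be sketched with inverse-frequency sampling weights. This is a generic sketch under that interpretation, not the paper's exact algorithm; the function name and signature are hypothetical.

```python
import random
from collections import Counter

def balanced_batch(samples, labels, batch_size, seed=0):
    """Draw mini-batch indices with inverse-frequency weights so that each
    class contributes roughly equally, however imbalanced the label list is.
    A simple sketch of the 'weighted mini-batch' idea from the abstract."""
    counts = Counter(labels)
    weights = [1.0 / counts[y] for y in labels]  # each class sums to 1
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.choices(range(len(samples)), weights=weights, k=batch_size)
```

With 90% of samples in one class, unweighted sampling would make the minority class nearly invisible in a batch; here its expected share is 50%.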
in Remote sensing > vol 13 n° 7 (April-1 2021) . - n° 1404 [article]
Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
Scene classification of remotely sensed images via densely connected convolutional neural networks and an ensemble classifier / Qimin Cheng in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 4 (April 2021)
A graph-based semi-supervised approach to classification learning in digital geographies / Pengyuan Liu in Computers, Environment and Urban Systems, vol 86 (March 2021)
Learning from GPS trajectories of floating car for CNN-based urban road extraction with high-resolution satellite imagery / Ju Zhang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 3 (March 2021)
Lightweight convolutional neural network-based pedestrian detection and re-identification in multiple scenarios / Xiao Ke in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Pan-sharpening via multiscale dynamic convolutional neural network / Jianwen Hu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 3 (March 2021)
PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Recognition of varying size scene images using semantic analysis of deep activation maps / Shikha Gupta in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Robust unsupervised small area change detection from SAR imagery using deep learning / Xinzheng Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
Toward a yearly country-scale CORINE land-cover map without using images: A map translation approach / Luc Baudoux in Remote sensing, vol 13 n° 6 (March 2021)