Descriptor
IGN descriptor terms > computer science > artificial intelligence > artificial neural network > convolutional neural network

convolutional neural network
Learning sequential slice representation with an attention-embedding network for 3D shape recognition and retrieval in MLS point clouds / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
[article]
Title: Learning sequential slice representation with an attention-embedding network for 3D shape recognition and retrieval in MLS point clouds
Document type: Article/Communication
Authors: Zhipeng Luo; Di Liu; Jonathan Li; et al.
Year of publication: 2020
Pages: pp. 147-163
General note: Bibliography
Language: English (eng)
Descriptors:
[IGN subject headings] Lasergrammetry
[IGN descriptor terms] deep learning
[IGN descriptor terms] laser scanning
[IGN descriptor terms] laser data
[IGN descriptor terms] 3D localized data
[IGN descriptor terms] graph
[IGN descriptor terms] pattern recognition
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] road network
[IGN descriptor terms] point cloud
[IGN descriptor terms] mobile laser scanning
Abstract: (Author) The representation of 3D data is a key issue for shape analysis, yet most existing representations suffer from high computational cost and loss of structural information. This paper presents a novel sequential slice representation with an attention-embedding network, named RSSNet, for 3D point cloud recognition and retrieval in road environments. RSSNet has two main branches. First, a sequential slice module maps disordered 3D point clouds to an ordered sequence of shallow feature vectors, and a gated recurrent unit (GRU) module encodes the spatial and content information of these sequential vectors. The second branch consists of a key-point-based graph convolutional network (GCN) with an embedded attention strategy that fuses the sequential and global features to refine structural discriminability. Three datasets were used to evaluate the proposed method: one acquired by our mobile laser scanning (MLS) system and two public datasets (KITTI and Sydney Urban Objects). Experimental results indicate that the proposed method outperforms state-of-the-art recognition and retrieval methods. RSSNet achieved recognition rates of 98.08%, 95.77% and 70.83% on the three datasets, respectively; for the retrieval task, it obtained mAP values of 95.56%, 87.16% and 69.99%, respectively.
Record number: A2020-064
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.01.003
Online publication date: 22/01/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.01.003
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94582
in ISPRS Journal of photogrammetry and remote sensing > vol 161 (March 2020), pp. 147-163 [article]
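The slice-then-GRU pipeline described in the abstract above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the slicing axis (z), the 7-dimensional shallow per-slice features, and the single-layer GRU cell are all hypothetical choices made for the sketch.

```python
import numpy as np

def slice_features(points, n_slices=8):
    """Map an unordered point cloud (N, 3) to an ordered sequence of
    shallow per-slice feature vectors by slicing along z (a hypothetical
    stand-in for the paper's sequential slice module)."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_slices + 1)
    feats = []
    for i in range(n_slices):
        sl = points[(z >= edges[i]) & (z < edges[i + 1])]
        if len(sl) == 0:
            feats.append(np.zeros(7))
        else:
            feats.append(np.concatenate([
                sl.mean(axis=0),                  # centroid (3)
                sl.max(axis=0) - sl.min(axis=0),  # extent (3)
                [len(sl) / len(points)],          # point share (1)
            ]))
    return np.stack(feats)  # (n_slices, 7), ordered bottom to top

def gru_encode(seq, params):
    """Encode the ordered slice sequence with a single GRU cell; the
    final hidden state serves as a fixed-size shape descriptor."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    h = np.zeros(Wz.shape[0])
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    for x in seq:
        z = sig(Wz @ x + Uz @ h)             # update gate
        r = sig(Wr @ x + Ur @ h)             # reset gate
        n = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
        h = (1.0 - z) * h + z * n
    return h

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
hidden, d = 16, 7
params = tuple(rng.normal(scale=0.1, size=s)
               for s in [(hidden, d), (hidden, hidden)] * 3)
descriptor = gru_encode(slice_features(pts), params)
print(descriptor.shape)  # (16,)
```

Ordering the slices before encoding is the point of the exercise: the GRU's recurrence only carries spatial meaning if the sequence it consumes has one.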
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020031 | SL | Journal | Documentation centre | Journals room | Available
081-2020033 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2020032 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery / Lucas Prado Osco in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery
Document type: Article/Communication
Authors: Lucas Prado Osco; Mauro Dos Santos de Arruda; José Marcato Junior; et al.
Year of publication: 2020
Pages: pp. 97-106
General note: Bibliography
Language: English (eng)
Descriptors:
[IGN subject headings] Optical image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] Brazil
[IGN descriptor terms] confidence map
[IGN descriptor terms] tree detection
[IGN descriptor terms] geolocation
[IGN descriptor terms] UAV imagery
[IGN descriptor terms] multispectral imagery
[IGN descriptor terms] vegetation inventory
[IGN descriptor terms] Citrus (genus)
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] orchard
Abstract: (Author) Visual inspection has been a common practice for determining the number of plants in orchards, a labor-intensive and time-consuming task. Deep learning algorithms have demonstrated great potential for counting plants in unmanned aerial vehicle (UAV)-borne sensor imagery. This paper presents a convolutional neural network (CNN) approach to estimating the number of citrus trees in highly dense orchards from UAV multispectral images. The method estimates a dense map giving the confidence that a plant occurs at each pixel. A flight was conducted over an orchard of Valencia orange trees planted in a linear fashion, using a multispectral camera with four bands (green, red, red-edge and near-infrared), and the approach was assessed on the individual bands and their combinations. A total of 37,353 trees, recorded as point features, were used to evaluate the method. Three values of σ (0.5, 1.0 and 1.5) were used to generate different ground-truth confidence maps, and different numbers of refinement stages (T) were used to refine the predicted confidence map. To evaluate robustness, we compared the method with two state-of-the-art object detection CNNs (Faster R-CNN and RetinaNet). The results show the best performance with the combination of green, red and near-infrared bands, achieving a Mean Absolute Error (MAE), Mean Square Error (MSE), R2 and Normalized Root-Mean-Squared Error (NRMSE) of 2.28, 9.82, 0.96 and 0.05, respectively. This band combination, with σ = 1 and T = 8 stages, resulted in an R2, MAE, Precision, Recall and F1 of 0.97, 2.05, 0.95, 0.96 and 0.95, respectively. Our method significantly outperforms both object detection methods for counting and geolocation. We conclude that the proposed CNN approach to estimating the number and geolocation of citrus trees in high-density orchards is an effective replacement for the traditional visual inspection method.
Record number: A2020-045
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.12.010
Online publication date: 18/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.12.010
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94525
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020), pp. 97-106 [article]
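The σ-controlled ground-truth confidence map described in the abstract above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: tree positions in pixel coordinates, the max-combination of Gaussians, and the local-maxima counting with a 0.5 threshold are all assumptions made for the sketch.

```python
import numpy as np

def confidence_map(trees, shape, sigma=1.0):
    """Ground-truth confidence map: each tree position contributes a 2D
    Gaussian; combining with max keeps every peak at exactly 1.0."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cmap = np.zeros(shape)
    for r, c in trees:
        g = np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2.0 * sigma ** 2))
        cmap = np.maximum(cmap, g)
    return cmap

def count_peaks(cmap, thresh=0.5):
    """Count plants as local maxima above a confidence threshold,
    comparing each pixel against its 8 neighbours."""
    h, w = cmap.shape
    p = np.pad(cmap, 1, constant_values=-np.inf)
    core = p[1:-1, 1:-1]
    is_max = np.ones((h, w), dtype=bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            is_max &= core >= p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
    return int(np.sum(is_max & (cmap > thresh)))

cmap = confidence_map([(5, 5), (15, 18)], shape=(24, 24), sigma=1.0)
print(count_peaks(cmap))  # 2
```

A larger σ spreads each plant's confidence over more pixels, which makes the target easier to regress but blurs neighbouring trees together in dense plantings; this is the trade-off the paper's σ sweep (0.5, 1.0, 1.5) probes.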
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020021 | SL | Journal | Documentation centre | Journals room | Available
081-2020023 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2020022 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Transferring deep learning models for cloud detection between Landsat-8 and Proba-V / Gonzalo Mateo-García in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020)
[article]
Title: Transferring deep learning models for cloud detection between Landsat-8 and Proba-V
Document type: Article/Communication
Authors: Gonzalo Mateo-García; Valero Laparra; Dan López-Puigdollers; Luis Gómez-Chova
Year of publication: 2020
Pages: pp. 1-17
General note: Bibliography
Language: English (eng)
Descriptors:
[IGN subject headings] Optical image processing
[IGN descriptor terms] transfer learning
[IGN descriptor terms] deep learning
[IGN descriptor terms] data conversion
[IGN descriptor terms] cloud detection
[IGN descriptor terms] data sampling
[IGN descriptor terms] Landsat-8 imagery
[IGN descriptor terms] multispectral imagery
[IGN descriptor terms] PROBA imagery
[IGN descriptor terms] dataset
[IGN descriptor terms] mask
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] point thresholding
Abstract: (Author) Accurate cloud detection algorithms are mandatory for analyzing the large streams of data coming from the different optical Earth observation satellites, and deep learning (DL) based schemes provide very accurate cloud detection models. However, training these models for a given sensor requires large datasets of manually labeled samples, which are very costly, or even impossible to create when the satellite has not yet been launched. In this work, we present an approach that exploits manually labeled datasets from one satellite to train deep learning models for cloud detection that can be applied (or transferred) to other satellites. We take into account the physical properties of the acquired signals and propose a simple transfer learning approach using the Landsat-8 and Proba-V sensors, whose images have different but similar spatial and spectral characteristics. Three types of experiments demonstrate that transfer learning can work in both directions: (a) from Landsat-8 to Proba-V, where models trained only with Landsat-8 data produce cloud masks 5 points more accurate than the current operational Proba-V cloud masking method; (b) from Proba-V to Landsat-8, where models trained only on Proba-V data reach an accuracy similar to the operational FMask on the publicly available Biome dataset (87.79–89.77% vs 88.48%); and (c) jointly from Proba-V and Landsat-8 to Proba-V, where using both data sources jointly increases accuracy by 1–10 points when few labeled Proba-V images are available. These results highlight that, by taking advantage of existing publicly available labeled cloud masking datasets, accurate deep learning cloud detection models can be created for new satellites without the burden of collecting and labeling a large image dataset.
Record number: A2020-043
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.11.024
Online publication date: 10/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.11.024
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94522
in ISPRS Journal of photogrammetry and remote sensing > vol 160 (February 2020), pp. 1-17 [article]
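The cross-sensor transfer described above rests on matching the two sensors' spectral bands before reusing a model. The sketch below illustrates that step together with the pixel-accuracy metric used in the comparisons; the band indices, function names and toy labels are assumptions for illustration, not the authors' code.

```python
import numpy as np

def landsat8_to_probav_like(l8_stack):
    """Select the four Landsat-8 bands spectrally closest to Proba-V's
    blue, red, NIR and SWIR channels (0-based indices 1, 3, 4, 5 of an
    11-band stack; an assumption for this sketch), so a cloud model
    trained on one sensor can be fed data from the other."""
    return l8_stack[:, :, [1, 3, 4, 5]]

def mask_accuracy(pred, truth):
    """Pixel accuracy of a binary cloud mask against reference labels."""
    return float(np.mean(pred.astype(bool) == truth.astype(bool)))

rng = np.random.default_rng(1)
stack = rng.random((64, 64, 11))        # toy 11-band Landsat-8-like scene
probav_like = landsat8_to_probav_like(stack)
print(probav_like.shape)                # (64, 64, 4)
truth = stack[:, :, 4] > 0.5            # toy binary "cloud" labels
print(mask_accuracy(truth, truth))      # 1.0
```

In the paper the adaptation is physics-based rather than a bare band selection, but the shape of the pipeline is the same: harmonize the inputs first, then the labeled data of either sensor can train a model evaluated on the other.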
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020021 | SL | Journal | Documentation centre | Journals room | Available
081-2020023 | DEP-RECP | Journal | MATIS | Unit deposit | Not for loan
081-2020022 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Title: Remote sensing based building extraction
Document type: Monograph
Authors: Mohammad Awrangjeb; Xiangyun Hu; Bisheng Yang; Jiaojiao Tian
Publisher: Basel [Switzerland]: Multidisciplinary Digital Publishing Institute (MDPI)
Year of publication: 2020
Extent: 442 p.
ISBN/ISSN/EAN: 978-3-03928-383-5
General note: Bibliography
Language: English (eng)
Descriptors:
[IGN subject headings] Remote sensing applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] building detection
[IGN descriptor terms] lidar data
[IGN descriptor terms] 3D localized data
[IGN descriptor terms] high-resolution imagery
[IGN descriptor terms] 3D building reconstruction
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] point cloud
Abstract: (Publisher) Building extraction from remote sensing data plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and several other geospatial applications. Even though significant research has been carried out for more than two decades, the success of automatic building extraction and modeling is still largely impeded by scene complexity, incomplete cue extraction, and the sensor dependency of data. Most recently, deep neural networks (DNNs) have been widely applied to reach high classification accuracy in various areas, including land-cover and land-use classification, and intelligent, innovative algorithms are needed for automatic building extraction and modeling to succeed. This Special Issue focuses on newly developed methods for classification and feature extraction from remote sensing data for automatic building extraction and 3D modeling.
Record number: 26305
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Monograph
DOI: 10.3390/books978-3-03928-383-5
Online publication date: 07/04/2020
Online: https://doi.org/10.3390/books978-3-03928-383-5
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95064
Title: SpiNNaker: A spiking neural network architecture
Document type: Monograph
Authors: Steve Furber, Scientific editor; Petrut Bogdan, Scientific editor
Publisher: Boston, Delft: Now Publishers
Year of publication: 2020
Extent: 352 p.
Format: 16 x 24 cm
ISBN/ISSN/EAN: 978-1-68083-652-3
General note: Bibliography
Language: English (eng)
Descriptors:
[IGN subject headings] Artificial intelligence
[IGN descriptor terms] deep learning
[IGN descriptor terms] brain
[IGN descriptor terms] software tool
[IGN descriptor terms] stochastic programming
[IGN descriptor terms] chip
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] information processing system
[IGN descriptor terms] computer vision
Abstract: (Publisher) Twenty years in conception and fifteen in construction, the SpiNNaker project has delivered the world's largest neuromorphic computing platform, incorporating over a million ARM mobile phone processors and capable of modelling spiking neural networks at the scale of a mouse brain in biological real time. This machine, hosted at the University of Manchester in the UK, is freely available under the auspices of the EU Flagship Human Brain Project. This book tells the story of the origins of the machine, its development and its deployment, and the immense software development effort that has gone into making it openly available and accessible to researchers and students the world over. It also presents exemplar applications, from ‘Talk’, a SpiNNaker-controlled robotic exhibit at the Manchester Art Gallery as part of ‘The Imitation Game’, a set of works commissioned in 2016 in honour of Alan Turing, through to a way to solve hard computing problems using stochastic neural networks. The book concludes with a look to the future and the SpiNNaker-2 machine, which is yet to come.
Contents:
1- Origins
2- The SpiNNaker Chip
3- Building SpiNNaker Machines
4- Stacks of Software Stacks
5- Applications - Doing Stuff on the Machine
6- From Activations to Spikes
7- Learning in Neural Networks
8- Creating the Future
Record number: 25978
Author affiliation: non-IGN
Theme: COMPUTER SCIENCE
Nature: Monograph
DOI: 10.1561/9781680836523
Online: http://dx.doi.org/10.1561/9781680836523
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96705

Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia / Neil Flood in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
Addressing overfitting on point cloud classification using Atrous XCRF / Hasan Asy’ari Arief in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
Learning and adapting robust features for satellite image segmentation on heterogeneous data sets / Sina Ghassemi in IEEE Transactions on geoscience and remote sensing, vol 57 n° 9 (September 2019)
Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network / Chunping Qiu in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
CNN-based dense image matching for aerial remote sensing images / Shunping Ji in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
Vehicle detection in aerial images / Michael Ying Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 4 (April 2019)