Author details
Author: Sandro Skansi
Documents available written by this author (1)
Title: Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence
Document type: Monograph
Authors: Sandro Skansi, Author
Publisher: Springer Nature
Publication year: 2018
Extent: 196 p.
Format: 16 x 24 cm
ISBN/ISSN/EAN: 978-3-319-73004-2
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Artificial intelligence
[IGN terms] deep learning
[IGN terms] classification
[IGN terms] coding
[IGN terms] kernel estimation
[IGN terms] covariance matrix
[IGN terms] multilayer perceptron
[IGN terms] Python (programming language)
[IGN terms] logistic regression
[IGN terms] artificial neural network
[IGN terms] convolutional neural network
[IGN terms] cognitive sciences
[IGN terms] probability theory
Abstract: (author) This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website.
Topics and features:
Introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning
Discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network
Examines convolutional neural networks, and the recurrent connections to a feed-forward neural network
Describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning
Presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism
This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as fields such as linguistics, logic, philosophy, and psychology.
Contents note:
1- From Logic to Cognitive Science
2- Mathematical and Computational Prerequisites
3- Machine Learning Basics
4- Feedforward Neural Networks
5- Modifications and Extensions to a Feed-Forward Neural Network
6- Convolutional Neural Networks
7- Recurrent Neural Networks
8- Autoencoders
9- Neural Language Models
10- An Overview of Different Neural Network Architectures
11- Conclusion
Record number: 25787
Author affiliation: non-IGN
Theme: COMPUTER SCIENCE/MATHEMATICS
Nature: Monograph
Online: https://doi.org/10.1007/978-3-319-73004-2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94990