Description
This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state-of-the-art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website.
Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism.
This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as fields such as linguistics, logic, philosophy, and psychology.