"authoritative, funny, and concise"
Steven Strogatz, Professor of Applied Mathematics, Cornell University.
The brain has always had a fundamental advantage over conventional computers: it can learn. However, a new generation of artificial intelligence algorithms, in the form of deep neural networks, is rapidly eliminating that advantage. Deep neural networks rely on adaptive algorithms to master a wide variety of tasks, including cancer diagnosis, object recognition, speech recognition, robotic control, chess, poker, backgammon and Go, at superhuman levels of performance.
In this richly illustrated book, key neural network learning algorithms are explained informally first, followed by detailed mathematical analyses. Topics include both historically important neural networks (perceptrons, Hopfield nets, Boltzmann machines and backpropagation networks) and modern deep neural networks (variational autoencoders, convolutional networks, generative adversarial networks, and reinforcement learning using SARSA and Q-learning). Online computer programs, collated from open source repositories, give hands-on experience of neural networks, and PowerPoint slides provide support for teaching. Written in an informal style, with a comprehensive glossary, tutorial appendices (e.g. Bayes' theorem, maximum likelihood estimation), and a list of further reading, this is an ideal introduction to the algorithmic engines of modern artificial intelligence.
The Emperor's New AI? (Blog)
A Very Short History of Artificial Neural Networks (Blog)
Published 1 April 2019.
ISBN: 9780956372819 (Paperback).
ISBN: 9780956372826 (Hardback).