A Quick-Start Guide to Learning Deep Learning (with Video Courses from Top Universities)


Deep_Learning_Route

From Self-Driving Cars to AlphaGo to Language Translation, Deep Learning seems to be everywhere nowadays. While the debate over whether the hype is justified continues, Deep Learning has seen a rapid surge of interest across academia and industry in recent years.

With so much attention on the topic, more and more material has been published recently, from various MOOCs to books to YouTube channels. With such a vast amount of resources at hand, there has never been a better time to learn Deep Learning.

Yet a side effect of such an influx of readily available material is choice overload. With thousands and thousands of resources, which are the ones worth looking at?

Inspired by Haseeb Qureshi's excellent Guide on Learning Blockchain Development, this is my attempt to share the resources I have found throughout my journey. Since I plan to keep updating this guide with newer and better material, I consider it a work in progress: an incomplete guide.

As of now, all the resources in this guide are free, thanks to their authors.

The link to my original blog post can be found here.

Target Audience

  • Anybody who wants to dive deeper into the topic, seeks a career in this field or aspires to gain a theoretical understanding of Deep Learning

Goals of this Guide

  • Give an overview and a sense of direction within the ocean of resources
  • Give a clear and useful path towards learning Deep Learning Theory and Development
  • Give some practical tips along the way on how to maximize your learning experience

Outline

This guide is structured in the following way:

  • Phase 1 : Prerequisites
  • Phase 2 : Deep Learning Fundamentals
  • Phase 3 : Create Something
  • Phase 4 : Dive Deeper
  • Phase X : Keep Learning

How long your learning takes depends on numerous factors, such as your dedication, background and time commitment. Depending on your background and the things you want to learn, feel free to skip to any part of this guide; the linear progression I outline below is just the path I found useful for myself.

Phase 1: Prerequisites

Let me be clear from the beginning: the prerequisites you need depend on the objectives you intend to pursue. The foundations you need to conduct research in Deep Learning differ from what you need to become a practitioner (the two are, of course, not mutually exclusive).

Whether you have no coding experience at all or are already an expert in R, I would still recommend acquiring a working knowledge of Python, as most Deep Learning resources out there will require you to know it.

Coding

While Codecademy is a great way to start coding from the beginning, MIT's 6.0001 lectures are an incredible introduction to the world of computer science. So is CS50, Harvard's famous CS intro course, though it has less of a focus on Python. For people who prefer reading, the interactive online book How To Think Like A Computer Scientist is the way to go.

Math

If you simply want to apply Deep Learning techniques to a problem you face, or gain a high-level understanding of Deep Learning, it is not necessary to know its mathematical underpinnings. But in my experience, Deep Learning frameworks have been significantly easier to understand, and even more rewarding to use, after getting familiar with the theoretical foundations. For that purpose, the basics of Calculus, Linear Algebra and Statistics are extremely useful. Fortunately, there are plenty of great Math resources online.

Here are the most important concepts you should know (a short Python sketch after the list makes them concrete):

  1. Multivariable Calculus
  • Differentiation
  • Chain Rule
  • Partial Derivatives
  2. Linear Algebra
  • Definition of Vectors & Matrices
  • Matrix Operations and Transformations: Addition, Subtraction, Multiplication, Transpose, Inverse
  3. Statistics & Probability
  • Basic ideas like mean and standard deviation
  • Distributions
  • Sampling
  • Bayes' Theorem
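To make these concepts concrete, here is a minimal Python sketch (my own toy examples using NumPy; the functions and numbers are purely illustrative, not taken from any particular course):

```python
import numpy as np

# --- Multivariable calculus: the chain rule, checked numerically ---
# f(x) = sin(x^2)  =>  f'(x) = cos(x^2) * 2x by the chain rule
x = 1.5
analytic = np.cos(x**2) * 2 * x
h = 1e-6
numeric = (np.sin((x + h)**2) - np.sin((x - h)**2)) / (2 * h)  # central difference
print(analytic, numeric)                 # the two values agree closely

# --- Linear algebra: basic matrix operations ---
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)                            # 2x2 identity matrix
print(A + B)                             # addition
print(A @ B)                             # matrix multiplication
print(A.T)                               # transpose
print(np.linalg.inv(A))                  # inverse (A is invertible: det = -2)

# --- Probability: Bayes' Theorem  P(A|B) = P(B|A) * P(A) / P(B) ---
# Toy example: a test with 99% sensitivity and a 5% false-positive rate
# for a condition with 1% prevalence.
p_a = 0.01                               # P(A): prior probability
p_b_given_a = 0.99                       # P(B|A): likelihood
p_b = p_b_given_a * p_a + 0.05 * (1 - p_a)  # P(B) via total probability
print(p_b_given_a * p_a / p_b)           # P(A|B) ~= 0.167, despite the "99% accurate" test
```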

That being said, it is also possible to learn these concepts concurrently with Phase 2, looking up the Math whenever you need it.

If you want to dive right into the Matrix Calculus that is used in Deep Learning, take a look at The Matrix Calculus You Need For Deep Learning by Terence Parr and Jeremy Howard.
For lectures, Gilbert Strang's recorded videos of his course MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning (2018) are on YouTube and cover Linear Algebra, Probability and Optimization in the context of Deep Learning.
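As a small taste of that matrix calculus, here is an illustrative sketch (my own toy example, not taken from the paper): the gradient of the least-squares loss L(w) = ||Xw - y||^2 is grad L = 2 X^T (Xw - y), which the code verifies against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))              # toy data matrix
y = rng.normal(size=5)                   # toy targets
w = rng.normal(size=3)                   # point at which to evaluate the gradient

def loss(w):
    r = X @ w - y                        # residual vector
    return r @ r                         # squared L2 norm ||Xw - y||^2

grad_analytic = 2 * X.T @ (X @ w - y)    # result from matrix calculus

# Numerical gradient via central differences, one coordinate at a time
h = 1e-6
grad_numeric = np.zeros_like(w)
for i in range(len(w)):
    e = np.zeros_like(w)
    e[i] = h
    grad_numeric[i] = (loss(w + e) - loss(w - e)) / (2 * h)

print(np.max(np.abs(grad_analytic - grad_numeric)))  # difference should be negligible
```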

Calculus

For Calculus, I would choose between the MIT OCW Lectures, Prof. Leonard's Lectures and Khan Academy. The MIT Lectures are great for people who are comfortable with Math and seek a fast-paced yet rigorous introduction to Calculus. Prof. Leonard's Lectures are perfect for anybody who is not too familiar with Math, as he takes the time to explain everything in a very understandable manner. Lastly, I would recommend Khan Academy for people who just need a refresher or want to get an overview as fast as possible.

Linear Algebra

For Linear Algebra, I really enjoyed Professor Strang's Lecture Series and its accompanying book on Linear Algebra (MIT OCW). If you are interested in spending more time on Linear Algebra, I would recommend the MIT lectures, but if you just want to learn the basics quickly or get a refresher, Khan Academy is perfect for that.

For a more hands-on coding approach, check out Rachel Thomas' (from fast.ai) Computational Linear Algebra Course.

Finally, this review from Stanford's CS229 course offers a nice reference you can always come back to.

For both Calculus and Linear Algebra, 3Blue1Brown's Essence of Calculus and Essence of Linear Algebra series are beautiful complementary materials for gaining a more intuitive, visual understanding of the subjects.

Contents of the Deep Learning book by Goodfellow, Bengio and Courville (chapter level):

  • 1 Introduction

Part I: Applied Math and Machine Learning Basics

  • 2 Linear Algebra
  • 3 Probability and Information Theory
  • 4 Numerical Computation
  • 5 Machine Learning Basics

Part II: Deep Networks: Modern Practices

  • 6 Deep Feedforward Networks
  • 7 Regularization for Deep Learning
  • 8 Optimization for Training Deep Models
  • 9 Convolutional Networks
  • 10 Sequence Modeling: Recurrent and Recursive Nets
  • 11 Practical Methodology
  • 12 Applications

Part III: Deep Learning Research

  • 13 Linear Factor Models
  • 14 Autoencoders
  • 15 Representation Learning
  • 16 Structured Probabilistic Models for Deep Learning
  • 17 Monte Carlo Methods
  • 18 Confronting the Partition Function
  • 19 Approximate Inference
  • 20 Deep Generative Models