Original: Singular Value Decomposition
Singular Value Decomposition. Looking back now at this post written a while ago, it feels pretty awkward: besides some small mistakes, there are places I simply did not understand properly. I will revise it when I get the chance (next time, for sure). Definition and properties of the SVD. Definition and theorem: $A = U\Sigma V^T$, where $A$ is an arbitrary $m \times n$ matrix (it is not even required to be square), $U$ is an $m \times m$ orthogonal matrix whose column vectors are called the left singular vectors, and $V$ is an $n \times n$ orthogonal matrix ...
2020-05-20 10:18:26
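As a quick check of the definition quoted above, here is a minimal NumPy sketch; the matrix `A` and its size are my own arbitrary example, and `full_matrices=True` is used so that the factors are the $m \times m$ and $n \times n$ orthogonal matrices the post describes.

```python
import numpy as np

# An arbitrary (non-square) 3x2 matrix -- the SVD does not require a square input.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# full_matrices=True gives U (m x m) and Vt (n x n), matching A = U Sigma V^T.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the m x n Sigma from the singular values and check the factorization.
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))            # True: A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(U.shape[0])))  # True: U is orthogonal
```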
Original: K-Nearest Neighbor
K-Nearest Neighbor. KNN finds the $k$ nearest examples (neighbours) and lets them vote for its prediction. In the picture, we want to classify the green circle. If $k = 3$, the green circle belongs to the red triangle ...
2020-04-21 15:23:43
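A minimal brute-force sketch of the voting rule described above; the helper name `knn_predict`, the toy points and the labels are all my own illustrative choices, not the post's code.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training examples."""
    dists = np.linalg.norm(X_train - x, axis=1)      # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]                   # indices of the k closest neighbours
    votes = Counter(y_train[nearest])                 # count each neighbour's label
    return votes.most_common(1)[0][0]                 # label with the most votes

# Toy data: two small clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.8, 0.9]), k=3))  # -> 1
```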
Original: Activation Function
Activation Function. Intro: in practice, we hardly ever use the step function as the activation function $\phi(z)$. Activation functions can be divided into two types: saturated activation functions and ...
2020-04-21 00:22:25
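To make the two families concrete, here is a small sketch with my own choice of example functions: sigmoid and tanh saturate for large $|z|$, while ReLU does not saturate for $z > 0$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # saturates towards 0/1 for large |z|

def tanh(z):
    return np.tanh(z)                 # saturates towards -1/+1

def relu(z):
    return np.maximum(0.0, z)         # non-saturating for z > 0

z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
for f in (sigmoid, tanh, relu):
    print(f.__name__, f(z))
```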
Original: Neural Network Introduction
Introduction to Neural Networks. History of NN: maybe I will write this part later. Perceptron: skipped. Neuron: if you don't know the basic neuron model, please look it up. You can add a neuron $z_0^{(l)}/x_0$ ...
2020-04-20 22:57:28
Original: Back Propagation Algorithm
Back Propagation Algorithm. All pictures in this post are screenshots from the Zhejiang University course. Intro: a neural network needs to compute a lot of partial derivatives, and we can use the relationships among them to simplify the computation. Basic equations: $z_1 = \omega_{11}x_1 + \omega_{21}x_2 + b_1$, $z_2 = \omega$...
2020-04-20 22:56:52
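A tiny numerical sketch of the idea, using the notation of the excerpt's first equation ($z_1 = \omega_{11}x_1 + \omega_{21}x_2 + b_1$). The network shape, the sigmoid activation, the squared-error loss and every parameter value are my own assumptions for illustration; the hand-derived chain-rule gradient is checked against finite differences.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny 2-input / 2-hidden-unit / 1-output network (made-up parameter values).
x1, x2, t = 1.0, 0.5, 1.0            # inputs and target
w11, w21, b1 = 0.3, -0.2, 0.1        # weights and bias of hidden unit 1
w12, w22, b2 = 0.4, 0.6, -0.3        # weights and bias of hidden unit 2
v1, v2, c = 0.7, -0.5, 0.2           # output-layer weights and bias

def forward(w11):
    z1 = w11 * x1 + w21 * x2 + b1    # z1 = w11*x1 + w21*x2 + b1, as in the post
    z2 = w12 * x1 + w22 * x2 + b2
    a1, a2 = sigmoid(z1), sigmoid(z2)
    y = v1 * a1 + v2 * a2 + c
    return 0.5 * (y - t) ** 2, y, a1

# Back propagation = chain rule: dL/dw11 = dL/dy * dy/da1 * da1/dz1 * dz1/dw11.
loss, y, a1 = forward(w11)
grad_w11 = (y - t) * v1 * a1 * (1.0 - a1) * x1

# Check the hand-derived gradient against a finite-difference approximation.
eps = 1e-6
numeric = (forward(w11 + eps)[0] - forward(w11 - eps)[0]) / (2 * eps)
print(grad_w11, numeric)             # the two numbers should agree closely
```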
Original: AdaBoost Algorithm
AdaBoost Algorithm. Intro: AdaBoost is the abbreviation of adaptive boosting. AdaBoost combines many weak classification models to form a strong classifier. AdaBoost usually uses one-layer ...
2020-04-19 17:37:54
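A hand-rolled sketch of the boosting loop on 1-D data with threshold "stumps" as the weak learners; the data, the number of rounds and the helper `best_stump` are all made up for illustration and are not the post's code.

```python
import numpy as np

# Toy 1-D training set with labels in {-1, +1}.
X = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
y = np.array([ 1,   1,  -1,  -1,   1,   1,  -1,  -1])

def best_stump(X, y, w):
    """Pick the threshold/sign whose weighted error is smallest."""
    best = None
    for thr in np.unique(X):
        for sign in (1, -1):
            pred = np.where(X <= thr, sign, -sign)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

n = len(X)
w = np.full(n, 1.0 / n)              # start with uniform sample weights
stumps, alphas = [], []

for _ in range(10):                  # 10 boosting rounds
    err, thr, sign = best_stump(X, y, w)
    err = max(err, 1e-10)            # avoid division by zero
    alpha = 0.5 * np.log((1 - err) / err)
    pred = np.where(X <= thr, sign, -sign)
    w = w * np.exp(-alpha * y * pred)   # up-weight the misclassified samples
    w = w / w.sum()
    stumps.append((thr, sign))
    alphas.append(alpha)

# Strong classifier = sign of the weighted vote of all stumps.
agg = sum(a * np.where(X <= thr, s, -s) for a, (thr, s) in zip(alphas, stumps))
print(np.sign(agg) == y)             # all True on this toy training set
```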
Original: Principal Component Analysis
Principal Component Analysis. Intro: PCA is one of the most important and most widely used dimensionality reduction algorithms. Other dimensionality reduction algorithms include LDA, LLE and Laplacian Eigenmaps. ...
2020-04-19 15:57:22
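The excerpt stops at the introduction, but as a reminder of the standard PCA recipe (center the data, eigendecompose the covariance, project onto the top components), here is a minimal NumPy sketch with made-up data; it is only an illustration of the usual steps, not the post's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 3-D toy data (the mixing matrix is arbitrary).
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.5, 1.0, 0.0],
                                          [0.0, 0.0, 0.1]])

Xc = X - X.mean(axis=0)                      # 1. center the data
C = Xc.T @ Xc / (len(Xc) - 1)                # 2. sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)         # 3. eigendecomposition (ascending order)
order = np.argsort(eigvals)[::-1]            #    sort components by explained variance
W = eigvecs[:, order[:2]]                    #    keep the top-2 principal directions
Z = Xc @ W                                   # 4. project onto the principal subspace

print(eigvals[order])                        # variance along each principal axis
print(Z.shape)                               # (200, 2)
```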
Original: EM Algorithm
EM Algorithm. The EM algorithm is an iterative optimization algorithm; EM is the abbreviation of expectation maximization. EM does not apply to every optimization problem. Remember: EM always converges ...
2020-04-15 22:04:28
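A minimal sketch of one concrete use of EM, fitting a two-component 1-D Gaussian mixture; the data, the initial values and the number of iterations are my own illustrative choices, not taken from the post.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 1-D data drawn from two Gaussians (the "true" mixture is only used to generate data).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

# Initial guesses for the weight of component 0, the means and the variances.
pi0, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def normal_pdf(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

for _ in range(50):
    # E-step: responsibility of each component for every point.
    p0 = pi0 * normal_pdf(x, mu[0], var[0])
    p1 = (1 - pi0) * normal_pdf(x, mu[1], var[1])
    r0 = p0 / (p0 + p1)
    r1 = 1.0 - r0
    # M-step: re-estimate the parameters from the responsibilities.
    pi0 = r0.mean()
    mu = np.array([(r0 * x).sum() / r0.sum(), (r1 * x).sum() / r1.sum()])
    var = np.array([(r0 * (x - mu[0]) ** 2).sum() / r0.sum(),
                    (r1 * (x - mu[1]) ** 2).sum() / r1.sum()])

print(pi0, mu, var)   # should end up near 0.3, (-2, 3), (1, 1)
```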
Original: Probability Classification Model Introduction
Probability Classification Model. Content comes from CS229, notes from the Zhejiang University machine learning course, Baidu Baike, and other blogs. Basic problem: suppose there are states $w_1, w_2$ and a sample $x \in \{w_1, w_2\}$; find $P(w_1|x)$ and $P(w_2|x)$. Classification problem: if $P(w$...
2020-04-15 21:08:11
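A tiny numerical illustration of the basic problem stated above: with assumed priors and class-conditional likelihoods for one observed $x$ (all numbers made up), compute the posteriors via Bayes' rule and decide for the class with the larger posterior.

```python
# Two states w1, w2 with assumed priors and class-conditional likelihoods P(x | w_i);
# the numbers are purely illustrative.
prior = {"w1": 0.6, "w2": 0.4}
likelihood = {"w1": 0.2, "w2": 0.7}           # P(x | w1), P(x | w2) for one observed x

evidence = sum(prior[w] * likelihood[w] for w in prior)            # P(x)
posterior = {w: prior[w] * likelihood[w] / evidence for w in prior}

print(posterior)                               # P(w1 | x), P(w2 | x); they sum to 1
print(max(posterior, key=posterior.get))       # decide the class with the larger posterior
```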
Original: Gaussian Mixture Model (GMM)
Gaussian Mixture Model (GMM). A Gaussian mixture model is a combination of multiple Gaussian models; these Gaussians are mixed according to weights $\pi$. The picture shows a mixture of two models. ...
2020-04-15 21:07:16
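A small sketch of a two-component 1-D mixture with weights $\pi$, showing the mixture density $p(x) = \sum_k \pi_k\, N(x \mid \mu_k, \sigma_k^2)$ and how to sample from it; the particular weights, means and standard deviations are made up.

```python
import numpy as np

# Two 1-D Gaussian components mixed with weights pi (made-up parameters).
pi = np.array([0.3, 0.7])          # mixture weights, must sum to 1
mu = np.array([-2.0, 3.0])         # component means
sigma = np.array([1.0, 0.5])       # component standard deviations

def mixture_pdf(x):
    """p(x) = sum_k pi_k * N(x | mu_k, sigma_k^2)"""
    comps = np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return comps @ pi

# Sampling: first pick a component according to pi, then draw from that Gaussian.
rng = np.random.default_rng(0)
k = rng.choice(2, size=5, p=pi)
samples = rng.normal(mu[k], sigma[k])

print(mixture_pdf(np.array([-2.0, 0.0, 3.0])))
print(samples)
```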
Original: Gaussian Discriminant Analysis (GDA)
Gaussian Discriminant Analysis (GDA). Multidimensional Gaussian model: $z \sim N(\vec\mu, \Sigma)$, where $z \in R^n$, $\vec\mu \in R^n$, $\Sigma \in R^{n \times n}$ ...
2020-04-15 17:14:40
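A minimal NumPy sketch that evaluates the multidimensional Gaussian density $N(\vec\mu, \Sigma)$ named above; the helper name `gaussian_pdf` and the example $\mu$ and $\Sigma$ are my own illustrative choices.

```python
import numpy as np

def gaussian_pdf(z, mu, Sigma):
    """Density of N(mu, Sigma) at point z, for z, mu in R^n and Sigma in R^{n x n}."""
    n = len(mu)
    diff = z - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff)

mu = np.array([0.0, 1.0])                      # example mean vector
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])                 # example covariance matrix
print(gaussian_pdf(np.array([0.5, 1.5]), mu, Sigma))
```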
Original: Naive Bayes
Naive Bayes. Content comes from CS229, notes from the Zhejiang University machine learning course, Baidu Baike, and other blogs. We will use an example to show this algorithm, one that is still used in practice today: a spam email classifier. $X$ is a 1/0 vector corresponding ...
2020-04-15 15:42:22
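A toy sketch of the spam-classifier example with 1/0 feature vectors; the tiny vocabulary, the data, the labels and the Laplace-smoothed estimates are all made up for illustration.

```python
import numpy as np

# Toy spam classifier: each email is a 1/0 vector over a tiny made-up vocabulary.
# columns: ["free", "money", "meeting", "report"]
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 0]])
y = np.array([1, 1, 0, 0, 0])        # 1 = spam, 0 = not spam

def fit_bernoulli_nb(X, y):
    """Estimate P(spam) and P(word_j = 1 | class) with Laplace smoothing."""
    phi_y = y.mean()
    phi_spam = (X[y == 1].sum(axis=0) + 1) / (np.sum(y == 1) + 2)
    phi_ham = (X[y == 0].sum(axis=0) + 1) / (np.sum(y == 0) + 2)
    return phi_y, phi_spam, phi_ham

def predict(x, phi_y, phi_spam, phi_ham):
    """Return P(spam | x) for a single 1/0 feature vector x."""
    p_spam = phi_y * np.prod(phi_spam ** x * (1 - phi_spam) ** (1 - x))
    p_ham = (1 - phi_y) * np.prod(phi_ham ** x * (1 - phi_ham) ** (1 - x))
    return p_spam / (p_spam + p_ham)

params = fit_bernoulli_nb(X, y)
print(predict(np.array([1, 1, 0, 0]), *params))   # high probability of spam
print(predict(np.array([0, 0, 1, 1]), *params))   # low probability of spam
```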
Original: Logistic Regression + Newton's Method
Logistic Regression. Content comes from CS229 lecture notes, Baidu Baike, and other blogs. Logistic regression is a classification model, but you can use it to solve regression problems if you want to. WARNING: do not use linear regression ...
2020-04-14 21:34:46
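A minimal sketch of logistic regression fitted with Newton's method, matching the "+ Newton's Method" part of the title; the synthetic data, the "true" parameters and the number of iterations are my own assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data with an intercept column x0 = 1.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=100)
y = (rng.uniform(size=100) < sigmoid(2.0 * x - 1.0)).astype(float)  # "true" theta = (-1, 2)
X = np.column_stack([np.ones_like(x), x])                            # prepend x0 = 1

theta = np.zeros(2)
for _ in range(10):                       # Newton's method converges in a few steps
    h = sigmoid(X @ theta)                # predicted probabilities
    grad = X.T @ (y - h)                  # gradient of the log-likelihood
    W = h * (1 - h)
    H = -(X.T * W) @ X                    # Hessian of the log-likelihood
    theta = theta - np.linalg.solve(H, grad)   # Newton update
print(theta)                              # roughly (-1, 2)
```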
Original: Linear Regression
Linear Regression. These are CS229 lecture notes. Intro: $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$; define $x_0 = 1$ so that $\theta_0$ is not a special case, and $h_\theta(x) = \sum_{i=0}^{2}$ ...
2020-04-14 19:29:04
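A small sketch of the hypothesis above fitted by the normal equation, with $x_0 = 1$ prepended to the design matrix as the excerpt describes; the synthetic data and the coefficients are made up.

```python
import numpy as np

# Fit h_theta(x) = theta_0 + theta_1 x_1 + theta_2 x_2 via the normal equation.
rng = np.random.default_rng(0)
X_raw = rng.uniform(size=(50, 2))                       # 50 samples, features x_1, x_2
y = 3.0 + 2.0 * X_raw[:, 0] - 1.0 * X_raw[:, 1] + rng.normal(0, 0.05, 50)

X = np.column_stack([np.ones(len(X_raw)), X_raw])       # prepend x_0 = 1
theta = np.linalg.solve(X.T @ X, X.T @ y)               # normal equation: (X^T X) theta = X^T y
print(theta)                                            # approximately [3, 2, -1]
```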
Original: Support Vector Machine
Support Vector Machine. Vapnik (from the former Soviet Union); SVM is well suited to prediction when the number of samples is small. Compiled from a Zhejiang University graduate course (link). 0. No Free Lunch theorem (just one sentence, placed here): without prior assumptions about the feature space, all algorithms perform the same on average. We assume that samples whose features differ little are more likely to belong to the same class, so machine learning is not learned in vain! SVM introduction (picture from Baidu Baike): it can be shown that, for a ...
2020-03-21 21:35:32
Original: Primal Problem and Dual Problem
Primal Problem and Dual Problem. Recommended textbooks: Convex Optimization (Stephen Boyd); Nonlinear Programming. Primal problem: minimize $f(\omega)$ s.t. $g_i(\omega) \le 0\ (i = 1 \sim K)$ and $h_i(\omega) = 0\ (i = 1 \sim N)$ ...
2020-03-19 23:38:58
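For context (standard material from Boyd's Convex Optimization, not visible in the truncated excerpt), the Lagrangian associated with this primal problem and the resulting dual function are, in one common notation,

$$L(\omega, \lambda, \nu) = f(\omega) + \sum_{i=1}^{K} \lambda_i\, g_i(\omega) + \sum_{i=1}^{N} \nu_i\, h_i(\omega), \qquad \lambda_i \ge 0,$$

$$\theta(\lambda, \nu) = \inf_{\omega} L(\omega, \lambda, \nu),$$

and the dual problem maximizes $\theta(\lambda, \nu)$ over $\lambda \ge 0$ and $\nu$.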
Original: IV. Feasibility of Learning
IV. Feasibility of Learning. 1. Learning is Impossible? Lin provides two examples to show that learning seems to be impossible. 2. Probability to the Rescue. Hoeffding's inequality: $\mu$ is the actual frequency ...
2020-03-17 21:45:07
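For reference, the form of Hoeffding's inequality used at this point of Lin's course, where $\nu$ is the frequency observed in $N$ samples and $\mu$ is the actual frequency:

$$\mathbb{P}\big[\,|\nu - \mu| > \epsilon\,\big] \le 2\exp\!\big(-2\epsilon^2 N\big).$$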
Original: III. Types of Learning
III. Types of Learning. 1. Learning with Different Output Space $y$: classification (binary classification, multiclass classification); regression; structured learning, which outputs a structure such as videos, images, ...
2020-03-17 19:17:34
Original: II. Perceptron Learning Algorithm (PLA)
Learning to Answer Yes/No. 1. Perceptron Hypothesis Set. Perceptron $\Leftrightarrow$ linear (binary) classifier: $h(x) = \operatorname{sign}\big(\big(\sum_{i=1}^{d} w_i x_i\big) - \text{threshold}\big) = \operatorname{sign}\big(\sum_{i=1}^{d} w_i x_i + \underbrace{(-\text{threshold})}_{w_0} \cdot \underbrace{(+1)}_{x_0}\big) = \operatorname{sign}\big(\sum_{i=0}^{d}$ ...
2020-03-16 18:33:28
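A minimal sketch of the PLA update ("find a mistake, correct it, repeat") on a separable toy set, with $x_0 = 1$ folded into each sample so the threshold becomes $w_0$ as in the hypothesis above; the data and the stopping rule are my own illustrative choices.

```python
import numpy as np

# Each row is (x_0 = 1, x_1, x_2); labels are in {-1, +1}.
X = np.array([[1.0, 2.0, 2.0],
              [1.0, 1.5, 2.5],
              [1.0, -1.0, -1.5],
              [1.0, -2.0, -1.0]])
y = np.array([1, 1, -1, -1])

w = np.zeros(3)
while True:
    mistakes = [n for n in range(len(X)) if np.sign(X[n] @ w) != y[n]]
    if not mistakes:                 # stop once every sample is classified correctly
        break
    n = mistakes[0]
    w = w + y[n] * X[n]              # PLA update: correct the first mistake found
print(w)                             # a separating weight vector; w[0] plays the role of -threshold
```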