Repost: Inception Score & Mode Score
Inception Score; Mode Score. References: [1] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li. Mode Regularized Generative Adversarial Networks. In ICLR 2017. [2] Improved Techniques for Trainin…
2017-05-19 21:00:09
7042
Repost: Wasserstein Distance & Measure Theory
Wasserstein distance: Wikipedia; https://www.zhihu.com/question/41752299/answer/147394973. Measure theory: https://www.zhihu.com/question/28367115
2017-05-15 20:20:25
1523
Repost: Metric Spaces, Normed Spaces, Inner Product Spaces, Hilbert Spaces
Shanghai Jiao Tong University open course, A Journey Through Mathematics, Lecture 3: function spaces. Distance, norm, and inner product are covered in the first half, and the various spaces in the second half; the lecture explains the concepts quite clearly.
2017-05-15 19:49:53
1907
Repost: Reproducing Kernel Hilbert Space (RKHS) & SVM
Reproducing Kernel Hilbert Space (RKHS): http://www.cnblogs.com/murongxixi/p/3480851.html and http://blog.pluskid.org/?page_id=683
2017-05-14 13:15:43
1277
Repost: Deconv
https://github.com/vdumoulin/conv_arithmetic ; https://www.zhihu.com/question/43609045/answer/98699795 ; https://www.zhihu.com/question/43609045/answer/132235276 ; https://www.zhihu.com/question/43609045/answer…
2017-04-23 19:34:18
581
Original: 20170402#cs231n#13.Others
Segmentation: in the figure, the four cows are not segmented individually because they blend together; that is semantic segmentation. Instance segmentation, also known as simultaneous detection and segmentation, is different. Every pixel needs to be assigned to a specific class. In practice, semantic segmentation and instance segmentation are carried out separately. Semantic segmentation: multi-scale, refinement, upsampling…
2017-04-02 09:34:13
1003
Original: 20170328#cs231n#12.CNNs in Practice
Getting more data: on small datasets, data augmentation and transfer learning are a good fit. Data augmentation is easy to implement: transform each image before feeding it into the CNN, changing the pixels while keeping the label unchanged, which can reduce overfitting. Horizontal flips; random crops/scales (at test time, crop, scale, and flip the test set as well…). A minimal flip-and-crop sketch follows this entry.
2017-03-28 21:29:55
681
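As a companion to the augmentation entry above, a minimal flip-and-crop sketch in NumPy, assuming an (H, W, C) image array; the function name `augment` and its parameters are illustrative, not taken from the post.

```python
import numpy as np

def augment(image, crop_size, rng=np.random.default_rng()):
    """Randomly flip and crop a single (H, W, C) image."""
    # Horizontal flip with probability 0.5: reverse the width axis.
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # Random crop: choose a top-left corner so the crop fits in the image.
    h, w, _ = image.shape
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size, :]

# Example: a 28x28 crop from a 32x32x3 CIFAR-style image.
print(augment(np.zeros((32, 32, 3)), crop_size=28).shape)  # (28, 28, 3)
```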
Original: 20170326#cs231n#11.Recurrent Neural Networks (RNN)
Main references: Jianshu — [translated] Understanding LSTM Networks; CSDN — an introduction to Recurrent Neural Networks (RNN); the cs231n RNN slides; karpathy/min-char-rnn.py, https://gist.github.com/karpathy/d4dee566867f8291f086. Image captioning, CNN→RNN: remove the final classifier from the CNN, then…
2017-03-26 14:48:44
2121
Original: 20170325#cs231n#10.Understanding and Visualizing Convolutional Neural Networks
Visualize patches that maximally activate neurons: feed data into a given layer and see which part of the input activates that layer's neurons most strongly. Visualize the filters/kernels (raw weights); for higher layers, though, visualizing the weights is not particularly meaningful. Visualizing the representation: t-SNE visualizati…
2017-03-25 15:32:09
1841
Original: 20170324#cs231n#9.ConvNets for spatial localization & Object detection
Computer Vision Tasks
2017-03-24 11:08:25
929
Original: #cs231n# Assignment2: ConvolutionalNetworks.ipynb
An implementation of ConvolutionalNetworks.ipynb based on my own understanding and reference material. Convolutional Networks: so far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network archit…
2017-03-20 09:32:34
2114
Original: #cs231n# Assignment2: Dropout.ipynb
An implementation of Dropout.ipynb based on my own understanding and reference material. Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement… A minimal inverted-dropout sketch follows this entry.
2017-03-19 10:25:17
1555
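A minimal inverted-dropout forward pass matching the description above (randomly zeroing features during the forward pass). Here `p` is the drop probability, a convention chosen for this sketch rather than one confirmed by the notebook.

```python
import numpy as np

def dropout_forward(x, p=0.5, train=True):
    """Inverted dropout: drop units with probability p at train time and
    scale the survivors by 1/(1-p), so test time needs no rescaling."""
    if not train:
        return x  # test time: identity
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask

out = dropout_forward(np.ones((2, 4)), p=0.5, train=True)
```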
Original: #cs231n# Assignment2: BatchNormalization.ipynb
An implementation of BatchNormalization.ipynb based on my own understanding and reference material. Batch Normalization: one way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam… A minimal batch-norm forward pass follows this entry.
2017-03-19 10:24:06
2132
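A minimal training-time batch-normalization forward pass for an (N, D) batch, sketched from the standard formulation rather than from the notebook itself (which also tracks running statistics for test time).

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)                    # per-feature mean
    var = x.var(axis=0)                    # per-feature variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learned scale and shift

out = batchnorm_forward(np.random.randn(64, 100),
                        gamma=np.ones(100), beta=np.zeros(100))
```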
Original: #cs231n# Assignment2: FullyConnectedNets.ipynb
An implementation of FullyConnectedNets.ipynb based on my own understanding and reference material. Fully-Connected Neural Nets: in the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but…
2017-03-19 10:20:22
4356
Original: 20170316#cs231n#8.Convolutional Neural Networks
Convolutional Neural Networks (CNNs / ConvNets)
2017-03-16 19:41:41
2717
Original: 20170307#cs231n#7.Neural Networks Part 3: Learning and Evaluation
Gradient checks / momentum / AdaGrad / RMSProp / Adam / … Update-rule sketches follow this entry.
2017-03-07 14:22:46
1019
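Sketches of the update rules the entry lists, in their standard textbook form (the variable names are mine): momentum keeps a velocity, RMSProp keeps a running average of squared gradients, and Adam combines both with bias correction.

```python
import numpy as np

def sgd_momentum(w, dw, v, lr=1e-3, mu=0.9):
    v = mu * v - lr * dw                   # velocity: decaying gradient history
    return w + v, v

def rmsprop(w, dw, cache, lr=1e-3, decay=0.99, eps=1e-8):
    cache = decay * cache + (1 - decay) * dw**2   # running avg of squared grads
    return w - lr * dw / (np.sqrt(cache) + eps), cache

def adam(w, dw, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * dw             # first moment estimate
    v = b2 * v + (1 - b2) * dw**2          # second moment estimate
    mt, vt = m / (1 - b1**t), v / (1 - b2**t)  # bias correction (t starts at 1)
    return w - lr * mt / (np.sqrt(vt) + eps), m, v
```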
Original: 20170304#cs231n#6.Neural Networks Part 2: Setting up the Data and the Loss
Data preprocessing. Mean subtraction (zero-centering): subtract the mean of each feature, X -= np.mean(X, axis=0). Normalization: scale every dimension of the data so the value ranges are roughly equal; this only makes sense when different input features have different ranges (or units), and since image pixels already share the range [0, 255], this step matters less for images… A preprocessing sketch follows this entry.
2017-03-04 18:12:30
816
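The quoted mean-subtraction line, shown in context with the normalization step described above; `X` here is a hypothetical (N, D) data matrix with one example per row.

```python
import numpy as np

X = np.random.rand(500, 3072) * 255  # e.g. flattened images, one row each

# Mean subtraction (zero-centering): the line quoted in the post.
X -= np.mean(X, axis=0)

# Normalization: divide each dimension by its standard deviation so all
# features end up on roughly the same scale (most useful when features
# have different units; for [0, 255] pixel data it matters less).
X /= np.std(X, axis=0)
```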
Original: 20170228#cs231n#5.Neural Networks Part 1 / Assignment1-NeuralNetwork
Neural Networks Part 1: Setting up the Architecture. The neural network computes $s = W_2 \max(0, W_1 x)$, where $x$ is [3072×1], $W_1$ is [100×3072], and $W_2$ is [10×100]. The max is a nonlinear function, unlike what we had before; it is exactly this change that makes the model different from a linear function. The parameter W… A forward-pass sketch with these shapes follows this entry.
2017-02-28 10:28:13
694
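The formula $s = W_2 \max(0, W_1 x)$ with the post's shapes, as executable NumPy; the random initialization is illustrative only.

```python
import numpy as np

# Shapes from the post: x [3072x1], W1 [100x3072], W2 [10x100].
x = np.random.randn(3072, 1)           # a flattened 32x32x3 image
W1 = np.random.randn(100, 3072) * 0.01
W2 = np.random.randn(10, 100) * 0.01

h = np.maximum(0, W1 @ x)  # hidden layer with ReLU nonlinearity, [100x1]
s = W2 @ h                 # class scores s = W2 max(0, W1 x), [10x1]
print(s.shape)             # (10, 1)
```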
Original: 20170226#cs231n#4.Backpropagation
Backpropagation is a way of computing the gradients of expressions recursively by applying the chain rule. The gradients with respect to the training inputs $x_i$ can still be useful, for example when visualizing what the neural network is doing. Reference: the Zhihu 智能单元 translation of the CS231n backpropagation notes, by 杜客. Chain rule: $\frac{\partial f}{\partial x} = \frac{\partial f}{\partial q}\,\frac{\partial q}{\partial x}$… A worked numeric example follows this entry.
2017-02-26 23:12:10
789
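A worked numeric chain-rule example on $f(x, y, z) = (x + y)\,z$, the circuit used in the CS231n notes the post draws on; the excerpt is cut off before its own example, so the numbers below follow my recollection of those notes.

```python
# Forward pass through f(x, y, z) = (x + y) * z, with intermediate q = x + y.
x, y, z = -2.0, 5.0, -4.0
q = x + y              # q = 3
f = q * z              # f = -12

# Backward pass via the chain rule: df/dx = df/dq * dq/dx.
df_dq = z              # d(q*z)/dq = z = -4
dq_dx = 1.0            # d(x+y)/dx = 1
df_dx = df_dq * dq_dx  # -4
df_dy = df_dq * 1.0    # -4
df_dz = q              # 3
```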
Original: 20170225#cs231n#3.Optimization
Optimization: Stochastic Gradient Descent. Optimization is the process of finding a $W$ that minimizes the loss function. The SVM cost function is a convex function, which brings in convex optimization; neural network cost functions, however, are non-convex, and many loss functions are non…
2017-02-25 16:56:04
652
Original: #cs231n# Related Resources
CS231n Notes; the CS231n syllabus; the NetEase open course CS231n — Deep Learning and Computer Vision; xieyi4650's blog, CS231n series.
2017-02-25 10:31:02
399
Original: 20170202 Coursera Stanford-MachineLearning/Week9
Week 9: Anomaly Detection / Recommender Systems. Anomaly detection: training examples have the highest probability near the center of the distribution, so a test point near the center indicates normal behavior. Gaussian (normal) distribution: $x \sim N(\mu, \sigma^2)$, with density $P(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$… A density-estimation sketch follows this entry.
2017-02-02 20:52:28
1139
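The quoted Gaussian density as code, plus the anomaly-detection rule the excerpt describes (flag a point whose estimated density falls below a threshold). The per-feature independence assumption and the threshold value are mine, following the usual Coursera treatment.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    """Density of N(mu, sigma^2): the formula quoted above."""
    return np.exp(-(x - mu)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Fit mu and sigma^2 per feature on (presumed normal) training data, then
# flag a test point as anomalous when its density is below epsilon.
X_train = np.random.randn(1000, 2)
mu, sigma2 = X_train.mean(axis=0), X_train.var(axis=0)
p = gaussian_pdf(np.array([0.1, -0.2]), mu, sigma2).prod()
is_anomaly = p < 1e-3  # epsilon: a hypothetical threshold
```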
Original: 20170129 Coursera Stanford-MachineLearning/Week8
Week 8: Unsupervised Learning. Supervised vs. unsupervised learning. Supervised learning: the y values are given, i.e. the data already has labels. Unsupervised learning: learn from unlabeled data, asking the algorithm to discover the structure of the data for us. Clustering…
2017-01-29 18:06:14
938
Original: 20170125 Coursera Stanford-MachineLearning/Week4-5
Week 4/5: Neural Networks: Representation / Learning
2017-01-25 11:58:01
522
Original: 20170123 Coursera Stanford-MachineLearning/Week7
Week 7: Support Vector Machine (SVM). The SVM is also known as a Large Margin Classifier.
2017-01-23 17:22:11
788
Original: 20161206#cs231n#2.Linear Classifiers / Assignment1 - SVM & Softmax
Linear classification: Support Vector Machine, Softmax. Linear classifiers.
2016-12-08 12:50:28
640
Original: 20161202 Coursera Stanford-MachineLearning/Week10-11
Week 10: Large Scale Machine Learning. Stochastic Gradient Descent.
2016-12-05 00:10:57
842
Original: 20161129 Coursera Stanford-MachineLearning/Week6
Week 6: Advice for Applying Machine Learning & Machine Learning System Design
2016-11-29 21:45:09
521