cs231n_deeplearning
AChcxchCA
I mainly keep this blog to manage my code and to collect useful articles, links, and ideas. If anything in the code is wrong, please feel free to point it out.
Articles in this column
cs231n_2017_batchnorm
batchnorm_forward: for the derivative calculations involved, see (the backward pass through the summation unit in step 4 deserves special attention): https://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html def batchnorm_forward(x, gamma, beta, ... Original 2018-07-08 11:18:37 · 384 views · 0 comments
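For quick reference, a minimal sketch of the training-mode forward pass (variable names are my own; the running-mean/variance bookkeeping and the train/test switch from the assignment are omitted):

import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # x: (N, D); gamma, beta: (D,)
    sample_mean = x.mean(axis=0)                            # per-feature mean
    sample_var = x.var(axis=0)                              # per-feature variance
    x_hat = (x - sample_mean) / np.sqrt(sample_var + eps)   # normalize
    out = gamma * x_hat + beta                              # scale and shift
    cache = (x, x_hat, sample_mean, sample_var, gamma, eps) # saved for the backward pass
    return out, cache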
cs231n_2017_max_pool_naive
max_pool_forward_naive: def max_pool_forward_naive(x, pool_param): """ A naive implementation of the forward pass for a max pooling layer. Inputs: - x: Input data, of shape (N, C, H, W ... Original 2018-07-16 09:25:40 · 473 views · 0 comments
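A minimal loop-based sketch of what this forward pass does (assuming pool_param carries 'pool_height', 'pool_width' and 'stride' as in the assignment; the function name is mine):

import numpy as np

def max_pool_forward_sketch(x, pool_param):
    # x: (N, C, H, W)
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))   # max over each window
    return out, (x, pool_param)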
cs231n_2017_conv_naive
conv_forward_naive: def conv_forward_naive(x, w, b, conv_param): """ A naive implementation of the forward pass for a convolutional layer. The input consists of N data points, each with C ... Original 2018-07-16 09:24:00 · 743 views · 0 comments
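A hedged sketch of the naive convolution forward pass (assuming conv_param carries 'stride' and 'pad'; zero padding, no dilation; function name mine):

import numpy as np

def conv_forward_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                      # every image
        for f in range(F):                  # every filter
            for i in range(H_out):
                for j in range(W_out):
                    patch = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(patch * w[f]) + b[f]   # dot product plus bias
    return out, (x, w, b, conv_param)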
cs231n_2017_spatial_batchnorm
spatial_batchnorm_forward: def spatial_batchnorm_forward(x, gamma, beta, bn_param): """ Computes the forward pass for spatial batch normalization. Inputs: - x: Input data of shape (N, ... Original 2018-07-16 09:22:19 · 1202 views · 0 comments
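The idea, in a self-contained sketch: normalize per channel over the (N, H, W) axes instead of per feature. The assignment implements this by reshaping and reusing batchnorm_forward; here the statistics are computed directly and running averages are omitted.

import numpy as np

def spatial_batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); gamma, beta: (C,)
    mean = x.mean(axis=(0, 2, 3), keepdims=True)   # per-channel mean
    var = x.var(axis=(0, 2, 3), keepdims=True)     # per-channel variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    out = gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
    return out, (x, x_hat, mean, var, gamma, eps)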
cs231n_2017_Style_Transfer
import torch; import torch.nn as nn; from torch.autograd import Variable; import torchvision; import torchvision.transforms as T; import PIL; import numpy as np; from scipy.misc import imread; from collect... Original 2018-07-23 21:32:42 · 531 views · 0 comments
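The preview above only shows the import block; the style loss in this assignment is built on Gram matrices of feature maps, so here is a hedged sketch of that piece (the function name and the normalization by C*H*W are my own choices):

import torch

def gram_matrix_sketch(features, normalize=True):
    # features: (N, C, H, W) activations from one layer of the CNN
    N, C, H, W = features.shape
    F = features.reshape(N, C, H * W)           # flatten the spatial dimensions
    gram = torch.bmm(F, F.transpose(1, 2))      # (N, C, C) channel correlations
    if normalize:
        gram = gram / (C * H * W)               # normalize by feature map size
    return gram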
cs231n_2017_pool_or_not
The official cs231n course notes mention the following (Chinese translation source: Zhihu, https://zhuanlan.zhihu.com/p/22038289?refer=intelligentunit). Getting rid of pooling layers: many people dislike the pooling operation and think it can be done without. For example, Striving for Simplicity: The All Convolutional Net proposes an architecture made up only of repeated convolutional layers, discarding pool... Original 2018-07-17 16:36:48 · 294 views · 0 comments
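To make the all-convolutional idea concrete, a small PyTorch comparison (the channel count and kernel size here are illustrative, not taken from the paper): a stride-2 convolution halves the spatial resolution just like 2x2 max pooling, but with learnable weights.

import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)                                    # dummy feature map
pool = nn.MaxPool2d(kernel_size=2, stride=2)                      # fixed downsampling
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)   # learned downsampling
print(pool(x).shape)      # torch.Size([1, 64, 16, 16])
print(strided(x).shape)   # torch.Size([1, 64, 16, 16])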
cs231n_2017_visualize_filters
Saving a snippet of code that visualizes the filters/convolution kernels: from cs231n.vis_utils import visualize_grid; grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1)); plt.imshow(grid.astype('uint8')); plt.axis('off'); plt.gcf().set_size_inche... Original 2018-07-16 10:47:01 · 453 views · 0 comments
cs231n_2017_FullyConnectedNet
This is a fully connected network with architecture {affine - [batch norm] - relu - [dropout]} x (L - 1) - affine - softmax (it is also one of the networks in fc_net.py, written entirely by myself). class FullyConnectedNet(object): """ Author::Chenx """ """ A fully-con... Original 2018-07-09 09:51:32 · 561 views · 0 comments
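A minimal sketch of how the parameter initialization for such an L-layer stack can be written (the W1/b1 ... WL/bL naming and weight_scale follow the assignment's convention; batchnorm and dropout parameters are omitted here):

import numpy as np

def init_fc_params_sketch(input_dim, hidden_dims, num_classes, weight_scale=1e-2):
    # Builds W1/b1 ... WL/bL for an L-layer fully connected net.
    params = {}
    dims = [input_dim] + list(hidden_dims) + [num_classes]
    for i in range(len(dims) - 1):
        params['W%d' % (i + 1)] = weight_scale * np.random.randn(dims[i], dims[i + 1])
        params['b%d' % (i + 1)] = np.zeros(dims[i + 1])
    return params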
cs231n_2017_TwoLayerNet
This is a simple two-layer network: affine - relu - affine - softmax (the detailed description of the architecture is in the function docstrings). from builtins import range; from builtins import object; import numpy as np; from cs231n.layers import *; from cs231n.layer_utils import *... Original 2018-07-09 09:43:12 · 1058 views · 0 comments
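For orientation, a compact sketch of the score computation for this architecture (loss and backward pass omitted; the W1/b1/W2/b2 names follow the assignment):

import numpy as np

def two_layer_scores_sketch(X, params):
    # X: (N, D); params holds W1 (D, H), b1 (H,), W2 (H, C), b2 (C,)
    h = X.reshape(X.shape[0], -1).dot(params['W1']) + params['b1']   # affine
    h = np.maximum(0, h)                                             # relu
    scores = h.dot(params['W2']) + params['b2']                      # affine
    return scores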
cs231n_2017_gradient_check
Some gradient-checking code and usage examples, covering five kinds of gradient_check: from __future__ import print_function; from builtins import range; import numpy as np; from random import randrange; def eval_numerical_gradient(f, x, verbose=True, h=0... Original 2018-07-09 09:15:58 · 1058 views · 0 comments
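The core of eval_numerical_gradient is a centered difference; a minimal self-contained sketch (step size h=1e-5 as in the course utilities, function name mine):

import numpy as np

def numerical_gradient_sketch(f, x, h=1e-5):
    # f: scalar-valued function of x; x: numpy array (perturbed in place, then restored)
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)                          # f(x + h)
        x[ix] = old - h
        fxmh = f(x)                          # f(x - h)
        x[ix] = old                          # restore the entry
        grad[ix] = (fxph - fxmh) / (2 * h)   # centered difference
        it.iternext()
    return grad

For example, numerical_gradient_sketch(lambda w: np.sum(w ** 2), np.random.randn(3, 4)) should come out very close to 2 times the input array.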
cs231n_2017_softmax_cross_entropy_loss
softmax_cross_entropy_loss (cross-entropy loss): def softmax_loss(x, y): """ Computes the loss and gradient for softmax classification. Inputs: - x: Input data, of shape (N, C) where x[i, j] is the ... Original 2018-07-08 16:13:44 · 502 views · 0 comments
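A numerically stable sketch of this loss and its gradient, matching the (N, C) score / (N,) label interface from the docstring (function name mine):

import numpy as np

def softmax_loss_sketch(x, y):
    # x: (N, C) class scores, y: (N,) integer labels in [0, C)
    N = x.shape[0]
    shifted = x - x.max(axis=1, keepdims=True)    # subtract the row max for stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    loss = -log_probs[np.arange(N), y].mean()     # average cross-entropy
    dx = np.exp(log_probs)                        # softmax probabilities
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx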
cs231n_2017_svm_hinge_loss
svm_hinge_loss (hinge loss): def svm_loss(x, y): """ Computes the loss and gradient for multiclass SVM classification. Inputs: - x: Input data, of shape (N, C) where x[i, j] is the sco... Original 2018-07-08 16:10:35 · 342 views · 0 comments
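A vectorized sketch of the multiclass hinge loss and its gradient (margin fixed at 1, as in the assignment; function name mine):

import numpy as np

def svm_loss_sketch(x, y):
    # x: (N, C) class scores, y: (N,) integer labels
    N = x.shape[0]
    correct = x[np.arange(N), y][:, None]          # correct-class score per row
    margins = np.maximum(0, x - correct + 1.0)     # hinge margins
    margins[np.arange(N), y] = 0                   # the correct class contributes nothing
    loss = margins.sum() / N
    dx = (margins > 0).astype(x.dtype)             # 1 where the margin is violated
    dx[np.arange(N), y] -= dx.sum(axis=1)          # correct class gets minus the violation count
    dx /= N
    return loss, dx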
cs231n_2017_solver
I find solver.py very handy for training, so I'm saving a copy here. Contents: solver.py, using Solver to train a FullyConnectedNet, plus a snippet of code that plots the loss and accuracy curves. solver.py: from __future__ import print_function, division; from builtins import range; from builtins ... Original 2018-07-08 16:06:16 · 709 views · 0 comments
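The loss/accuracy plotting snippet mentioned above might look roughly like this; it assumes solver is an already-trained cs231n Solver, whose loss_history, train_acc_history and val_acc_history lists are filled in during train() (attribute names from the course code as I remember them):

import matplotlib.pyplot as plt

# solver: a trained cs231n Solver instance (assumed to exist already)
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o', markersize=2)
plt.xlabel('Iteration')
plt.ylabel('Training loss')

plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

plt.gcf().set_size_inches(10, 8)
plt.show()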
cs231n_2017_dropout_layer
dropout_forward: def dropout_forward(x, dropout_param): """ Performs the forward pass for (inverted) dropout. Inputs: - x: Input data, of any shape - dropout_param: A dictionary wi... Original 2018-07-08 15:42:34 · 264 views · 0 comments
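A minimal sketch of inverted dropout in training mode (I parameterize the keep probability explicitly; the assignment packs this into dropout_param together with the train/test mode):

import numpy as np

def dropout_forward_sketch(x, keep_prob=0.5, train=True):
    # Inverted dropout: scale at train time so the test-time pass is a plain identity.
    if train:
        mask = (np.random.rand(*x.shape) < keep_prob) / keep_prob
        out = x * mask
    else:
        mask = None
        out = x
    return out, mask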
cs231n_2017_relu_layer
relu_forward: def relu_forward(x): """ Computes the forward pass for a layer of rectified linear units (ReLUs). Input: - x: Inputs, of any shape Returns a tuple of: - out: Out... Original 2018-07-08 15:39:40 · 484 views · 0 comments
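The ReLU forward is essentially a one-liner; a minimal sketch:

import numpy as np

def relu_forward_sketch(x):
    out = np.maximum(0, x)   # elementwise max(0, x)
    cache = x                # kept for backward: gradient flows only where x > 0
    return out, cache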
cs231n_2017_affine_layer
affine_forward: def affine_forward(x, w, b): """ Computes the forward pass for an affine (fully-connected) layer. The input x has shape (N, d_1, ..., d_k) and contains a minibatch of N ... Original 2018-07-08 15:37:45 · 1212 views · 0 comments
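A minimal sketch of the affine forward pass, flattening each input to a row vector as the docstring describes:

import numpy as np

def affine_forward_sketch(x, w, b):
    # x: (N, d_1, ..., d_k), w: (D, M) with D = prod(d_i), b: (M,)
    out = x.reshape(x.shape[0], -1).dot(w) + b   # flatten, then linear map
    cache = (x, w, b)
    return out, cache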
cs231n_2017_update_rules
Course notes and their Chinese translation: https://zhuanlan.zhihu.com/p/21930884?refer=intelligentunit 1. SGD: def sgd(w, dw, config=None): """ Performs vanilla stochastic gradient descent. config format: - learning_r... Original 2018-07-08 15:33:12 · 338 views · 0 comments
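A minimal sketch of the vanilla SGD rule matching the sgd(w, dw, config) interface shown in the preview (the default learning rate here is just a placeholder):

def sgd_sketch(w, dw, config=None):
    # Vanilla stochastic gradient descent step: w <- w - lr * dw
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    next_w = w - config['learning_rate'] * dw
    return next_w, config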