
AI
feitianlzk
Articles in this column
The secret ingredients of word2vec: summary notes
From Sebastian Ruder's blog post The secret ingredients of word2vec. Questions. Q1: Are embeddings superior to distributional methods? With the right hyperparameters, no approach has a consistent advantage over… Reposted 2018-01-20 23:08:26 · 227 views · 0 comments
Common deep learning training parameters explained
momentum $\mu$: the weight on the previous update (keeps the update direction from deviating too far from the last one); learning rate $\alpha$: the weight on the current gradient; weight decay $\lambda$: equivalent to an L2 penalty. Refs: caffe, pytorch, stackoverflow. $V_{t+1} = \mu V_t - \alpha \nabla L(W_t)$… Original 2018-08-31 22:01:52 · 1932 views · 0 comments
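The update rule quoted above can be sketched in a few lines of Python. This is a minimal, illustrative implementation (scalar weight, made-up hyperparameter values), not the actual Caffe or PyTorch code the refs point to:

```python
# Minimal sketch of SGD with momentum and weight decay, following
#   V_{t+1} = mu * V_t - alpha * (grad + lam * W_t)
#   W_{t+1} = W_t + V_{t+1}
# Scalar weight and toy hyperparameters, for illustration only.

def sgd_momentum_step(w, v, grad, mu=0.9, alpha=0.1, lam=0.0):
    """One update; weight decay lam acts as an L2 penalty added to the gradient."""
    v_next = mu * v - alpha * (grad + lam * w)  # keep part of the previous update
    w_next = w + v_next
    return w_next, v_next

# Toy run: minimize f(w) = w^2 (gradient 2w); w should decay toward 0.
w, v = 1.0, 0.0
for _ in range(50):
    w, v = sgd_momentum_step(w, v, grad=2 * w, mu=0.5, alpha=0.1)
```

Note how weight decay is simply `lam * w` added to the gradient before the momentum step, which is exactly the L2-penalty reading of $\lambda$ in the notes (frameworks differ slightly in where they apply it).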
PyTorch issue notes
Tensor & Variable. What is the difference between Tensors and Variables in PyTorch? torch tensors are actually the data; variables wrap tensors and construct a chain of operations between the t… Original 2018-05-24 17:04:55 · 2025 views · 0 comments
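The tensor/variable distinction in the excerpt can be illustrated with a toy autograd sketch: a raw number plays the role of the tensor, and a wrapper class records the chain of operations so gradients can flow back. Purely illustrative, not the PyTorch API (which merged Variable into Tensor in version 0.4):

```python
# Toy sketch of the Tensor/Variable split described above: `data` is the raw
# "tensor", and the Variable wrapper records the chain of operations built
# during the forward pass so backward() can propagate gradients.

class Variable:
    def __init__(self, data, parents=(), grad_fn=None):
        self.data = data          # the underlying "tensor" (here just a float)
        self.grad = 0.0
        self.parents = parents    # upstream nodes in the op chain
        self.grad_fn = grad_fn    # local gradients w.r.t. each parent

    def __mul__(self, other):
        return Variable(self.data * other.data, parents=(self, other),
                        grad_fn=lambda g: (g * other.data, g * self.data))

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in zip(self.parents, self.grad_fn(g)):
                parent.backward(pg)

x = Variable(3.0)
y = Variable(4.0)
z = x * y        # forward pass records the op chain
z.backward()     # dz/dx = 4.0, dz/dy = 3.0
```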
pytorch: A 60 Minute Blitz notes
A replacement for NumPy to use the power of GPUs; a deep learning research platform that provides maximum flexibility and speed. 1. Basic. 1.1 tensors: x = torch.empty(5, 3)  # uninitialized; x = torch… Original 2018-05-14 14:29:12 · 562 views · 0 comments
Deeplabv3+ reading notes
Notes: Google's deeplabv3+ code is now open source; see deeplab (Github), which also ships a usage demo. 0. spatial pyramid pooling: probing the incoming features with filters or pooling operations at multiple rates and multiple effecti… Original 2018-04-25 21:31:41 · 6912 views · 0 comments
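The "multiple rates and effective fields of view" idea quoted above can be sketched with a 1-D toy version of pyramid pooling, where window sizes stand in for rates. This is an illustration of the idea only, not the deeplab code:

```python
# Toy 1-D spatial pyramid pooling: probe the same features with pooling at
# several window sizes and concatenate, so the output mixes several
# effective fields of view.

def avg_pool(features, window):
    """Average-pool a 1-D list with the given window (stride = window)."""
    return [sum(features[i:i + window]) / window
            for i in range(0, len(features) - window + 1, window)]

def pyramid_pool(features, windows=(1, 2, 4)):
    pooled = []
    for w in windows:                    # each window = one field of view
        pooled.extend(avg_pool(features, w))
    return pooled

feats = [1.0, 2.0, 3.0, 4.0]
out = pyramid_pool(feats)
# window 1 keeps the input; window 2 gives [1.5, 3.5]; window 4 gives [2.5]
```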
Transductive Unbiased Embedding for Zero-Shot Learning reading notes
Transductive Unbiased Embedding for Zero-Shot Learning. Summary. PRO: the ubias term adds a loss over the unseen classes, partly suppressing zero-shot learning's inherent bias toward labeled data; clever data usage: although the target dataset's labels (image-text correspondences) are not used directly, the label text embedd… Original 2018-04-11 13:39:45 · 1427 views · 2 comments
Object Region Mining with Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach reading notes
Object Region Mining with Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach reading notes. x. Questions: in the AE step, how do you detect that convergence has gone wrong? What happens if some images need 3 erasures while others need only 2? The experiments simply fixed 3. PSL: why does conv7's res… Original 2018-04-01 21:42:51 · 1561 views · 0 comments
Image segmentation (continuously updated)
Weakly supervised semantic segmentation. What: segment an image using only image-level labels. Challenge: how to build the relation between image labels and pixels (ref 1). Features: for the input image, salience maps (gradient based); for the last layer, class activa… Original 2018-04-01 21:41:11 · 335 views · 0 comments
cs231n 12 Visualizing and Understanding
Visualizing and Understanding: what's going on in a CNN. First layer: the weights (filters) can be visualized directly, because the response is maximized when the input resembles the weights. Higher-layer filters: meaning less… Original 2018-04-01 13:15:30 · 288 views · 0 comments
cs231n lecture11 segmentation
Segmentation, Localization, Detection. Semantic segmentation: label each pixel in the image with a category label (known classes). Idea: sliding window (inefficient, not reusing shared features between…) Original 2018-03-21 15:29:41 · 330 views · 0 comments
cs231n lecture9 CNN Architectures
AlexNet: CONV - MAXPOOL - NORM (not common), CONV - MAXPOOL - NORM, CONV - CONV - CONV - MAXPOOL, FC - FC - FC; local response normalization. VGG: deeper (16-19 layers), small filters (3x3 CONV stride… Original 2018-03-20 16:56:26 · 284 views · 0 comments
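The "small filter" point in the VGG notes above can be made concrete: n stacked 3x3 stride-1 convs see the same receptive field as one (2n+1)x(2n+1) conv, with fewer weights. A toy calculation (channel count fixed at an illustrative 64):

```python
# Receptive field and parameter count for stacked small convs vs one big conv.
# Illustrative only; channel count and kernel sizes are assumptions.

def stacked_receptive_field(n_layers, k=3):
    rf = 1
    for _ in range(n_layers):
        rf += k - 1              # each stride-1 kxk conv adds k-1 to the RF
    return rf

def conv_params(k, channels):
    return k * k * channels * channels  # weights of a CxC conv with kxk kernel

rf = stacked_receptive_field(3)          # three stacked 3x3 convs
small = 3 * conv_params(3, channels=64)  # parameter count of the 3x3 stack
big = conv_params(7, channels=64)        # one 7x7 conv with the same RF
```

The stack also inserts extra nonlinearities between the layers, which is the second advantage usually cited for VGG-style designs.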
cs231n lecture5 CNN
CNN notes. Convolution Layer; Pooling Layer; Fully Connected Layer (FC layer). Useful notes: preprocessing, weight initialization, regularization, loss (classification, attribute classification, regression)… Original 2018-03-20 14:47:45 · 262 views · 0 comments
A 2017 Guide to Semantic Segmentation with Deep Learning notes
Original post: A 2017 Guide to Semantic Segmentation with Deep Learning. 0. Intro; 1. Problem (1.1 before deep learning, 1.2 current, 1.3 postprocessing); 2. Models. 0. Intro: mainly uses natural/real-world im… Original 2018-03-23 17:26:37 · 322 views · 0 comments
cs231n lecture13 Generative Models
Generative Models: PixelCNN/RNN, VAE, GAN. PixelRNN/CNN: generation is slow. Pro: can explicitly compute the likelihood p(x); an explicit likelihood of the training data gives a good evaluation metric; good samples. Con: sequ… Original 2018-03-23 16:26:07 · 440 views · 0 comments
ReadingList
Priority: Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions. MAML. Summary: MAML meta-learns an initial condition; LSTM optimization meta-learns a good initial cond… Original 2018-04-15 18:36:34 · 674 views · 0 comments
Learning to learn
Font dataset; imagenet. Several applications: few-shot classification (images) + RL + recommendation (cold start). Original 2018-03-05 21:57:12 · 496 views · 0 comments
Matching Networks for One Shot Learning reading notes
Conclusion: parametric vs metric (karpathy has very good paper notes). Advantages of the weighted-average (metric) approach to few-shot: fast, and requires little training; training on S is built into the model as $p_\theta(y|x,S)$, concretely by training over tasks (i.e. training sets), each time sampling some classes L (label s… Original 2018-03-07 17:50:10 · 11124 views · 1 comment
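The weighted-average (metric) idea summarized above can be sketched in a few lines: the prediction for a query is an attention-weighted vote over support-set labels. The similarity here is an illustrative negative squared distance (the paper uses learned embeddings with cosine similarity); all names and numbers are made up:

```python
# Toy Matching-Networks-style classifier: softmax attention over the support
# set S, then a weighted vote per label. Illustrative only.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matching_predict(query, support):
    """support: list of (embedding, label). Returns P(y | query, S) as a dict."""
    sims = [-sum((q - s) ** 2 for q, s in zip(query, emb))  # neg. sq. distance
            for emb, _ in support]
    attn = softmax(sims)
    probs = {}
    for a, (_, label) in zip(attn, support):
        probs[label] = probs.get(label, 0.0) + a  # weighted vote per label
    return probs

S = [((0.0, 0.0), "cat"), ((1.0, 1.0), "dog")]
p = matching_predict((0.1, 0.0), S)  # query close to the "cat" support point
```

No per-task gradient steps are needed at test time, which is the "fast, requires little training" advantage the notes mention.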
The relationship between sigmoid and softmax
softmax: $P(y=k) = \frac{\exp(\mathbf{w}_k^T \cdot \mathbf{x})}{\sum_{k}\exp(\mathbf{w}_k^T \cdot \mathbf{x})}$… Reposted 2018-01-29 22:56:28 · 1093 views · 0 comments
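The connection between the two can be checked numerically: for two classes, the softmax probability of class 1 equals sigmoid(z1 - z2), because a shared shift of the logits cancels in the ratio. A small sketch (logit values are illustrative):

```python
# Numerical check that 2-class softmax reduces to a sigmoid of the
# logit difference.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax2(z1, z2):
    """P(class 1) under a softmax over two logits."""
    e1, e2 = math.exp(z1), math.exp(z2)
    return e1 / (e1 + e2)

z1, z2 = 1.3, -0.4
p1 = softmax2(z1, z2)          # equals sigmoid(z1 - z2)
```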
Context Encoding for Semantic Segmentation [CVPR 2018]
FCN framework (good explanation). Global receptive fields: conv (nonlinearities) + downsampling; spatial resolution loss. Encoder: dilated conv. Pro: expands the receptive field. Con: isolates pixels from cont… Original 2018-09-04 22:36:43 · 974 views · 0 comments
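The dilated-conv trade-off noted above can be shown with a 1-D toy: dilation widens the receptive field using the same three weights, but the kernel taps the input sparsely, which is the "isolating pixels from context" (gridding) concern. Purely illustrative:

```python
# Toy 1-D dilated convolution: same kernel, larger effective receptive field.

def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution with the given dilation rate.

    Returns (outputs, receptive_field) for this single layer.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1   # receptive field of this layer
    outs = [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]
    return outs, span

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
y1, rf1 = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1)  # dense 3-tap kernel
y2, rf2 = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)  # same weights, wider RF
```

With dilation 2 the first output sums x[0], x[2], x[4] and skips the pixels in between, so neighboring positions never interact within one layer.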