CS231n
Lecture 12: Visualizing and Understanding
Visualize
- Filters of first layer
- Features of the last layer: t-SNE
- Visualizing Activations
- Occlusion Experiments
- Saliency Maps: backpropagate the class-score gradient to the input image
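As a minimal sketch of a saliency map, assume a toy linear classifier standing in for the CNN (so the gradient is analytic; a real saliency map would use autograd through the whole network). The map is the per-pixel magnitude of the class-score gradient, maxed over channels:

```python
import numpy as np

# Hypothetical stand-in for a CNN: a linear score s_c = sum(W[c] * x).
rng = np.random.default_rng(0)
C, H, W_ = 3, 4, 4
num_classes = 10
x = rng.normal(size=(C, H, W_))
W = rng.normal(size=(num_classes, C, H, W_))

c = 7  # class of interest
# For s_c = sum(W[c] * x), the gradient w.r.t. the image x is simply W[c].
grad = W[c]

# Saliency: max of |gradient| over the channel dimension, one value per pixel.
saliency = np.abs(grad).max(axis=0)
print(saliency.shape)  # (4, 4)
```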
Intermediate features via guided backprop:
grad *= (y > 0) * (grad > 0)   # at each ReLU with forward input y, keep only positive gradients
- Pick a single intermediate neuron
- Compute gradient of neuron value with respect to image pixels
Images come out nicer if you only backprop positive gradients through each ReLU
Find the part of an image that a neuron responds to
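The guided-backprop ReLU rule above can be sketched in isolation: on the backward pass, the gradient survives only where the forward input to the ReLU was positive (standard ReLU backward) and the incoming gradient is itself positive (the "guided" part):

```python
import numpy as np

def relu_forward(x):
    return np.maximum(x, 0)

def guided_relu_backward(grad_out, x):
    # Standard ReLU backward would be: grad_out * (x > 0).
    # Guided backprop additionally zeroes out negative incoming gradients.
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0, -0.5])       # forward inputs to the ReLU
grad_out = np.array([0.7, -0.3, 0.5, 0.9])  # gradients arriving from above
print(guided_relu_backward(grad_out, x))
```

Only the third element survives: it is the only position where both the forward input and the incoming gradient are positive.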
Both of the above operate on a given input image.

Visualizing CNN features: Gradient Ascent
Generate a synthetic image that maximally activates a neuron; this probes the model itself, so no input image is required
I^* = argmax_I f(I) + R(I)  ⇒  I^* = argmax_I S_c(I) − λ‖I‖²₂
- Initialize image to zeros
- Forward image to compute current scores
- Backprop to get gradient of neuron value with respect to image pixels
- Make a small update to the image
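The four steps above can be sketched with a toy differentiable score — here a hypothetical linear score S_c(I) = w·I in place of a real CNN — ascending on S_c(I) − λ‖I‖²₂:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64
w = rng.normal(size=D)   # hypothetical class-score weights
lam, lr = 0.1, 0.05

I = np.zeros(D)                              # 1) initialize image to zeros
for _ in range(2000):
    score = w @ I - lam * (I ** 2).sum()     # 2) forward: regularized score
    grad = w - 2 * lam * I                   # 3) gradient of score w.r.t. image
    I = I + lr * grad                        # 4) small ascent step on the image

# For this quadratic objective the optimum is I = w / (2*lam).
print(np.allclose(I, w / (2 * lam), atol=1e-4))  # True
```

With a real network, step 3 is a backward pass instead of an analytic gradient, but the loop is identical.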
Adversarial Examples (an input image is given)
x_adv = x + argmin_δ ‖δ‖  s.t.  Pred(x_adv) ≠ Pred(x)
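A minimal sketch of the idea on a toy linear classifier, with a fast-gradient-sign step standing in for the exact constrained minimization (the model and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
D, num_classes = 32, 5
W = rng.normal(size=(num_classes, D))
x = rng.normal(size=D)

pred = lambda v: int(np.argmax(W @ v))
c = pred(x)

# Grow a sign-of-gradient perturbation until the prediction flips.
other = int(np.argsort(W @ x)[-2])   # runner-up class
grad = W[other] - W[c]               # gradient of the score margin w.r.t. x
eps = 0.0
delta = np.zeros(D)
while pred(x + delta) == c:
    eps += 0.01
    delta = eps * np.sign(grad)

print(pred(x) != pred(x + delta))  # True: a small delta changed the prediction
```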
DeepDream (starts from a given input image)
- Forward: compute activations at chosen layer
- Set gradient of chosen layer equal to its activation
- Backward: Compute gradient on image
- Update image
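These steps can be sketched with a hypothetical linear "layer" a = A @ x: setting the layer's gradient equal to its own activation makes the image gradient Aᵀa, i.e. gradient ascent on ‖a‖²/2, which amplifies whatever the layer already responds to:

```python
import numpy as np

rng = np.random.default_rng(2)
D, K = 16, 8
A = rng.normal(size=(K, D))   # hypothetical chosen layer (linear for the sketch)
x = rng.normal(size=D)
lr = 1e-3

a0_norm = np.linalg.norm(A @ x)
for _ in range(100):
    a = A @ x              # forward: activations at the chosen layer
    grad_a = a             # set the layer's gradient equal to its activation
    grad_x = A.T @ grad_a  # backward: gradient on the image
    x = x + lr * grad_x    # update image

print(np.linalg.norm(A @ x) > a0_norm)  # True: activations were amplified
```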
- Feature Inversion: recover the original image from its feature vector
Given a CNN feature vector for an image, find a new image that:
- Matches the given feature vector
- “looks natural” (image prior regularization)
x^* = argmin_x ℓ(Φ(x), Φ₀) + λ R(x) = argmin_x ‖Φ(x) − Φ₀‖² + λ ∫ ‖∇x‖^β dr
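A sketch of this objective with a hypothetical linear feature map Φ(x) = A @ x and the total-variation regularizer taken with β = 2 (so both gradients are analytic); a real feature inversion would use a CNN's Φ and autograd:

```python
import numpy as np

rng = np.random.default_rng(3)
H, W_, K = 8, 8, 32
A = rng.normal(size=(K, H * W_))   # hypothetical feature map Phi(x) = A @ x
x_true = rng.normal(size=(H, W_))
phi0 = A @ x_true.ravel()          # target feature vector Phi_0
lam, lr = 0.1, 1e-3

def tv2(x):  # total variation term with beta = 2
    return (np.diff(x, axis=0) ** 2).sum() + (np.diff(x, axis=1) ** 2).sum()

def tv2_grad(x):
    g = np.zeros_like(x)
    dx, dy = np.diff(x, axis=0), np.diff(x, axis=1)
    g[:-1, :] -= 2 * dx; g[1:, :] += 2 * dx
    g[:, :-1] -= 2 * dy; g[:, 1:] += 2 * dy
    return g

def loss(x):
    r = A @ x.ravel() - phi0
    return r @ r + lam * tv2(x)

x = np.zeros((H, W_))
l0 = loss(x)
for _ in range(500):
    r = A @ x.ravel() - phi0
    grad = (2 * A.T @ r).reshape(H, W_) + lam * tv2_grad(x)
    x -= lr * grad   # descend on the feature-matching + smoothness objective

print(loss(x) < l0)  # True: objective decreased
```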
- Texture Synthesis: Gram matrix ⇒ Neural Style Transfer
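A minimal sketch of the Gram matrix used for texture/style: reshape a C×H×W feature map to C×(HW) and take the product with its transpose, averaging spatial position away so that only channel co-occurrence statistics remain:

```python
import numpy as np

def gram_matrix(feat):
    """feat: (C, H, W) feature map -> (C, C) Gram matrix."""
    C, H, W = feat.shape
    F = feat.reshape(C, H * W)
    return F @ F.T / (H * W)   # normalize by the number of spatial positions

feat = np.random.default_rng(4).normal(size=(16, 7, 7))
G = gram_matrix(feat)
print(G.shape, np.allclose(G, G.T))  # (16, 16) True
```

Style transfer then matches the Gram matrices of the generated image to those of the style image at several layers.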
The original neural style method is too slow: it iteratively optimizes a fresh image for every content/style pair
⇒ Fast Style Transfer: train a dedicated feed-forward network for one particular style (train the network on the same objective as the original neural style method, or fuse the image at multiple scales to directly generate the stylized output)
⇒ Multiple styles with a single model: learn a scale and shift for each style; after selecting one set of scale/shift parameters, the same network generates images in the corresponding style
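The per-style scale-and-shift idea (conditional instance normalization) can be sketched as: normalize each channel of a feature map over its spatial positions, then apply a γ/β pair looked up for the chosen style. All names and shapes here are illustrative:

```python
import numpy as np

def cond_instance_norm(feat, gamma, beta, eps=1e-5):
    """feat: (C, H, W); gamma, beta: (C,) parameters for one chosen style."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    var = feat.var(axis=(1, 2), keepdims=True)
    normed = (feat - mu) / np.sqrt(var + eps)
    return gamma[:, None, None] * normed + beta[:, None, None]

rng = np.random.default_rng(5)
feat = rng.normal(size=(8, 5, 5))
# One (gamma, beta) pair per style; the style names are made up.
styles = {name: (rng.normal(size=8), rng.normal(size=8))
          for name in ["starry_night", "the_scream"]}

gamma, beta = styles["starry_night"]   # selecting a style = selecting its params
out = cond_instance_norm(feat, gamma, beta)
print(out.shape)  # (8, 5, 5)
```

All other weights are shared across styles, which is why a single network can render many styles.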