Reference for learning FCN: Semantic Segmentation, http://blog.youkuaiyun.com/yj3254/article/details/53033598.
This post builds on that one by adding a step that indexes the result image into a colormap, so the final segmentation is shown in color.
The FCN paper: Fully Convolutional Networks for Semantic Segmentation
http://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Long_Fully_Convolutional_Networks_2015_CVPR_paper.html
1. Segmenting an arbitrary image with a trained FCN network
The authors have open-sourced the code on GitHub: Fully Convolutional Networks. First, clone the project to a local directory:
git clone https://github.com/shelhamer/fcn.berkeleyvision.org.git
The project layout is clear: to train your own model you only need to adjust a few file-path settings. Here we use an already trained model to test an image of our own:
Download the voc-fcn32s, voc-fcn16s, and voc-fcn8s caffemodels (via the caffemodel-url provided in each model directory). fcn-32s and fcn-16s ship without a deploy.prototxt, but one can be written from the corresponding train.prototxt with minor modifications.
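A minimal sketch of that modification (mirroring the pattern of the voc-fcn8s deploy.prototxt, not copied from the repo): replace the Python data layers at the top of train.prototxt with an Input layer for the data blob and drop the final loss layer. The spatial dims below are only placeholders, since infer.py reshapes the data blob to the actual image size anyway:
layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    # placeholder dims (N x C x H x W); infer.py reshapes to the real image size
    shape { dim: 1 dim: 3 dim: 500 dim: 500 }
  }
}
# ... keep the convolution/deconvolution layers as in train.prototxt,
# minus the SoftmaxWithLoss layer at the end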
With a slight modification of infer.py, we can then test our own images:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import caffe
# load image, switch to BGR, subtract mean, and make dims C x H x W for Caffe
im = Image.open('data/pascal/VOCdevkit/VOC2012/JPEGImages/2007_000129.jpg')
in_ = np.array(im, dtype=np.float32)
in_ = in_[:,:,::-1]
in_ -= np.array((104.00698793,116.66876762,122.67891434))
in_ = in_.transpose((2,0,1))
# load net
net = caffe.Net('voc-fcn8s/deploy.prototxt', 'voc-fcn8s/fcn8s-heavy-pascal.caffemodel', caffe.TEST)
# shape for input (data blob is N x C x H x W), set data
net.blobs['data'].reshape(1, *in_.shape)
net.blobs['data'].data[...] = in_
# run net and take argmax for prediction
net.forward()
out = net.blobs['score'].data[0].argmax(axis=0)
plt.imshow(out, cmap='gray')
plt.axis('off')
plt.savefig('test.png')
plt.show()
# map the per-pixel class indices to the PASCAL VOC colormap and save a color result
RGBout = toRGBarray(out, 21)  # colormap helper, sketched below
im = Image.fromarray(RGBout, mode='RGB')
im.save('dog_RGBout.png')
im.show()
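The toRGBarray helper is not part of the original infer.py; it is the colormap step added in this post, and the repo does not provide it. One possible minimal implementation, assuming the function name and signature used above and the standard PASCAL VOC label colormap:
import numpy as np

def voc_colormap(n_classes=21):
    # build the standard PASCAL VOC colormap: one RGB triple per class index,
    # generated bit by bit as in the official VOCdevkit colormap routine
    cmap = np.zeros((n_classes, 3), dtype=np.uint8)
    for i in range(n_classes):
        r = g = b = 0
        c = i
        for j in range(8):
            r |= ((c >> 0) & 1) << (7 - j)
            g |= ((c >> 1) & 1) << (7 - j)
            b |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
        cmap[i] = (r, g, b)
    return cmap

def toRGBarray(label_map, n_classes=21):
    # index the H x W array of class labels into the colormap,
    # giving an H x W x 3 uint8 image ready for Image.fromarray
    return voc_colormap(n_classes)[label_map]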
Next, you only need to change the image path and the model paths in the script to test your own images.
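For instance, a hypothetical swap (the image filename and model choice below are placeholders; substitute your own image and whichever model you downloaded):
# point the script at your own image and at one of the downloaded models
im = Image.open('data/my_test.jpg')
net = caffe.Net('voc-fcn32s/deploy.prototxt', 'voc-fcn32s/fcn32s-heavy-pascal.caffemodel', caffe.TEST)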