Image Classification
Introduction
This section introduces the image classification problem: assigning to an input image one label from a fixed set of categories.
The running example is classifying an image of a cat:
In general, a model that classifies images has to cope with several challenges: viewpoint variation, scale variation, deformation, occlusion, illumination conditions, background clutter, and intra-class variation.
The solution here is not an explicit algorithm the way sorting is; instead, the computer is trained to recognize objects from a large training dataset. This is called the data-driven approach.
For example, a dataset might look like this:
The image classification task takes an image, represented as an array of pixel values, and assigns it a label. The whole pipeline can be summarized as follows: input (a training set of N labeled images from K classes), learning (use the training set to train a classifier), and evaluation (predict labels for new images and compare the predictions against the ground truth).
Nearest Neighbor Classifier
This part introduces the Nearest Neighbor Classifier. Although this classifier has nothing to do with Convolutional Neural Networks and is rarely used in practice, it illustrates the basic approach to the image classification problem.
First, an image classification dataset: CIFAR-10.
http://www.cs.toronto.edu/~kriz/cifar.html
This dataset consists of 60,000 32x32 color images in 10 classes, split into a training set of 50,000 images and a test set of 10,000 images.
See the figure below:
Now suppose we have these 50,000 training images and want to label the remaining 10,000. The Nearest Neighbor Classifier does the following:
take an image to be labeled, compare it with each of the 50,000 training images one by one, and assign it the label of the training image at the smallest "distance" from it.
The "distance" is defined as follows:
In this example each image is a 32x32x3 array. Writing two images as vectors $I_1, I_2$, one reasonable way to measure their "distance" is the L1 distance:

$d_1(I_1, I_2) = \sum_p \left| I_1^p - I_2^p \right|$

where the sum runs over all pixels p.
A worked example follows.
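Since the original figure is not reproduced here, a toy numeric version of the same computation (pixel values made up for illustration):

import numpy as np

# two 4x4 single-channel "images" (made-up values for illustration)
I1 = np.array([[56, 32, 10, 18],
               [90, 23, 128, 133],
               [24, 26, 178, 200],
               [2, 0, 255, 220]])
I2 = np.array([[10, 20, 24, 17],
               [8, 10, 89, 100],
               [12, 16, 178, 170],
               [4, 32, 233, 112]])

# L1 distance: elementwise absolute difference, summed over all pixels
# (cast real uint8 image arrays to a wider type before subtracting,
# otherwise the subtraction wraps around)
print(np.sum(np.abs(I1 - I2)))  # -> 456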
Next, a code implementation of this idea:
First load the data as four arrays: the training data/labels and the test data/labels.
Here Xtr has shape 50000 x 32 x 32 x 3, and Ytr is a 1-dimensional array of the 50,000 training labels.
Xtr, Ytr, Xte, Yte = load_CIFAR10('data/cifar10/') # a magic function we provide
# flatten out all images to be one-dimensional
Xtr_rows = Xtr.reshape(Xtr.shape[0], 32 * 32 * 3) # Xtr_rows becomes 50000 x 3072
Xte_rows = Xte.reshape(Xte.shape[0], 32 * 32 * 3) # Xte_rows becomes 10000 x 3072
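load_CIFAR10 is treated as a given above. A minimal sketch of what such a loader could look like, assuming the python version of CIFAR-10 from the URL above has been extracted under the given directory:

import pickle
import numpy as np

def load_CIFAR10(root):
  # sketch only: assumes the extracted python-version batch files live in root
  def load_batch(filename):
    with open(filename, 'rb') as f:
      d = pickle.load(f, encoding='bytes')  # batch files are pickled dicts
    # each row of d[b'data'] is 3072 bytes (R, G, B planes); reshape to HxWxC
    X = d[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1).astype('float')
    y = np.array(d[b'labels'])
    return X, y
  xs, ys = [], []
  for i in range(1, 6):  # five training batches of 10,000 images each
    X, y = load_batch('%s/data_batch_%d' % (root, i))
    xs.append(X)
    ys.append(y)
  Xtr, Ytr = np.concatenate(xs), np.concatenate(ys)
  Xte, Yte = load_batch('%s/test_batch' % root)
  return Xtr, Ytr, Xte, Yte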
Next, the skeleton of the overall approach:
nn = NearestNeighbor() # create a Nearest Neighbor classifier class
nn.train(Xtr_rows, Ytr) # train the classifier on the training images and labels
Yte_predict = nn.predict(Xte_rows) # predict labels on the test images
# and now print the classification accuracy: the fraction of test
# examples whose predicted label matches the true label
print('accuracy: %f' % (np.mean(Yte_predict == Yte),))
The class definition:
import numpy as np

class NearestNeighbor(object):
  def __init__(self):
    pass

  def train(self, X, y):
    """ X is N x D where each row is an example. y is a 1-dimensional array of size N """
    # the nearest neighbor classifier simply remembers all the training data
    self.Xtr = X
    self.ytr = y

  def predict(self, X):
    """ X is N x D where each row is an example we wish to predict the label for """
    num_test = X.shape[0]
    # let's make sure that the output type matches the input type
    Ypred = np.zeros(num_test, dtype=self.ytr.dtype)

    # loop over all test rows
    for i in range(num_test):
      # find the nearest training image to the i'th test image
      # using the L1 distance (sum of absolute value differences);
      # broadcasting compares row i against every training row at once
      distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
      min_index = np.argmin(distances)  # get the index with smallest distance
      Ypred[i] = self.ytr[min_index]    # predict the label of the nearest example
    return Ypred
The L2 distance is defined as:

$d_2(I_1, I_2) = \sqrt{\sum_p \left( I_1^p - I_2^p \right)^2}$

The corresponding line of code becomes:
distances = np.sqrt(np.sum(np.square(self.Xtr - X[i,:]), axis = 1))
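Note that np.sqrt can be dropped in a practical nearest neighbor application: the square root is monotonic, so it rescales the distances but preserves their ordering, and the nearest neighbor does not change. The per-test-image loop can also be avoided entirely. Below is a sketch of a fully vectorized alternative (the helper name is mine), computing all test-train squared L2 distances at once via the expansion ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2:

import numpy as np

def all_pairs_l2_sq(X, Xtr):
  # squared L2 distances between every row of X (num_test x D) and every
  # row of Xtr (num_train x D); the result is num_test x num_train.
  # note the memory cost: a 10000 x 50000 float64 matrix is about 4 GB.
  return (np.sum(X ** 2, axis=1, keepdims=True)
          - 2.0 * X.dot(Xtr.T)
          + np.sum(Xtr ** 2, axis=1))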
K-Nearest Neighbor Classifier
The idea is simple: instead of the single closest training image, find the k closest training images and let them vote on the label of the test image.
An example is shown in the figure below:
As the figure shows, the kNN classifier produces smoother decision boundaries. In practice you almost always want kNN rather than plain NN, but choosing k is often difficult. Below, a validation dataset is used to tune it:
# assume we have Xtr_rows, Ytr, Xte_rows, Yte as before
# recall Xtr_rows is 50,000 x 3072 matrix
Xval_rows = Xtr_rows[:1000, :] # take first 1000 for validation
Yval = Ytr[:1000]
Xtr_rows = Xtr_rows[1000:, :] # keep last 49,000 for train
Ytr = Ytr[1000:]
# find hyperparameters that work best on the validation set
validation_accuracies = []
for k in [1, 3, 5, 10, 20, 50, 100]:
  # use a particular value of k and evaluate on validation data
  nn = NearestNeighbor()
  nn.train(Xtr_rows, Ytr)
  # here we assume a modified NearestNeighbor class that can take a k as input
  Yval_predict = nn.predict(Xval_rows, k=k)
  acc = np.mean(Yval_predict == Yval)
  print('accuracy: %f' % (acc,))

  # keep track of what works on the validation set
  validation_accuracies.append((k, acc))
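The loop above assumes predict accepts a k argument, which the NearestNeighbor class defined earlier does not. A minimal sketch of that modification (majority vote among the k nearest training examples; the subclass name is mine):

import numpy as np

class KNearestNeighbor(NearestNeighbor):  # illustrative name; train() is inherited
  def predict(self, X, k=1):
    """ Majority vote among the k nearest training examples (L1 distance). """
    num_test = X.shape[0]
    Ypred = np.zeros(num_test, dtype=self.ytr.dtype)
    for i in range(num_test):
      distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
      nearest = np.argsort(distances)[:k]  # indices of the k closest images
      votes = self.ytr[nearest]            # their labels (small non-negative ints)
      # most common label wins; np.bincount breaks ties toward the smallest label
      Ypred[i] = np.bincount(votes).argmax()
    return Ypred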
Besides a single validation split, n-fold cross-validation can also be used to choose k; an example follows:
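A sketch of 5-fold cross-validation, reusing the Xtr_rows/Ytr arrays and the k-aware classifier sketched above: each k is evaluated five times, each time holding out a different fold for validation, and the five accuracies are averaged.

num_folds = 5
X_folds = np.array_split(Xtr_rows, num_folds)
y_folds = np.array_split(Ytr, num_folds)

for k in [1, 3, 5, 10, 20, 50, 100]:
  fold_accuracies = []
  for fold in range(num_folds):
    # hold out one fold for validation, train on the remaining folds
    X_val, y_val = X_folds[fold], y_folds[fold]
    X_train = np.concatenate(X_folds[:fold] + X_folds[fold + 1:])
    y_train = np.concatenate(y_folds[:fold] + y_folds[fold + 1:])
    nn = KNearestNeighbor()
    nn.train(X_train, y_train)
    fold_accuracies.append(np.mean(nn.predict(X_val, k=k) == y_val))
  print('k = %d, mean accuracy = %f' % (k, np.mean(fold_accuracies)))

Cross-validation gives a more reliable estimate than a single split, but it is n times as expensive, which matters here because every prediction already scans the entire training split.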
The notes then discuss the pros and cons of the Nearest Neighbor classifier: it takes essentially no time to train, since it only stores the training data, but prediction has a high time complexity, as each test image must be compared against the entire training set. Deep neural networks are the other way around: expensive to train, cheap to evaluate.
Summary: Applying kNN in practice
A summary of the workflow for applying kNN in practice:
1. Preprocess the data: normalize the features (here, the pixels) to zero mean and unit variance.
2. If the data is very high-dimensional, consider a dimensionality reduction technique such as PCA or random projections.
3. Split the training data into train/validation splits (or use cross-validation if the training set is small).
4. Train and evaluate the kNN classifier over many choices of k and over different distance types (L1, L2).
5. If the classifier is too slow, consider an Approximate Nearest Neighbor library (e.g. FLANN) to trade accuracy for speed.
6. Take note of the hyperparameters that gave the best validation results, and evaluate on the test set only once, at the very end.
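A minimal sketch of step 1, reusing the Xtr_rows/Xte_rows arrays from above (the small epsilon guard against constant pixels is my own addition):

# normalize every feature (pixel) using statistics from the training set only
mean = Xtr_rows.mean(axis=0)
std = Xtr_rows.std(axis=0) + 1e-8  # epsilon avoids division by zero
Xtr_rows = (Xtr_rows - mean) / std
Xte_rows = (Xte_rows - mean) / std  # reuse the training-set statistics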