[CS231n@Stanford] Assignment1-Q1 (python) KNN Implementation

This post walks through the implementation details of the k-nearest-neighbor algorithm from Stanford's CS231n course, including a comparison of different ways to compute the distance matrix and the use of cross-validation to select the best k.


I have recently been working through Stanford's deep learning course CS231n: Convolutional Neural Networks for Visual Recognition.

Course videos and notes: https://zhuanlan.zhihu.com/p/21930884

Assignment page: http://cs231n.github.io/assignment1/
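
The file below implements the kNN classifier. train simply memorizes the training set; predict builds a (num_test, num_train) matrix of L2 distances using one of three implementations (two loops, one loop, or fully vectorized), and predict_labels then takes a majority vote among the k nearest training labels. The fully vectorized version rests on expanding the squared distance,

$$\|x_i - t_j\|^2 = \|x_i\|^2 + \|t_j\|^2 - 2\,x_i \cdot t_j,$$

which turns the whole distance matrix into one matrix product plus two broadcast sums.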

k_nearest_neighbor.py

import numpy as np

class KNearestNeighbor(object):
  """ a kNN classifiers with L2 distance """

  def __init__(self):
    pass

  def train(self, X, y):
    """
    Train the classifier. For k-nearest neighbors this is just
    memorizing the training data.

    Inputs:
    - X: A numpy array of shape (num_train, D) containing the training data
      consisting of num_train samples each of dimension D.
    - y: A numpy array of shape (num_train,) containing the training labels, where
         y[i] is the label for X[i].
    """
    self.X_train = X
    self.y_train = y
    
  def predict(self, X, k=1, num_loops=0):
    """
    Predict labels for test data using this classifier.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data consisting
         of num_test samples each of dimension D.
    - k: The number of nearest neighbors that vote for the predicted labels.
    - num_loops: Determines which implementation to use to compute distances
      between training points and testing points.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].  
    """
    if num_loops == 0:
      dists = self.compute_distances_no_loops(X)
    elif num_loops == 1:
      dists = self.compute_distances_one_loop(X)
    elif num_loops == 2:
      dists = self.compute_distances_two_loops(X)
    else:
      raise ValueError('Invalid value %d for num_loops' % num_loops)

    return self.predict_labels(dists, k=k)

  def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the 
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
      for j in range(num_train):
        #####################################################################
        # TODO:                                                             #
        # Compute the l2 distance between the ith test point and the jth    #
        # training point, and store the result in dists[i, j]. You should   #
        # not use a loop over dimension.                                    #
        #####################################################################
        # Elementwise difference, square, sum over features, then sqrt.
        dists[i, j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
        #####################################################################
        #                       END OF YOUR CODE                            #
        #####################################################################
    return dists

  def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
      #######################################################################
      # TODO:                                                               #
      # Compute the l2 distance between the ith test point and all training #
      # points, and store the result in dists[i, :].                        #
      #######################################################################
      # Broadcasting subtracts the ith test point from every training row
      # at once; axis=1 then sums over the feature dimension.
      dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
      #######################################################################
      #                         END OF YOUR CODE                            #
      #######################################################################
    return dists

  def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train)) 
    #########################################################################
    # TODO:                                                                 #
    # Compute the l2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    #                                                                       #
    # You should implement this function using only basic array operations; #
    # in particular you should not use functions from scipy.                #
    #                                                                       #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################

    # Expand ||x - t||^2 = ||x||^2 + ||t||^2 - 2 x.t: the cross term is a
    # single matrix product, and the squared norms broadcast along rows/cols.
    dists = -2 * np.dot(X, self.X_train.T)             # (num_test, num_train)
    sq1 = np.sum(np.square(X), axis=1, keepdims=True)  # (num_test, 1)
    sq2 = np.sum(np.square(self.X_train), axis=1)      # (num_train,)
    dists = np.sqrt(dists + sq1 + sq2)
    #########################################################################
    #                         END OF YOUR CODE                              #
    #########################################################################
    return dists

  def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance between the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].  
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
      # A list of length k storing the labels of the k nearest neighbors to
      # the ith test point.
      closest_y = []
      #########################################################################
      # TODO:                                                                 #
      # Use the distance matrix to find the k nearest neighbors of the ith    #
      # testing point, and use self.y_train to find the labels of these       #
      # neighbors. Store these labels in closest_y.                           #
      # Hint: Look up the function numpy.argsort.                             #
      #########################################################################
      # argsort returns indices ordered by increasing distance; the first k
      # of them index the k nearest training points.
      closest_y = self.y_train[np.argsort(dists[i, :])[:k]]
      #########################################################################
      # TODO:                                                                 #
      # Now that you have found the labels of the k nearest neighbors, you    #
      # need to find the most common label in the list closest_y of labels.   #
      # Store this label in y_pred[i]. Break ties by choosing the smaller     #
      # label.                                                                #
      #########################################################################
      # bincount tallies votes per label; argmax returns the first (i.e.
      # smallest) label among ties, as the assignment requires.
      y_pred[i] = np.argmax(np.bincount(closest_y))
      #########################################################################
      #                           END OF YOUR CODE                            #
      #########################################################################

    return y_pred
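
Before moving to CIFAR-10, it is worth sanity-checking that the three distance implementations agree. Below is a minimal sketch on random data (the module name k_nearest_neighbor and the toy shapes are my own choices for illustration, not part of the assignment):

import numpy as np
from k_nearest_neighbor import KNearestNeighbor

# Tiny random problem: 5 test points, 20 training points, 3 classes.
np.random.seed(0)
X_tr = np.random.randn(20, 10)
y_tr = np.random.randint(0, 3, size=20)
X_te = np.random.randn(5, 10)

knn = KNearestNeighbor()
knn.train(X_tr, y_tr)

d2 = knn.compute_distances_two_loops(X_te)
d1 = knn.compute_distances_one_loop(X_te)
d0 = knn.compute_distances_no_loops(X_te)

# All three should match up to floating-point noise.
print(np.linalg.norm(d2 - d1, ord='fro'))  # ~0
print(np.linalg.norm(d2 - d0, ord='fro'))  # ~0
print(knn.predict(X_te, k=3))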

Cross-validation (run in the assignment notebook, where X_train, y_train, and classifier are already defined by earlier cells):

num_folds = 5  
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]  
X_train_folds = []  
y_train_folds = []  
################################################################################  
# TODO:                                                                        #  
# Split up the training data into folds. After splitting, X_train_folds and    #  
# y_train_folds should each be lists of length num_folds, where                #  
# y_train_folds[i] is the label vector for the points in X_train_folds[i].     #  
# Hint: Look up the numpy array_split function.                                #  
################################################################################  
  
# array_split (unlike split) also handles sizes not divisible by num_folds.
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
  
################################################################################  
#                                 END OF YOUR CODE                             #  
################################################################################  
  
# A dictionary holding the accuracies for different values of k that we find  
# when running cross-validation. After running cross-validation,  
# k_to_accuracies[k] should be a list of length num_folds giving the different  
# accuracy values that we found when using that value of k.  
k_to_accuracies = {}  
  
  
################################################################################  
# TODO:                                                                        #  
# Perform k-fold cross validation to find the best value of k. For each        #  
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,   #  
# where in each case you use all but one of the folds as training data and the #  
# last fold as a validation set. Store the accuracies for all fold and all     #  
# values of k in the k_to_accuracies dictionary.                               #  
################################################################################  
for k in k_choices:
    k_to_accuracies[k] = np.zeros(num_folds)
    for i in range(num_folds):
        # All folds except the ith form the training set...
        X_tr = np.concatenate(X_train_folds[:i] + X_train_folds[i+1:])
        y_tr = np.concatenate(y_train_folds[:i] + y_train_folds[i+1:])
        # ...and the ith fold is the validation set.
        X_val = X_train_folds[i]
        y_val = y_train_folds[i]
        classifier.train(X_tr, y_tr)
        y_val_pred = classifier.predict(X_val, k=k)
        num_correct = np.sum(y_val_pred == y_val)
        # Divide by the fold size, not num_test (the test-set size).
        k_to_accuracies[k][i] = float(num_correct) / y_val.shape[0]
################################################################################  
#                                 END OF YOUR CODE                             #  
################################################################################  
  
# Print out the computed accuracies  
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))


# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
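
With the mean accuracies computed above, the best k can also be read off programmatically. A small sketch reusing accuracies_mean and k_choices from above (sorted(k_to_accuracies.items()) iterates keys in ascending order, which matches the already-sorted k_choices, so the indices line up):

best_k = k_choices[np.argmax(accuracies_mean)]
print('Best k from cross-validation: %d' % best_k)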


Here I used num_training = 10000 and num_test = 1000.
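
The last step in the notebook is to retrain on the full training set with the chosen k and evaluate on the test set. A sketch along those lines (X_test, y_test, and num_test come from the earlier data-loading cells of the notebook; best_k is from the snippet above):

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
num_correct = np.sum(y_test_pred == y_test)
print('Got %d / %d correct => accuracy: %f'
      % (num_correct, num_test, float(num_correct) / num_test))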

