Testing Directly with a Pretrained VGG16


Experiment setup:

1. Download vgg16.npy, the VGG16 weights pretrained on the ImageNet dataset. Link: https://pan.baidu.com/s/1gg9jLw3  Password: umce

2. Download imagenet_classes.py (the 1000 ImageNet class names; the index returned by tf.argmax corresponds to the line number of the class in imagenet_classes). Download from: http://www.cs.toronto.edu/~frossard/post/vgg16/ (a quick sanity check for both downloads is sketched after this list).

3. Create vgg16_v1.py.

4. Put a cat image (or any other image), vgg16_v1.py, imagenet_classes.py, and vgg16.npy in the same folder.

5. Run the script.
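Before writing vgg16_v1.py, it can help to confirm that the two downloads are usable. The snippet below is an optional sanity check, not part of the original post; it assumes vgg16.npy and imagenet_classes.py are already in the working directory, loads the weight file, and prints its layer keys and the size of the class list:

import numpy as np
import imagenet_classes

# vgg16.npy is a pickled dict: {layer_name: [weights, biases]}
data_dict = np.load('vgg16.npy', encoding='latin1', allow_pickle=True).item()
print(sorted(data_dict.keys()))            # expected: conv1_1 ... conv5_3, fc6, fc7, fc8 (16 layers)
print(len(imagenet_classes.class_names))   # expected: 1000 ImageNet class names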

vgg16_v1.py is as follows:

import tensorflow as tf
import numpy as np
import cv2
import imagenet_classes 
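
# Note: this script uses the TensorFlow 1.x graph API (tf.placeholder, tf.Session).
# If only TensorFlow 2 is available, it would need the compatibility layer, e.g.
# `import tensorflow.compat.v1 as tf` followed by `tf.disable_v2_behavior()`.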


class vgg16:
    def __init__(self, imgs, weights=None, sess=None):
        self.imgs = imgs
        self.convlayers()
        self.fc_layers()
        if weights is not None and sess is not None:
            self.load_weights(weights, sess)

    def convlayers(self):
        self.parameters = []
        # zero-mean input
        with tf.name_scope('preprocess') as scope:
            mean = tf.constant([123.68, 116.779, 103.939], dtype=tf.float32, shape=[1, 1, 1, 3], name='img_mean')
            images = self.imgs-mean

        # conv1_1
        with tf.name_scope('conv1_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),trainable=True, name='biases')
            conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv1_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv1_2
        with tf.name_scope('conv1_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),trainable=True, name='biases')                                         
            conv = tf.nn.conv2d(self.conv1_1, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv1_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool1
        self.pool1 = tf.nn.max_pool(self.conv1_2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],padding='SAME',name='pool1')
                              
        # conv2_1
        with tf.name_scope('conv2_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),trainable=True, name='biases')                                         
            conv = tf.nn.conv2d(self.pool1, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv2_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv2_2
        with tf.name_scope('conv2_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),trainable=True, name='biases')                                        
            conv = tf.nn.conv2d(self.conv2_1, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv2_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]
        # pool2
        self.pool2 = tf.nn.max_pool(self.conv2_2,ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],padding='SAME', name='pool2')
                               
        # conv3_1
        with tf.name_scope('conv3_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),trainable=True, name='biases')                                         
            conv = tf.nn.conv2d(self.pool2, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv3_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv3_2
        with tf.name_scope('conv3_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')                                          
            conv = tf.nn.conv2d(self.conv3_1, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv3_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv3_3
        with tf.name_scope('conv3_3') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),trainable=True, name='biases')                                         
            conv = tf.nn.conv2d(self.conv3_2, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv3_3 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool3
        self.pool3 = tf.nn.max_pool(self.conv3_3,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME',name='pool3')
                               
        # conv4_1
        with tf.name_scope('conv4_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),trainable=True, name='biases')                                          
            conv = tf.nn.conv2d(self.pool3, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv4_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv4_2
        with tf.name_scope('conv4_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),trainable=True, name='biases')                                      
            conv = tf.nn.conv2d(self.conv4_1, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv4_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv4_3
        with tf.name_scope('conv4_3') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')                                         
            conv = tf.nn.conv2d(self.conv4_2, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv4_3 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool4
        self.pool4 = tf.nn.max_pool(self.conv4_3,ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],padding='SAME',name='pool4')
                               
        # conv5_1
        with tf.name_scope('conv5_1') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')                                         
            conv = tf.nn.conv2d(self.pool4, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv5_1 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv5_2
        with tf.name_scope('conv5_2') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,stddev=1e-1), name='weights')
            conv = tf.nn.conv2d(self.conv5_1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),trainable=True, name='biases')
            out = tf.nn.bias_add(conv, biases)
            self.conv5_2 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # conv5_3
        with tf.name_scope('conv5_3') as scope:
            kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
            biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),trainable=True, name='biases')                                      
            conv = tf.nn.conv2d(self.conv5_2, kernel, [1, 1, 1, 1], padding='SAME')
            out = tf.nn.bias_add(conv, biases)
            self.conv5_3 = tf.nn.relu(out, name=scope)
            self.parameters += [kernel, biases]

        # pool5
        self.pool5 = tf.nn.max_pool(self.conv5_3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5')
                               
    def fc_layers(self):
        # fc1
        with tf.name_scope('fc6') as scope:
            shape = int(np.prod(self.pool5.get_shape()[1:]))
            fc1w = tf.Variable(tf.truncated_normal([shape, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
            fc1b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),trainable=True, name='biases')
            pool5_flat = tf.reshape(self.pool5, [-1, shape])
            fc1l = tf.nn.bias_add(tf.matmul(pool5_flat, fc1w), fc1b)
            self.fc1 = tf.nn.relu(fc1l)
            self.parameters += [fc1w, fc1b]

        # fc2
        with tf.name_scope('fc7') as scope:
            fc2w = tf.Variable(tf.truncated_normal([4096, 4096],dtype=tf.float32,stddev=1e-1), name='weights')
            fc2b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),trainable=True, name='biases')
            fc2l = tf.nn.bias_add(tf.matmul(self.fc1, fc2w), fc2b)
            self.fc2 = tf.nn.relu(fc2l)
            self.parameters += [fc2w, fc2b]

        # fc3
        with tf.name_scope('fc8') as scope:
            fc3w = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32,stddev=1e-1), name='weights')
            fc3b = tf.Variable(tf.constant(1.0, shape=[1000], dtype=tf.float32),trainable=True, name='biases')
            self.fc3l = tf.nn.bias_add(tf.matmul(self.fc2, fc3w), fc3b)
            self.parameters += [fc3w, fc3b]

    def load_weights(self, weight_file, sess):
        # vgg16.npy stores a pickled dict {layer_name: [weights, biases]};
        # allow_pickle=True is required by newer NumPy versions to load it.
        data_dict = np.load(weight_file, encoding='latin1', allow_pickle=True).item()
        keys = sorted(data_dict.keys())
        # print(len(keys), len(self.parameters))
        for i, key in enumerate(keys):
            # print(i, key, 'w=', data_dict[key][0].shape, 'b=', data_dict[key][1].shape)
            # parameters are appended as [kernel, biases] per layer, in the same
            # sorted order as the keys, so 2*i / 2*i+1 pick the matching pair.
            sess.run(self.parameters[2*i].assign(data_dict[key][0]))
            sess.run(self.parameters[2*i+1].assign(data_dict[key][1]))
           
    def predict(self):
        return tf.argmax(tf.nn.softmax(self.fc3l),1)        

if __name__ == '__main__':
    weight_path = 'E:/deepLearningModel/vgg16.npy'  # I keep vgg16.npy in E:/deepLearningModel/
    with tf.Session() as sess:
        imgs = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16(imgs, weight_path, sess)  # builds the graph and loads the pretrained weights
        # No separate variable initializer is needed: load_weights() assigns every variable.
        preData = cv2.imread('cat.1.jpg')
        img1 = cv2.resize(preData, (224, 224))
        prob = sess.run(vgg.predict(), feed_dict={vgg.imgs: [img1]})
        print(imagenet_classes.class_names[prob[0]])
    
            
          
        
        
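predict() returns only the top-1 index from tf.argmax. If the top-5 classes are also of interest, a minimal sketch (not in the original post, assuming the same vgg object, session, and img1 as above) can run the softmax over vgg.fc3l and sort its output:

softmax = tf.nn.softmax(vgg.fc3l)
probs = sess.run(softmax, feed_dict={vgg.imgs: [img1]})[0]
top5 = np.argsort(probs)[::-1][:5]  # indices of the 5 largest probabilities
for idx in top5:
    print(imagenet_classes.class_names[idx], probs[idx])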

Result

The script prints the ImageNet class name predicted for cat.1.jpg.

### Testing the CIFAR dataset with a pretrained VGG model

To evaluate a pretrained VGG model on the CIFAR-10 dataset, proceed as follows:

#### Preparation

Make sure the required Python libraries, such as TensorFlow and Keras, are installed. They can be installed with pip.

```bash
pip install tensorflow keras numpy matplotlib
```

#### Load and prepare the dataset

CIFAR-10 is a standard dataset that can be loaded directly from Keras. Note that the original CIFAR-10 images are 32×32 pixels, while most pretrained models expect larger inputs (typically 224×224 or more), so the images need to be resized to match the pretrained model[^2].

```python
from keras.datasets import cifar10
import numpy as np
from PIL import Image
from keras.applications.vgg16 import preprocess_input

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

def resize_images(images):
    resized_images = []
    for img in images:
        pil_image = Image.fromarray(img.astype('uint8'))
        resized_img = pil_image.resize((224, 224))
        resized_images.append(np.array(resized_img))
    return np.array(resized_images)

x_test_resized = resize_images(x_test)
x_test_preprocessed = preprocess_input(x_test_resized)
```

#### Adjust the class labels

CIFAR-10 labels are numbered 0 to 9, but in some cases they need to be converted to one-hot encoding for later processing.

```python
from keras.utils import to_categorical

y_test_one_hot = to_categorical(y_test, num_classes=10)
```

#### Import the pretrained model

Keras provides a simple interface for importing a VGG16 model initialized with ImageNet weights. Here the top fully connected layers are removed and all parameters of the convolutional base are frozen, so they are not updated by back-propagation.

```python
from keras.applications import VGG16

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False
```

#### Build the top classifier

Attach a new top classifier on top of the existing feature extractor. This typically means adding a global average pooling layer followed by a dense output layer whose number of units equals the number of target classes.

```python
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
```

#### Test the model

After compiling the model, part of the test set can be used to check its performance. Choose an appropriate loss function and optimizer.

```python
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

scores = model.evaluate(x_test_preprocessed, y_test_one_hot, verbose=1)
print(f'Test accuracy: {scores[1]*100:.2f}%')
```

With the steps above, a pretrained VGG model can be applied to CIFAR-10. Note that because of the differences between the two settings, such as image resolution and the gap between source and target domains, the actual accuracy may suffer. Still, this approach offers a quick and practical route to transfer learning[^4].