Reading data in Caffe

This post covers two ways to feed multi-label image data into Caffe: the HDF5 data format and a Python layer. HDF5 makes it easy to pass multiple labels into a Caffe model, but the files can be large; the Python-layer approach is more flexible and suits complex loss functions.


(1) HDF5 data

Advantages:

To handle multi-label data, most tutorials online prepare the data in HDF5 format.
With HDF5 it is easy to feed arbitrary data into Caffe. For example, train.prototxt only needs a layer of the following form:

layer {
  name: "data"
  type: "HDF5Data"
  top: "Features"  # normalized images
  top: "Headposes" # label1
  top: "Genders"   # label2
  top: "Ages"      # label3
  top: "Landmarks" # label4
  hdf5_data_param {
    source: "../hdf5_file/train_list.txt" # a text file listing the .h5 paths, not the .h5 files themselves
    batch_size: 64
  }
  include { phase: TRAIN }
}
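The .h5 files themselves are easy to write with h5py. Below is a minimal sketch (not taken from any particular tutorial) of how the .h5 file and the list file referenced by hdf5_data_param.source could be generated; the file names, sample count, and array shapes are assumptions for illustration, but the dataset names must match the top names of the HDF5Data layer above.

# sketch: write one .h5 file plus the list file consumed by the HDF5Data layer
import h5py
import numpy as np

num = 1000  # illustrative sample count
features  = np.random.rand(num, 3, 224, 224).astype(np.float32)   # normalized images
headposes = np.random.rand(num, 3).astype(np.float32)             # label1
genders   = np.random.randint(0, 2, (num, 1)).astype(np.float32)  # label2
ages      = np.random.rand(num, 1).astype(np.float32)             # label3
landmarks = np.random.rand(num, 10).astype(np.float32)            # label4

with h5py.File('../hdf5_file/train_0.h5', 'w') as f:
    # dataset names must match the "top" names of the HDF5Data layer
    f.create_dataset('Features',  data=features)
    f.create_dataset('Headposes', data=headposes)
    f.create_dataset('Genders',   data=genders)
    f.create_dataset('Ages',      data=ages)
    f.create_dataset('Landmarks', data=landmarks)

# hdf5_data_param.source points to this list file, one .h5 path per line
with open('../hdf5_file/train_list.txt', 'w') as f:
    f.write('../hdf5_file/train_0.h5\n')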

Disadvantages:

However, HDF5 data is usually stored as single- or double-precision floats, so the files easily grow to several or even tens of gigabytes. Moreover, for the same images, every additional scale you want to train at requires regenerating the HDF5 files.

(2) Using a Python layer

A cleaner alternative is to define a Python layer that reads and processes the data; I will come back and flesh this out when I have time.
For convenience, here is the data-reading code from reference [1]:

#coding=gbk
import caffe
import numpy as np
import scipy.io as io
from os.path import join, isfile
class LDLDataLayer(caffe.Layer):
    def setup(self, bottom, top):
        self.top_names = ['data', 'label']
        params = eval(self.param_str)
        self.db_name = params['db_name'] 
        self.batch_size = params['batch_size']
        self.split_idx = params['split_idx']
        self.phase = params['phase']
        if params.has_key('sub_mean'):
          self.sub_mean = params['sub_mean']
        else:
          self.sub_mean = False
        assert(self.split_idx <= 9)
        if isfile(join('data/ldl/DataSets/',self.db_name+'-shuffled.mat')): # load the pre-shuffled copy if it already exists
            mat = io.loadmat(join('data/ldl/DataSets/',self.db_name+'-shuffled.mat'))
        else:
            mat = io.loadmat(join('data/ldl/DataSets/',self.db_name+'.mat'))
            data = mat['features']
            label = mat['labels']
            shuffle_idx = np.random.choice(label.shape[0], label.shape[0]) # random sample order (note: np.random.choice draws with replacement by default)
            data = data[shuffle_idx, :]
            label = label[shuffle_idx, :]
            mat = dict({'features':data, 'labels':label})
            io.savemat(join('data/ldl/DataSets/',self.db_name+'-shuffled.mat'), mat)
        self.features = mat['features']
        self.labels = mat['labels']
        self.N, self.D1 = self.features.shape
        _, self.D2 = self.labels.shape
        self.N = int(np.floor(self.labels.shape[0]/10)*10) # round N down to a multiple of 10
        # discard the extra samples so the data splits evenly into 10 folds
        self.features = self.features[0:self.N, :]
        self.labels = self.labels[0:self.N, :]
        Ntest = self.N / 10
        self.Ntrain = int(self.N - Ntest)
        if self.phase=='test':
            assert(self.batch_size == Ntest) # at test time the batch must hold the whole held-out fold
        # boolean mask: True marks the split_idx-th fold, which becomes the test set
        train_test_filter = np.array([False] * self.N)
        train_test_filter[self.split_idx*Ntest:(self.split_idx+1)*Ntest] = True
        self.test_data = self.features[train_test_filter, :]
        self.test_label = self.labels[train_test_filter, :]
        self.train_data = self.features[np.logical_not(train_test_filter), :]
        self.train_label = self.labels[np.logical_not(train_test_filter), :]
        if self.sub_mean:
            print "Subtract mean ... "
            data_mean = np.mean(self.train_data, 0)
            self.train_data = self.train_data - np.tile(data_mean, [self.train_data.shape[0], 1])
            self.test_data = self.test_data - np.tile(data_mean, [self.test_data.shape[0], 1])
        top[0].reshape(self.batch_size,self.D1,1,1) # data blob:  batch_size x D1 x 1 x 1
        top[1].reshape(self.batch_size,self.D2,1,1) # label blob: batch_size x D2 x 1 x 1

    def forward(self, bottom, top):
        if self.phase == 'train':
            # draw a random mini-batch from the training folds
            rnd_select = np.random.choice(self.Ntrain, self.batch_size)
            top[0].data[:,:,0,0] = self.train_data[rnd_select, :]
            top[1].data[:,:,0,0] = self.train_label[rnd_select, :]
        elif self.phase == 'test':
            # emit the whole held-out fold as a single batch
            top[0].data[:,:,0,0] = self.test_data
            top[1].data[:,:,0,0] = self.test_label

    def reshape(self, bottom, top):
        pass # top blobs are reshaped once in setup()

    def backward(self, top, propagate_down, bottom):
        pass # data layer: nothing to back-propagate

The advantages are obvious: for example, if you want to implement a triplet loss, it is easy to do with a Python layer (see the sketch below).
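As an illustration only (this layer is not from the referenced repository), a margin-based triplet loss could be sketched as a Python layer along the following lines. The class name, the 'margin' parameter, and the assumption of three bottom blobs of shape N x D holding anchor, positive, and negative features are all hypothetical choices for the sketch.

import caffe
import numpy as np

class TripletLossLayer(caffe.Layer):
    # hypothetical sketch: loss = mean( max(0, margin + ||a-p||^2 - ||a-n||^2) )
    def setup(self, bottom, top):
        if len(bottom) != 3:
            raise Exception("Need three bottoms: anchor, positive, negative.")
        params = eval(self.param_str) if self.param_str else {}
        self.margin = params.get('margin', 1.0)

    def reshape(self, bottom, top):
        top[0].reshape(1) # scalar loss

    def forward(self, bottom, top):
        a, p, n = bottom[0].data, bottom[1].data, bottom[2].data
        d_ap = np.sum((a - p) ** 2, axis=1)
        d_an = np.sum((a - n) ** 2, axis=1)
        self.viol = np.maximum(0.0, self.margin + d_ap - d_an) # per-triplet hinge
        top[0].data[0] = np.mean(self.viol)

    def backward(self, top, propagate_down, bottom):
        a, p, n = bottom[0].data, bottom[1].data, bottom[2].data
        # gradients flow only through triplets that violate the margin
        active = (self.viol > 0).astype(a.dtype)[:, np.newaxis]
        scale = top[0].diff[0] / a.shape[0]
        if propagate_down[0]:
            bottom[0].diff[...] = scale * active * 2.0 * (n - p) # dL/da
        if propagate_down[1]:
            bottom[1].diff[...] = scale * active * 2.0 * (p - a) # dL/dp
        if propagate_down[2]:
            bottom[2].diff[...] = scale * active * 2.0 * (a - n) # dL/dn

Such a class would then be wired into the network with a layer of type "Python" in the prototxt, pointing its module and layer fields at this class, just as the data layer above would be.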

References:
1. LDLForests: https://github.com/zeakey/LDLForests
