Writing data to LMDB with Python is very slow

The author compares HDF5 and LMDB dataset creation while preparing training data for Caffe and finds that LMDB writes are slow. The post offers solutions and code examples, tuning transaction size, using a tmpfs RAM disk, and growing the database map in memory, to improve write performance.

While creating datasets for training with Caffe, I tried both HDF5 and LMDB. However, creating an LMDB is very slow, even slower than HDF5. I am trying to write ~20,000 images.

Am I doing something terribly wrong? Is there something I am not aware of?

This is my code for LMDB creation:

DB_KEY_FORMAT = "{:0>10d}"

db = lmdb.open(path, map_size=int(1e12))
curr_idx = 0
commit_size = 1000
# commit in batches of 1,000 puts to amortize transaction overhead
for curr_commit_idx in range(0, num_data, commit_size):
    with db.begin(write=True) as in_txn:
        for i in range(curr_commit_idx, min(curr_commit_idx + commit_size, num_data)):
            d, l = data[i], labels[i]
            im_dat = caffe.io.array_to_datum(d.astype(float), label=int(l))
            key = DB_KEY_FORMAT.format(curr_idx)
            in_txn.put(key.encode(), im_dat.SerializeToString())  # LMDB keys must be bytes
            curr_idx += 1
db.close()

As you can see, I am creating one transaction per 1,000 images, because I thought committing a transaction for each image would add overhead, but this doesn't seem to influence performance much.
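For reference, here is a minimal timing sketch that compares batched commits against one commit per write; the paths and the dummy payload are made up for illustration, not taken from the question:

import time

import lmdb
import numpy as np

def time_writes(path, n, commit_size):
    """Write n dummy records, committing every commit_size puts."""
    db = lmdb.open(path, map_size=int(1e9))
    payload = np.zeros((3, 64, 64), dtype=np.uint8).tobytes()
    start = time.time()
    for base in range(0, n, commit_size):
        with db.begin(write=True) as txn:
            for i in range(base, min(base + commit_size, n)):
                txn.put(b"%010d" % i, payload)
    db.close()
    return time.time() - start

print(time_writes("/tmp/lmdb_batched", 10000, 1000))  # one txn per 1,000 puts
print(time_writes("/tmp/lmdb_single", 10000, 1))      # one txn per put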

Solution

In my experience, I've had 50-100 ms writes to LMDB from Python when writing Caffe data to an ext4 hard disk on Ubuntu. That's why I use tmpfs (the RAM disk functionality built into Linux), where the same writes complete in around 0.07 ms. You can create smaller databases on the RAM disk, copy them to a hard disk, and later train on all of them. I make them around 20-40 GB each, as I have 64 GB of RAM.
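On most Linux distributions, /dev/shm is already mounted as tmpfs, so the quickest way to try this is to point the LMDB path there (the database name below is illustrative):

import lmdb

# /dev/shm is tmpfs on most Linux systems, so this database lives in RAM
# until you copy or move it to permanent storage.
db = lmdb.open("/dev/shm/train_images", map_size=int(40e9))

Note that map_size only reserves address space; tmpfs usage grows with the data actually written, but the finished database must still fit in RAM.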

Here are some pieces of code to help you dynamically create, fill, and move LMDBs to storage. Feel free to edit them to fit your case. They should save you some time getting your head around how LMDB and file manipulation work in Python.

import os
import random
import shutil
import string

import lmdb

def move_db():
    """Close the RAM-disk database, move it to storage under a random name, and reopen."""
    global image_db
    image_db.close()
    rnd = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))
    # `fold` is a base path defined elsewhere; fold + 'ram/' is the tmpfs mount
    shutil.move(fold + 'ram/train_images', '/storage/lmdb/' + rnd)
    open_db()

def open_db():
    """(Re)open a fresh LMDB on the RAM disk."""
    global image_db
    image_db = lmdb.open(os.path.join(fold, 'ram/train_images'),
                         map_async=True,
                         max_dbs=0)

def write_to_lmdb(db, key, value):
    """Write (key, value) to db, doubling map_size whenever the map fills up."""
    success = False
    while not success:
        txn = db.begin(write=True)
        try:
            txn.put(key, value)
            txn.commit()
            success = True
        except lmdb.MapFullError:
            txn.abort()
            # double the map_size and retry
            curr_limit = db.info()['map_size']
            new_limit = curr_limit * 2
            print('>>> Doubling LMDB map size to %sMB ...' % (new_limit >> 20,))
            db.set_mapsize(new_limit)

...

image_datum = caffe.io.array_to_datum(transformed_image, label)
write_to_lmdb(image_db, str(itr).encode(), image_datum.SerializeToString())
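A sketch of how these helpers might be driven from a preprocessing loop; the base path, the size threshold, and the training_samples iterable are assumptions for illustration, not from the original answer:

import os

fold = '/mnt/'            # assumed base path; fold + 'ram/' is a tmpfs mount
MAX_DB_BYTES = 20 << 30   # move the database off the RAM disk at ~20 GB

open_db()
for itr, (transformed_image, label) in enumerate(training_samples):
    image_datum = caffe.io.array_to_datum(transformed_image, label=label)
    write_to_lmdb(image_db, str(itr).encode(), image_datum.SerializeToString())
    # data.mdb holds the actual records; once it grows past the threshold,
    # flush this chunk to permanent storage and start a fresh database
    data_file = os.path.join(fold, 'ram/train_images', 'data.mdb')
    if os.path.getsize(data_file) > MAX_DB_BYTES:
        move_db()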
