A few readers of my earlier post, 商汤PySot的配置使用(1)—siam跟踪算法demo、test、eval (Configuring and Using SenseTime's PySOT (1): demo, test, and eval for Siamese trackers), asked how to plot OPE curves with pysot. The steps below show how.
The official pysot code is at https://github.com/STVIR/pysot.
Step 1: Modify the contents of eval.py
The main change is to merge in the curve-drawing code from https://github.com/StrangerZhang/pysot-toolkit. If you diff STVIR/pysot's eval.py against StrangerZhang/pysot-toolkit's eval.py, you can see exactly what was added (a short summary follows the listing below). The full modified file is:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import os
import argparse

from glob import glob
from tqdm import tqdm
from multiprocessing import Pool
from toolkit.datasets import OTBDataset, UAVDataset, LaSOTDataset, \
        VOTDataset, NFSDataset, VOTLTDataset
from toolkit.evaluation import OPEBenchmark, AccuracyRobustnessBenchmark, \
        EAOBenchmark, F1Benchmark
from toolkit.visualization import draw_success_precision, draw_eao, draw_f1

parser = argparse.ArgumentParser(description='tracking evaluation')
parser.add_argument('--tracker_path', '-p', type=str,
                    help='tracker result path')
parser.add_argument('--dataset', '-d', type=str,
                    help='dataset name')
parser.add_argument('--num', '-n', default=1, type=int,
                    help='number of thread to eval')
parser.add_argument('--tracker_prefix', '-t', default='',
                    type=str, help='tracker name')
parser.add_argument('--show_video_level', '-s', dest='show_video_level',
                    action='store_true')
parser.add_argument('--vis', dest='vis', action='store_true')
parser.set_defaults(show_video_level=False)
args = parser.parse_args()


def main():
    # collect the tracker result folders under <tracker_path>/<dataset>
    tracker_dir = os.path.join(args.tracker_path, args.dataset)
    trackers = glob(os.path.join(args.tracker_path,
                                 args.dataset,
                                 args.tracker_prefix + '*'))
    trackers = [os.path.basename(x) for x in trackers]

    assert len(trackers) > 0
    args.num = min(args.num, len(trackers))

    # dataset annotations are expected under ../testing_dataset/<dataset>
    root = os.path.realpath(os.path.join(os.path.dirname(__file__),
                                         '../testing_dataset'))
    root = os.path.join(root, args.dataset)

    if 'OTB' in args.dataset:
        dataset = OTBDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                            trackers), desc='eval success', total=len(trackers), ncols=100):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                            trackers), desc='eval precision', total=len(trackers), ncols=100):
                precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret,
                              show_video_level=args.show_video_level)
        if args.vis:
            # draw one success/precision plot per attribute (ALL, OCC, ...)
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                                       name=dataset.name,
                                       videos=videos,
                                       attr=attr,
                                       precision_ret=precision_ret)
    elif 'LaSOT' == args.dataset:
        dataset = LaSOTDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                            trackers), desc='eval success', total=len(trackers), ncols=100):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                            trackers), desc='eval precision', total=len(trackers), ncols=100):
                precision_ret.update(ret)
        norm_precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_norm_precision,
                            trackers), desc='eval norm precision', total=len(trackers), ncols=100):
                norm_precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret, norm_precision_ret,
                              show_video_level=args.show_video_level)
        if args.vis:
            draw_success_precision(success_ret,
                                   name=dataset.name,
                                   videos=dataset.attr['ALL'],
                                   attr='ALL',
                                   precision_ret=precision_ret,
                                   norm_precision_ret=norm_precision_ret)
    elif 'UAV' in args.dataset:
        dataset = UAVDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                            trackers), desc='eval success', total=len(trackers), ncols=100):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                            trackers), desc='eval precision', total=len(trackers), ncols=100):
                precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret,
                              show_video_level=args.show_video_level)
        if args.vis:
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                                       name=dataset.name,
                                       videos=videos,
                                       attr=attr,
                                       precision_ret=precision_ret)
    elif 'NFS' in args.dataset:
        dataset = NFSDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                            trackers), desc='eval success', total=len(trackers), ncols=100):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                            trackers), desc='eval precision', total=len(trackers), ncols=100):
                precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret,
                              show_video_level=args.show_video_level)
        if args.vis:
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                                       name=dataset.name,
                                       videos=videos,
                                       attr=attr,
                                       precision_ret=precision_ret)
    elif args.dataset in ['VOT2016', 'VOT2017', 'VOT2018', 'VOT2019']:
        dataset = VOTDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        ar_benchmark = AccuracyRobustnessBenchmark(dataset)
        ar_result = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(ar_benchmark.eval,
                            trackers), desc='eval ar', total=len(trackers), ncols=100):
                ar_result.update(ret)
        benchmark = EAOBenchmark(dataset)
        eao_result = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval,
                            trackers), desc='eval eao', total=len(trackers), ncols=100):
                eao_result.update(ret)
        ar_benchmark.show_result(ar_result, eao_result,
                                 show_video_level=args.show_video_level)
    elif 'VOT2018-LT' == args.dataset:
        dataset = VOTLTDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = F1Benchmark(dataset)
        f1_result = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval,
                            trackers), desc='eval f1', total=len(trackers), ncols=100):
                f1_result.update(ret)
        benchmark.show_result(f1_result,
                              show_video_level=args.show_video_level)
        if args.vis:
            draw_f1(f1_result)


if __name__ == '__main__':
    main()
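As far as I can tell from the listing above, the pieces merged in from pysot-toolkit are the toolkit.visualization import, the --vis command-line flag, and the draw_success_precision / draw_f1 calls guarded by if args.vis: inside each dataset branch; the evaluation logic itself is left as in the original script.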
Step 2: Modify the run configuration for eval.py
Add one extra argument line, '--vis', so the full set of run-configuration arguments becomes (an equivalent terminal command is given after the list):
--tracker_path
../tools/results
--dataset
OTB100
--num
4
--show_video_level
--vis
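These are run-configuration arguments for an IDE such as PyCharm (that is my assumption). If you run from a terminal instead, the same settings correspond to a single command along these lines; the relative --tracker_path depends on the directory you launch from:
python eval.py --tracker_path ../tools/results --dataset OTB100 --num 4 --show_video_level --vis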
Step 3: Install MiKTeX
3.1 Installation
Follow https://miktex.org/download#unx and copy the commands there into your terminal one line at a time, pressing Enter after each.
3.2 Configuration
Configure it following the post 目标跟踪——OTB平台的Python版tracker使用 (Object Tracking: using the Python trackers of the OTB toolkit).
At the last configuration step, remember to click "Update now".
Then restart your computer.
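MiKTeX is needed because, as far as I can tell, the curve-drawing code renders figure text through matplotlib's LaTeX backend (text.usetex), which shells out to a TeX installation. The sketch below is my own check, not part of pysot, and the output file name usetex_test.png is arbitrary; it is a quick way to confirm that matplotlib can reach the TeX toolchain:
import matplotlib
matplotlib.use('Agg')                      # render to a file, no display needed
matplotlib.rcParams['text.usetex'] = True  # route all figure text through LaTeX
import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1])
plt.title(r'\textbf{usetex test}')         # bold title rendered by LaTeX
plt.savefig('usetex_test.png')             # fails here if latex/dvipng is missing
print('LaTeX rendering works')
If this script errors out, the OPE plots will fail in the same way, so fix the TeX setup first.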
Step 4: Run eval.py
Put your test results under the results/OTB100 directory (that is, the --tracker_path directory plus the dataset name):
(I put the OTB100 test results of SiamFC and SiamRPN into this directory; a sketch of the expected layout follows.)
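For reference, based on how tracker_dir and the glob are built in eval.py above, and assuming the results were written one text file per video by tools/test.py, the layout looks roughly like this (video names such as Basketball are just examples):
tools/results/
└── OTB100/
    ├── SiamFC/
    │   ├── Basketball.txt   # one file of predicted boxes per video
    │   └── ...
    └── SiamRPN/
        ├── Basketball.txt
        └── ...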
Run eval.py. It will prompt that some MiKTeX packages are not installed; just click Install, wait a moment, and the curves will appear.
Of course, you can also put other trackers' results into this directory; a download link is below:
链接:https://pan.baidu.com/s/13jeOhKTxswsJ6kSJzoavVg
提取码:wwmg
(These results come from the blog post https://blog.youkuaiyun.com/qq_29894613/article/details/102925068.)