Results Generation - Environment or Test Generator?

This article looks at the different approaches to results checking in a test environment: expected results generated by the tests themselves, by a test generator, or by the testbench infrastructure in the verification environment. It walks through the pros and cons of each approach and leans toward having the testbench infrastructure generate expected results.
 


There are a variety of ways to do results checking in a testbench.  They tend to fall into three categories.

  1. Tests generate expected results.  The test writer is assumed to have a detailed understanding of what his or her test is trying to accomplish and is given methods to flag an error should something unexpected occur.
  2. Test generators generate expected results based on knowing the effect certain commands in any given test sequence have on the Device Under Test (DUT).  The generator has the ability to generate a large number of sequences based on general templates created by its developer.  Results checking for directed tests must be done using method #1 above. 
  3. Monitors and other testbench infrastructure in the verification environment generate expected results based on observing the inputs to and outputs from the DUT.  Stimulus can be provided by a test generator or a directed test (see the sketch after this list).
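To make option 3 concrete, here's a minimal Python sketch of the pattern (the same idea maps onto e or SystemVerilog components); the names and the in-order DUT assumption are mine, not taken from any particular methodology library.  The scoreboard predicts results purely from what the input monitor observes, with no knowledge of the test's intent:

```python
from collections import deque

class Scoreboard:
    """Checks DUT outputs against predictions derived from observed inputs."""

    def __init__(self, reference_model):
        # reference_model is a pure function: input transaction -> expected output.
        self.reference_model = reference_model
        self.expected = deque()

    def observe_input(self, txn):
        # Called by the input monitor: predict the result from the stimulus alone.
        self.expected.append(self.reference_model(txn))

    def observe_output(self, txn):
        # Called by the output monitor: compare against the oldest prediction
        # (assumes the DUT processes transactions in order).
        assert self.expected, f"unexpected output with nothing predicted: {txn!r}"
        want = self.expected.popleft()
        assert txn == want, f"mismatch: expected {want!r}, got {txn!r}"

# Example: a toy DUT that is supposed to increment every value it receives.
sb = Scoreboard(reference_model=lambda x: x + 1)
sb.observe_input(41)   # input monitor saw 41 enter the DUT
sb.observe_output(42)  # output monitor saw 42 leave -- matches the prediction
```

Note that neither the test nor the generator appears anywhere in the sketch, which is exactly the point: the stimulus source can be swapped out without touching the checking.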

Almost every time I've worked on a new testbench, I've ended up having a discussion with someone about this topic.  Why?  Because unlike some who prefer options 1 or 2, I am a big fan of option 3 (testbench infrastructure generates expected results).

If you're serious about writing robust and reusable testbenches you'll know that option 1 shouldn't be considered a choice for general testing.  It should be reserved instead as a way to handle testing of special cases and other assorted odds and ends that show up as you're trying to fill those final gaps in your functional coverage.  But what about option 2?  Since a test generator knows what it is trying to accomplish it seems natural to expect it to be the source of generated results.  In fact, it's possible to successfully verify complex chips using this method (unlike the strategy where test writers generate their own checking, which breaks down quickly as verification requirements become more complicated). 

Why would anyone go through the added trouble of letting the environment predict the expected results when the generator already knows the answer?  Simple.  What happens to your test environment if I remove the test generator?  Ah, you might argue, but my test generator is always present!  Oh really?  You're quite certain?  Then let me ask you this: is your test generator always present because it can easily be adapted to drive the exact same types of traffic at the full chip as it did in your module-level test environment, or do you simply not reuse your test environments because the checking is in the generator?  "What?" you say... "Who needs module-level test environments that can be run at the full chip?"  Only verification engineers and designers who don't want to spend hours debugging problems that could have been caught immediately by the checkers present in your module-level environment.  Other than that, no one important.
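Here's a hypothetical skeleton of what that reuse looks like, with every name invented for the sketch: because the monitor and scoreboard own the checking, the generator is just an optional component that isn't instantiated when the surrounding design drives the interface at the full chip.

```python
class Monitor:
    """Passively observes an interface and feeds the scoreboard; needs no generator."""

class Scoreboard:
    """Predicts and checks results from what the monitor observes."""

class Generator:
    """Produces stimulus only; it carries no checking responsibility."""

class ModuleEnv:
    def __init__(self, active=True):
        self.monitor = Monitor()        # always present: watches the interface
        self.scoreboard = Scoreboard()  # always present: predicts and checks
        # Stimulus exists only in active mode; passive mode just checks.
        self.generator = Generator() if active else None

# Module level: the environment both drives and checks.
module_env = ModuleEnv(active=True)

# Full chip: neighboring logic drives the interface, and the same environment,
# instantiated passively, still flags problems at this module's boundary.
chip_env = ModuleEnv(active=False)
```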

So what are the other pros and cons of implementing checking in your environment?  Here are some that I find the most interesting:

Pros

  1. Environments can be reused at many levels of simulation (already mentioned).
  2. Environments that know how to predict expected results can be packaged up with the RTL and reused (internally or externally) as an IP block.
  3. Directed tests can take advantage of the built-in checking, in many cases making them easier to write.
  4. Test generators are easier to write and debug if you don't have to create tap points to dump out expected results.  In many cases they can be as simple as fire-and-forget (think of the test environment for an Ethernet controller or a switch).

Cons

  1. It takes more time to make the test environment self-sufficient.
  2. It may be exceedingly difficult (in some cases, impossible) to know what to expect without either knowing the intent of the generator or monitoring the internal state of the DUT.
  3. Because the test generator may be fire-and-forget, development work will be needed to know when a test is really over.  Specman has built-in methods to handle this scenario (in other languages you may need to write this yourself) by allowing various parts of the environment (monitors, scoreboards, transaction generators, etc.) to object to a test being complete.  The test is only over when no one has any objections left (a toy version of this scheme is sketched after this list).
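To illustrate con #3, here's a toy objection scheme in Python, loosely modeled on Specman's (and, later, UVM's) end-of-test objections; the class and method names are invented for the sketch.

```python
class ObjectionPool:
    """End-of-test arbiter: the test may end only when no component objects."""

    def __init__(self):
        self.objections = {}  # component name -> outstanding objection count

    def raise_objection(self, who):
        self.objections[who] = self.objections.get(who, 0) + 1

    def drop_objection(self, who):
        self.objections[who] -= 1
        if self.objections[who] == 0:
            del self.objections[who]

    def test_done(self):
        return not self.objections

pool = ObjectionPool()
pool.raise_objection("generator")    # still injecting traffic
pool.raise_objection("scoreboard")   # predictions still outstanding
pool.drop_objection("generator")     # fire-and-forget generator finishes early
print(pool.test_done())              # False: the scoreboard still expects outputs
pool.drop_objection("scoreboard")    # last prediction matched
print(pool.test_done())              # True: no objections left, end the test
```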

I believe the pros outweigh the cons in most cases, which is why I tend to write my checking into my environment.  Your mileage may vary.
