Results Generation - Environment or Test Generator?

There are a variety of ways to do results checking in a testbench.  They tend to fall into three categories.

  1. Tests generate expected results.  The test writer is assumed to have a detailed understanding of what his or her test is trying to accomplish and is given methods to flag an error should something unexpected occur.
  2. Test generators generate expected results based on knowing the effect certain commands in any given test sequence have on the Device Under Test (DUT).  The generator has the ability to generate a large number of sequences based on general templates created by its developer.  Results checking for directed tests must be done using method #1 above. 
  3. Monitors and other testbench infrastructure in the verification environment generate expected results based on observing the inputs to and outputs from the DUT.  Stimulus can be provided by a test generator or directed test.
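
To make option 3 concrete, here is a minimal sketch in Python, purely for illustration (a real environment would be written in e, SystemVerilog, or similar).  The DUT stand-in, the class names, and the trivial reference model (a byte-wise incrementer) are all hypothetical; the point is only that the prediction and the comparison live in the environment, not in whatever produced the stimulus.

```python
# A minimal, language-agnostic sketch of option 3, written in Python for
# illustration only.  The DUT here is a stand-in (a simple byte-wise
# incrementer); the checking lives in the monitors and scoreboard, not in
# whatever produced the stimulus.

from collections import deque


class Scoreboard:
    """Compares expected transactions (predicted from observed inputs)
    against actual transactions (observed at the DUT outputs)."""

    def __init__(self):
        self.expected = deque()
        self.errors = 0

    def push_expected(self, txn):
        self.expected.append(txn)

    def push_actual(self, txn):
        if not self.expected:
            self.errors += 1
            print(f"ERROR: unexpected output {txn!r}")
            return
        exp = self.expected.popleft()
        if exp != txn:
            self.errors += 1
            print(f"ERROR: expected {exp!r}, got {txn!r}")


class InputMonitor:
    """Watches DUT inputs and predicts what the DUT should produce.
    The prediction (reference model) is owned by the environment."""

    def __init__(self, scoreboard):
        self.scoreboard = scoreboard

    def observe(self, data):
        # Reference model for the stand-in DUT: output = input + 1 (mod 256).
        self.scoreboard.push_expected((data + 1) % 256)


class OutputMonitor:
    """Watches DUT outputs and hands them to the scoreboard."""

    def __init__(self, scoreboard):
        self.scoreboard = scoreboard

    def observe(self, data):
        self.scoreboard.push_actual(data)


def dut(data):
    """Stand-in for the real Device Under Test."""
    return (data + 1) % 256


if __name__ == "__main__":
    sb = Scoreboard()
    in_mon, out_mon = InputMonitor(sb), OutputMonitor(sb)

    # The stimulus could come from a random generator or a directed test;
    # the checking does not care where these values came from.
    for stimulus in [0, 17, 255, 42]:
        in_mon.observe(stimulus)          # environment predicts the result
        out_mon.observe(dut(stimulus))    # environment checks the result

    print("errors:", sb.errors)
```

Note that the loop driving stimulus could be swapped for any generator or directed test without touching a single line of the checking code; that independence is the whole argument below.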

Almost every time I've worked on a new testbench, I've ended up having a discussion with someone about this topic.  Why?  Because unlike some who prefer options 1 or 2, I am a big fan of option 3 (testbench infrastructure generates expected results).

If you're serious about writing robust and reusable testbenches, you'll know that option 1 shouldn't be considered a choice for general testing.  It should be reserved instead as a way to handle testing of special cases and other assorted odds and ends that show up as you're trying to fill those final gaps in your functional coverage.  But what about option 2?  Since a test generator knows what it is trying to accomplish, it seems natural to expect it to be the source of generated results.  In fact, it's possible to successfully verify complex chips using this method (unlike the strategy where test writers generate their own checking, which breaks down quickly as verification requirements become more complicated).

Why would anyone want to go through the added trouble of letting the environment predict the expected results when the generator already knows the answer?  Simple.  What happens to your test environment if I remove the test generator?  Ah, you might argue, but my test generator is always present!  Oh really?  You're quite certain?  Let me ask you this: is your test generator always present because it can easily be adapted to drive the exact same types of traffic at the full chip as it did in your module-level test environment, or do you not reuse your test environments because the checking is in the generator?  "What?" you say... "Who needs module-level test environments that can be run at the full chip?"  Only verification engineers and designers who don't want to spend hours debugging problems that could have been caught immediately by the checkers present in your module-level environment.  Other than that, no one important.

So what are the other pros and cons of implementing checking in your environment?  Here are some that I find the most interesting:

Pros

  1. Environments can be reused in many levels of simulation (already mentioned).
  2. Environments that know how to predict expected results can be packaged up with the RTL and reused (internally or externally) as an IP block.
  3. Directed tests can take advantage of the built-in checking, in many cases making them easier to write.
  4. Test generators are easier to write and debug if you don't have to create tap points to dump out expected results.  In many cases they can be as simple as fire-and-forget (think of the test environment for an Ethernet controller or a switch).

Cons

  1. It takes more time to make the test environment self-sufficient.
  2. It may be exceedingly difficult (in some cases, impossible) to know what to expect without either knowing the intent of the generator or monitoring the internal state of the DUT.
  3. Because the test generator may be fire-and-forget, development work will need to be done to know when a test is really over.  Specman has some built-in methods to handle this type of scenario (you may need to write this yourself in other languages) by allowing various parts of the environment (monitors, scoreboards, transaction generators, etc.) to object to a test being complete.  The test is only over when no one has any objections left.  A minimal sketch of this idea follows the list.
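
Here is that sketch of the objection idea, again in Python purely for illustration; Specman's actual mechanism and API differ, and the class and method names here are made up.  Each component raises an objection while it still has outstanding work, and the test is only allowed to end once every objection has been dropped.

```python
# A minimal sketch of an end-of-test "objection" mechanism, in Python for
# illustration only (hypothetical names; not Specman's or any library's API).
# Components raise an objection while they still have work outstanding; the
# test may end only when the outstanding count reaches zero.

class ObjectionPool:
    def __init__(self):
        self._count = 0

    def raise_objection(self, who):
        self._count += 1
        print(f"{who} objects to ending the test ({self._count} outstanding)")

    def drop_objection(self, who):
        self._count -= 1
        print(f"{who} withdraws its objection ({self._count} outstanding)")

    def test_may_end(self):
        return self._count == 0


if __name__ == "__main__":
    pool = ObjectionPool()

    # A fire-and-forget generator objects until it has sent its last packet;
    # the scoreboard objects until every expected transaction has been matched.
    pool.raise_objection("generator")
    pool.raise_objection("scoreboard")

    pool.drop_objection("generator")           # all stimulus has been sent
    print("may end?", pool.test_may_end())     # False: scoreboard still waiting

    pool.drop_objection("scoreboard")          # last expected result matched
    print("may end?", pool.test_may_end())     # True: nobody objects any more
```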

I believe the pros outweigh the cons in most cases, which is why I tend to write my checking into my environment.  Your mileage may vary.
