Datastream scoreboard iterators

Datastream scoreboard iterators are objects that know how to traverse the implementation
of the scoreboard. They provide high-level methods for moving through the scoreboard and
for modifying its content at the iterator's current location.


The actual data structure used to implement the datastream scoreboard is entirely private
to the foundation classes. However, to implement user-defined functionality, the entire
content of the scoreboard must be available to the user so that it can be searched and
modified. Iterators make this possible without exposing the underlying implementation.


Logical streams:


Datastream applications involve transmission, multiplexing, prioritization, or transformation of
data items. A stream, in general, is a sequence of data elements. In VMM, a stream is composed of
transaction descriptors based on the vmm_data class. A stream of data elements flowing between two
transactors through a vmm_channel instance, or to the DUT through a physical interface, is a
structural stream. Through multiplexing, it may be composed of data elements from different inputs
or destined to different outputs. If there is a way to identify the original input a data element
came from, or the destination a data element is going to, a structural stream can be said to be
composed of multiple logical substreams. It is up to you to define the logical streams based on the
nature of the application you are trying to verify.


For example, if you are verifying a router with 16 input and 16 output ports, all data packets going
from input port 0 to output port 0 can be viewed as belonging to the same logical stream, giving
rise to a total of 16 x 16 = 256 logical streams.
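For the router case above, one convenient convention is to derive a unique stream identifier from the (input port, output port) pair. The helper below is a hypothetical sketch, not part of the VMM library; the name stream_id and the encoding are assumptions for illustration.

```systemverilog
// Hypothetical helper: map an (input port, output port) pair of a
// 16x16 router onto a unique logical stream id in the range 0..255.
function automatic int unsigned stream_id(int unsigned in_port,
                                          int unsigned out_port);
   return in_port * 16 + out_port;  // e.g. (0,0) -> 0, (15,15) -> 255
endfunction
```

Any encoding works as long as it is unique per logical stream; what matters is that the testbench can recover the stream a packet belongs to from the packet itself.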


Kinds of iterators:


1. Scoreboard iterators (vmm_sb_ds_iter class) - These move from one logical stream of expected data
   to another. An instance is created using the vmm_sb_ds::new_sb_iter() method.


2. Stream iterators (vmm_sb_ds_stream_iter class) - These move from one data element to another on a
   single logical data stream. An instance is created using the vmm_sb_ds_iter::new_stream_iter()
   method.



The figure above shows 'm' logical streams, each consisting of 'n' data elements. As previously
mentioned, logical streams are user-defined based on the application; within the scoreboard they
correspond to data queues in which the data elements are stored. Once the queues are populated, a
user who wishes to modify a data element, or delete a few of them, can do so using iterators.


A set of high-level methods is implemented within the iterator classes to aid in navigating through
the data queued up in the scoreboard and modifying it, if necessary. first(), next(), last(), prev(),
and length() are some of the methods implemented in both iterator classes.
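As a minimal sketch of these navigation methods, a forward walk over one logical stream might look like the following. Only methods named in this article are used; sb_iter is assumed to be a vmm_sb_ds_iter already positioned on a stream of interest.

```systemverilog
// Sketch: count and walk the expected packets on one logical stream.
// Assumes sb_iter (vmm_sb_ds_iter) is positioned on a valid stream.
vmm_sb_ds_stream_iter it = sb_iter.new_stream_iter();
int unsigned n = it.length();     // number of expected packets queued

for (it.first(); it.is_ok(); it.next()) begin
   // The vmm_data descriptor at the current position can be
   // inspected or modified here.
end
```

The same first()/is_ok()/next() idiom applies to both iterator classes, so the pattern for walking streams and the pattern for walking packets within a stream look identical.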


Note: Logical streams only exist if they contain (or have contained in the past) expected data, and
the iterator will only iterate over logical streams that exist.


Sample code:


     class my_scoreboard extends vmm_sb_ds;

           /* delete all packets queued before the first packet matching "pkt",
              on every logical stream in the scoreboard */
           task delete_before(vmm_data pkt);
              /* a scoreboard iterator that iterates over the different streams of data */
              vmm_sb_ds_iter iter_over_streams = this.new_sb_iter();

              for (iter_over_streams.first(); iter_over_streams.is_ok(); iter_over_streams.next()) begin
                 /* a stream iterator that scans the data within the current stream */
                 vmm_sb_ds_stream_iter scan_stream = iter_over_streams.new_stream_iter();

                 if (scan_stream.find(pkt)) begin
                    repeat (scan_stream.pos()) begin
                       scan_stream.prev();
                       scan_stream.delete();
                    end
                 end
              end
           endtask
     endclass


The sample code shows a very simple scoreboard. Here, a scoreboard iterator "iter_over_streams" is
instantiated and set to iterate over all the streams using the for loop. 


The method first() sets the scoreboard iterator on the first stream in the scoreboard.


The method is_ok() returns TRUE if the iterator is currently positioned on a valid stream, returns
FALSE if the iterator has moved beyond the valid streams.


The method next() moves the iterator to the next applicable stream. A stream iterator "scan_stream"
is then created to iterate over the stream on which "iter_over_streams" is positioned. 


The find() method in the if-statement locates the next packet in the stream matching the specified
packet "pkt". The repeat loop then deletes all the packets queued before the found packet.


The method pos() returns the current position of the iterator, prev() moves the iterator to the
previous packet and delete() deletes the packet at the position where the iterator is positioned.
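By symmetry, the same methods can trim the tail of a stream instead of its head. The sketch below assumes, consistent with the sample above, that delete() leaves the iterator on the element following the one removed; check this behavior against the VMM Datastream Scoreboard User Guide before relying on it.

```systemverilog
// Sketch: delete every packet queued *after* the matching packet.
// Assumes delete() advances the iterator to the next element.
if (scan_stream.find(pkt)) begin
   int unsigned tail = scan_stream.length() - scan_stream.pos() - 1;
   scan_stream.next();        // step onto the first packet after the match
   repeat (tail)
      scan_stream.delete();   // each delete() lands on the next packet
end
```

Note that the repeat count is computed once, before any deletion, so shrinking the stream inside the loop does not affect the number of iterations.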


As you can see, you can easily traverse the different streams to find or alter any specific packets
you need to. We hope you find these iterators useful in making your verification tasks simpler.


For more information about the complete list of methods, see the VMM Datastream Scoreboard User Guide.