Locust Source Code Analysis: The stats.py Module (7)

This article takes a close look at the stats.py module of the Locust load-testing framework, covering its core pieces: the constant definitions, helper functions, global performance-data variables, and the RequestStats, StatsEntry, and StatsError classes. Together these components record and process request statistics and error information during a test run and feed the data shown on the web UI, giving a detailed record of the test's runtime state.


The stats.py module is the core module responsible for recording test data while a performance test is running.

Constant Definitions

The module defines the following constants; let's look at what each one does:

STATS_NAME_WIDTH = 60  # column width used when printing stats names
CSV_STATS_INTERVAL_SEC = 2  # interval (s) between CSV file writes
CONSOLE_STATS_INTERVAL_SEC = 2  # default interval (s) between console stats printouts
CURRENT_RESPONSE_TIME_PERCENTILE_WINDOW = 10  # sliding window (s) over which "current" response time percentiles are computed
CachedResponseTimes = namedtuple("CachedResponseTimes", ["response_times", "num_requests"])  # cached response-time snapshot
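As a standalone sketch, CachedResponseTimes simply pairs a snapshot of the response-time counts with the number of requests recorded at that moment (the sample values below are illustrative):

```python
from collections import namedtuple

# Same definition as in stats.py: a point-in-time snapshot pairing the
# response_times dict with the request count at caching time.
CachedResponseTimes = namedtuple("CachedResponseTimes", ["response_times", "num_requests"])

# Example snapshot: three requests took 120 ms, one took 250 ms.
snapshot = CachedResponseTimes(response_times={120: 3, 250: 1}, num_requests=4)
print(snapshot.num_requests)         # 4
print(snapshot.response_times[250])  # 1
```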

Helper Functions

These helper functions are mainly used to compute response-time statistics on demand.


def calculate_response_time_percentile(response_times, num_requests, percent):
    """
    Return the response time at a given percentile, e.g. 50% or 90%.
    response_times is a StatsEntry.response_times dict mapping response time -> count.
    num_requests is the total number of requests sent.
    percent is the desired percentile expressed as a fraction (e.g. 0.9).
    """
    num_of_request = int(num_requests * percent)  # number of requests allowed below the returned time

    processed_count = 0
    for response_time in sorted(six.iterkeys(response_times), reverse=True):
        # Walk the response times from slowest to fastest, accumulating counts,
        # until the remaining (faster) requests fit within the requested percentile.
        processed_count += response_times[response_time]
        if num_requests - processed_count <= num_of_request:
            return response_time
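To see the algorithm in action, here is a self-contained version applied to a small sample. The function name `percentile_from_counts` is a hypothetical stand-in, and plain dict iteration replaces six so it runs on Python 3; the logic is otherwise the same:

```python
def percentile_from_counts(response_times, num_requests, percent):
    # Same algorithm as calculate_response_time_percentile, without six:
    # walk response times from slowest to fastest, accumulating counts,
    # until the remaining faster requests fit within the percentile.
    num_of_request = int(num_requests * percent)
    processed_count = 0
    for response_time in sorted(response_times, reverse=True):
        processed_count += response_times[response_time]
        if num_requests - processed_count <= num_of_request:
            return response_time

# 10 requests: eight took 100 ms, one took 200 ms, one took 500 ms.
times = {100: 8, 200: 1, 500: 1}
print(percentile_from_counts(times, 10, 0.5))   # 100
print(percentile_from_counts(times, 10, 0.8))   # 200
print(percentile_from_counts(times, 10, 0.95))  # 500
```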


def diff_response_time_dicts(latest, old):
    """
    Used together with the cache: compute how the response-time counts changed
    over a period. Each key of `latest` is a response time, so the records are discrete.
    """
    new = {}
    for time in latest:
        diff = latest[time] - old.get(time, 0)
        if diff:
            new[time] = diff
    return new
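A quick illustration of the diff: given an old cached snapshot and the latest counts, only the occurrences added since the snapshot survive (the sample counts are illustrative):

```python
def diff_response_time_dicts(latest, old):
    # Keep only the counts that increased since `old` was cached.
    new = {}
    for t in latest:
        diff = latest[t] - old.get(t, 0)
        if diff:
            new[t] = diff
    return new

old = {100: 5, 200: 2}             # counts at caching time
latest = {100: 8, 200: 2, 300: 1}  # counts now
print(diff_response_time_dicts(latest, old))  # {100: 3, 300: 1}
```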

Global Performance-Data Variables

These are mainly used to record the global performance data on a slave node.

global_stats = RequestStats()  # global variable holding the test's metrics

# handler for a successful request
def on_request_success(request_type, name, response_time, response_length, **kwargs):
    global_stats.log_request(request_type, name, response_time, response_length)

# handler for a failed request
def on_request_failure(request_type, name, response_time, exception, **kwargs):
    global_stats.log_error(request_type, name, exception)

# build the performance data sent to the master node
def on_report_to_master(client_id, data):
    data["stats"] = global_stats.serialize_stats()
    data["stats_total"] = global_stats.total.get_stripped_report()
    data["errors"] = global_stats.serialize_errors()
    global_stats.errors = {}

# merge a slave node's report into the global metrics
def on_slave_report(client_id, data):
    for stats_data in data["stats"]:
        entry = StatsEntry.unserialize(stats_data)
        request_key = (entry.name, entry.method)
        if request_key not in global_stats.entries:
            global_stats.entries[request_key] = StatsEntry(global_stats, entry.name, entry.method)
        global_stats.entries[request_key].extend(entry)

    for error_key, error in six.iteritems(data["errors"]):
        if error_key not in global_stats.errors:
            global_stats.errors[error_key] = StatsError.from_dict(error)
        else:
            global_stats.errors[error_key].occurences += error["occurences"]
    
    # save the old last_request_timestamp, to see if we should store a new copy
    # of the response times in the response times cache
    old_last_request_timestamp = global_stats.total.last_request_timestamp
    # update the total StatsEntry
    global_stats.total.extend(StatsEntry.unserialize(data["stats_total"]))
    if global_stats.total.last_request_timestamp > old_last_request_timestamp:
        # If we've entered a new second, we'll cache the response times. Note that there 
        # might still be reports from other slave nodes - that contains requests for the same 
        # time periods - that hasn't been received/accounted for yet. This will cause the cache to 
        # lag behind a second or two, but since StatsEntry.current_response_time_percentile() 
        # (which is what the response times cache is used for) uses an approximation of the 
        # last 10 seconds anyway, it should be fine to ignore this. 
        global_stats.total._cache_response_times(global_stats.total.last_request_timestamp)
    
# register the event handlers
events.request_success += on_request_success 
events.request_failure += on_request_failure
events.report_to_master += on_report_to_master
events.slave_report += on_slave_report
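The `+=` registration above relies on Locust's old-style event hooks. The class below is an illustrative reimplementation of that pattern (not Locust's actual code): `+=` subscribes a handler, and `fire()` invokes every subscriber with keyword arguments:

```python
class EventHook(object):
    # Minimal sketch of the event-hook pattern: `+=` appends a handler,
    # fire() calls every registered handler with the given kwargs.
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self

    def fire(self, **kwargs):
        for handler in self._handlers:
            handler(**kwargs)

request_success = EventHook()
recorded = []

def on_request_success(request_type, name, response_time, response_length, **kwargs):
    recorded.append((name, response_time))

request_success += on_request_success
request_success.fire(request_type="GET", name="/login", response_time=42, response_length=128)
print(recorded)  # [('/login', 42)]
```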

The RequestStats Class

RequestStats holds the global state of a performance test, recording metrics such as the total, failed, successful, and errored request counts.

It is the class instantiated as the global_stats variable introduced above, which records the runtime state and data of the whole test.

class RequestStats(object):
    def __init__(self):
        self.entries = {}
        self.errors = {}
        self.total = StatsEntry(self, "Total", None, use_response_times_cache=True)  # the core attribute: an aggregate StatsEntry
        self.start_time = None

    @property
    def num_requests(self):  # delegates to total.num_requests
        return self.total.num_requests

    @property
    def num_failures(self):  # delegates to total.num_failures
        return self.total.num_failures

    @property
    def last_request_timestamp(self):  # delegates to total.last_request_timestamp
        return self.total.last_request_timestamp

    def log_request(self, method, name, response_time, content_length):  # record one request
        self.total.log(response_time, content_length)  # record in the aggregate entry first
        self.get(name, method).log(response_time, content_length)  # then in the per-endpoint entry
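To make the double bookkeeping in log_request concrete, here is a heavily simplified sketch. The MiniStatsEntry and MiniRequestStats classes are hypothetical stand-ins, not Locust's real implementation; they show only the pattern of logging each request twice, once into the aggregate `total` entry and once into a per-(name, method) entry:

```python
class MiniStatsEntry(object):
    # Stripped-down stand-in for StatsEntry: counts requests and sums times.
    def __init__(self, name, method):
        self.name = name
        self.method = method
        self.num_requests = 0
        self.total_response_time = 0

    def log(self, response_time, content_length):
        self.num_requests += 1
        self.total_response_time += response_time

class MiniRequestStats(object):
    # Stripped-down stand-in for RequestStats.
    def __init__(self):
        self.entries = {}
        self.total = MiniStatsEntry("Total", None)

    def get(self, name, method):
        # Lazily create the per-(name, method) entry on first use.
        key = (name, method)
        if key not in self.entries:
            self.entries[key] = MiniStatsEntry(name, method)
        return self.entries[key]

    def log_request(self, method, name, response_time, content_length):
        self.total.log(response_time, content_length)            # aggregate entry
        self.get(name, method).log(response_time, content_length)  # per-endpoint entry

stats = MiniRequestStats()
stats.log_request("GET", "/login", 120, 512)
stats.log_request("GET", "/login", 80, 512)
stats.log_request("POST", "/order", 300, 1024)
print(stats.total.num_requests)                 # 3
print(stats.get("/login", "GET").num_requests)  # 2
```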