Reference articles: "WebRTC研究:包组时间差计算-InterArrival" and "WebRTC研究:Trendline滤波器-TrendlineEstimator", both by 剑痴乎.
I think they are well written; the figures here are taken from them.
PS: There is still a lot I have not fully understood; these notes only record the rough flow.
1 Computing receive_time from the RTP extension header Sequence Number and RTCP Feedback packets
Let's look at what the packets actually contain.
0xBEDE is the 1-byte extension header marker. The extension Identifier here is 3 (3 in the newer version; it varies and can be read from the WebRTC SDP). The Extension Data takes 2 bytes: the transport-wide sequence number, which starts at 1 and increments by 1 per packet.
WebRTC defines the transport-wide RTCP Feedback packet with FMT=15 and PT=205.
Packet Chunks record each packet's arrival status and, together with the recv deltas, the time the receiver got each RTP packet. The feedback can be read as ACKs for RTP packets, carrying time offsets.
Each Packet Chunk takes 2 bytes; there are three kinds (a small parsing sketch follows at the end of this section).
1 Run Length Chunk: says how many consecutive packets share the same arrival status.
Layout: 1 leading 0 bit | 2-bit status | 13-bit run length.
Status values:
00 - Packet not received
01 - Packet received, small delta
10 - Packet received, large or negative delta
11 - [Reserved]
2 One-bit Status Vector Chunk: each status is 1 bit, so one chunk covers 14 packets. Layout: '10' prefix + 14 status bits; detected by masking with 0x8000.
3 Two-bit Status Vector Chunk: each status is 2 bits, so one chunk covers 7 packets. Layout: '11' prefix + 7 two-bit statuses; detected by masking with 0xc000.
In the one-bit vector, status 0 means not received and 1 means received.
recv delta: one entry per received packet; 1 byte (in units of 250 µs) for "small delta", 2 bytes for "large or negative delta".
zero padding: zero bytes to align the packet.
The later computation mainly uses the Reference Time (? to be confirmed).
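To make the three chunk layouts concrete, here is a minimal, hedged parsing sketch using the masks above (not WebRTC's actual parser; the function name and output are mine):

#include <cstdint>
#include <cstdio>

void ParsePacketChunk(uint16_t chunk) {
  if ((chunk & 0x8000) == 0) {
    // Run Length Chunk: 0 | 2-bit status | 13-bit run length.
    unsigned status = (chunk >> 13) & 0x03;
    unsigned run_length = chunk & 0x1FFF;
    std::printf("run length chunk: status=%u count=%u\n", status, run_length);
  } else if ((chunk & 0xC000) == 0x8000) {
    // One-bit Status Vector Chunk: '10' | 14 one-bit statuses, MSB first.
    for (int i = 13; i >= 0; --i)
      std::printf("packet %sreceived\n", ((chunk >> i) & 0x01) ? "" : "not ");
  } else {
    // Two-bit Status Vector Chunk: '11' | 7 two-bit statuses, MSB first.
    for (int i = 6; i >= 0; --i)
      std::printf("status=%u\n", static_cast<unsigned>((chunk >> (i * 2)) & 0x03));
  }
}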
2 Packet-group inter-arrival computation: InterArrival
The network-delay filter, i.e. the Trendline filter, needs three inputs: the send-time delta (timestamp_delta), the arrival-time delta (arrival_time_delta), and the packet-group size delta (packet_size_delta) -- though the size delta does not seem to be used?
bool calculated_deltas = inter_arrival_for_packet->ComputeDeltas(
    packet_feedback.sent_packet.send_time,  // RTP packet send time
    packet_feedback.receive_time,  // time the receiver got the RTP packet
    at_time,  // system time when this RTCP feedback is processed
    packet_size.bytes(),
    &send_delta, &recv_delta, &size_delta);
In TransportFeedbackAdapter::ProcessTransportFeedbackInner():
packet_feedback.receive_time =
    current_offset_ + packet_offset.RoundDownTo(TimeDelta::Millis(1));

const TimeDelta delta = feedback.GetBaseDelta(last_timestamp_)
                            .RoundDownTo(TimeDelta::Millis(1));
current_offset_ += delta;

current_offset_ comes from the Reference Time; it doesn't seem to add the recv deltas? To be confirmed.

// Reads the 24-bit Reference Time field (in multiples of 64 ms) from the feedback payload:
base_time_ticks_ = ByteReader<int32_t, 3>::ReadBigEndian(&payload[12]);
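As a sanity check on how these fields combine, a small hedged sketch (not WebRTC code; the values are made up) that turns a Reference Time in 64 ms ticks plus per-packet recv deltas in 250 µs units into absolute arrival times:

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  int32_t base_time_ticks = 1000;  // 24-bit Reference Time field, 64 ms units
  int64_t arrival_us = int64_t{base_time_ticks} * 64 * 1000;
  std::vector<int32_t> recv_delta_ticks = {4, 8, 3};  // 250 us units
  for (int32_t d : recv_delta_ticks) {
    arrival_us += int64_t{d} * 250;  // each delta is relative to the previous packet
    std::printf("arrival time: %lld us\n", static_cast<long long>(arrival_us));
  }
  return 0;
}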
How is a new packet group decided? It uses 5 ms: RTP packets sent within a 5 ms span form one group. In the current code, though, it no longer looks like a literal 5 ms to me (the comparison is done in ticks, see below):
bool InterArrival::NewTimestampGroup(int64_t arrival_time_ms,
                                     uint32_t timestamp) const {
  if (current_timestamp_group_.IsFirstPacket()) {
    return false;
  } else if (BelongsToBurst(arrival_time_ms, timestamp)) {
    // Bursty packets stay in the current group.
    return false;
  } else {
    // Start a new group once the send timestamps span more than the group
    // length (5 ms, expressed in ticks).
    uint32_t timestamp_diff =
        timestamp - current_timestamp_group_.first_timestamp;
    return timestamp_diff > kTimestampGroupLengthTicks;
  }
}
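For reference, the burst check used above looks roughly like this (paraphrased from WebRTC's InterArrival::BelongsToBurst; the thresholds are kBurstDeltaThresholdMs = 5 and kMaxBurstDurationMs = 100):

bool InterArrival::BelongsToBurst(int64_t arrival_time_ms,
                                  uint32_t timestamp) const {
  if (!burst_grouping_)
    return false;
  int64_t arrival_time_delta_ms =
      arrival_time_ms - current_timestamp_group_.complete_time_ms;
  uint32_t timestamp_diff = timestamp - current_timestamp_group_.timestamp;
  int64_t ts_delta_ms =
      static_cast<int64_t>(timestamp_to_ms_coeff_ * timestamp_diff + 0.5);
  if (ts_delta_ms == 0)
    return true;
  int propagation_delta_ms = arrival_time_delta_ms - ts_delta_ms;
  // A packet belongs to the current burst if it arrived faster than it was
  // sent (negative delay variation), close to the previous packet, and the
  // whole burst is still short.
  return propagation_delta_ms < 0 &&
         arrival_time_delta_ms <= kBurstDeltaThresholdMs &&
         arrival_time_ms - current_timestamp_group_.first_arrival_ms <
             kMaxBurstDurationMs;
}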
What is the value of kTimestampGroupLengthTicks? It comes from kTimestampGroupTicks:

constexpr int kTimestampGroupTicks =
    (kTimestampGroupLengthMs << kInterArrivalShift) / 1000;

= (5 << (18 + 8)) / 1000 = 335544320 / 1000 ≈ 335544 ticks.
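A quick standalone check that ~335544 ticks really is 5 ms (it assumes kInterArrivalShift = 26, i.e. the abs-send-time fraction 18 plus an upshift of 8, so 2^26 ticks equal one second):

#include <cstdio>

int main() {
  constexpr int kTimestampGroupLengthMs = 5;
  constexpr int kInterArrivalShift = 18 + 8;  // 26
  constexpr int kTimestampGroupTicks =
      (kTimestampGroupLengthMs << kInterArrivalShift) / 1000;
  // Convert the ticks back to milliseconds: 2^26 ticks per second.
  double group_ms = 1000.0 * kTimestampGroupTicks /
                    static_cast<double>(1 << kInterArrivalShift);
  std::printf("%d ticks ~= %.3f ms\n", kTimestampGroupTicks, group_ms);
  return 0;
}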
3 The arrival-time filter, i.e. the Trendline filter
The quantity of interest is the one-way delay variation d(i) = (t(i) - t(i-1)) - (T(i) - T(i-1)), i.e. the arrival-time delta minus the send-time delta; it reflects queuing delay in the network.
Overview: a least-squares linear regression computes the slope, trend, which is used to classify the network state: normal, overusing, or underusing.
The regression points are (RTP packet arrival time relative to the first arrival, smoothed delay).
// The three network states; kBwOverusing means the link is overused, i.e. congested.
enum BandwidthUsage { kBwNormal = 0, kBwUnderusing = 1, kBwOverusing = 2 };
TrendlineEstimator::UpdateTrendline(recv_delta_ms, send_delta_ms, send_time_ms,
                                    arrival_time_ms, packet_size)
{
  // 1. Delay variation of this packet group: arrival delta minus send delta.
  const double delta_ms = recv_delta_ms - send_delta_ms;
  accumulated_delay_ += delta_ms;
  // 2. Exponential smoothing of the accumulated delay.
  smoothed_delay_ = smoothing_coef_ * smoothed_delay_ +
                    (1 - smoothing_coef_) * accumulated_delay_;
  // 3. Store the sample (x = arrival time since the first packet, y = smoothed delay).
  delay_hist_.emplace_back(
      static_cast<double>(arrival_time_ms - first_arrival_time_ms_),
      smoothed_delay_, accumulated_delay_);
  // 4. Least-squares linear fit over the history window:
  //    0 < trend < 1 -> the delay increases, queues are filling up
  //    trend == 0    -> the delay does not change
  //    trend < 0     -> the delay decreases, queues are being emptied
  trend = LinearFitSlope(delay_hist_).value_or(trend);
  // 5. Map the trend to a network state.
  Detect(trend, send_delta_ms, arrival_time_ms);
}
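LinearFitSlope is the ordinary least-squares slope over those (x, y) points. A self-contained version with the same structure as WebRTC's (the PacketTiming struct here is trimmed down):

#include <optional>
#include <vector>

struct PacketTiming {
  double arrival_time_ms;    // x: arrival time relative to the first packet
  double smoothed_delay_ms;  // y: smoothed accumulated delay
};

std::optional<double> LinearFitSlope(const std::vector<PacketTiming>& packets) {
  if (packets.size() < 2)
    return std::nullopt;
  double sum_x = 0, sum_y = 0;
  for (const auto& p : packets) {
    sum_x += p.arrival_time_ms;
    sum_y += p.smoothed_delay_ms;
  }
  const double x_avg = sum_x / packets.size();
  const double y_avg = sum_y / packets.size();
  // slope = sum((x - x_avg) * (y - y_avg)) / sum((x - x_avg)^2)
  double numerator = 0, denominator = 0;
  for (const auto& p : packets) {
    const double x = p.arrival_time_ms - x_avg;
    const double y = p.smoothed_delay_ms - y_avg;
    numerator += x * y;
    denominator += x * x;
  }
  if (denominator == 0)
    return std::nullopt;
  return numerator / denominator;
}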
TrendlineEstimator::Detect(double trend, double ts_delta, int64_t now_ms) {
  // Amplify the raw slope: scale by the sample count (capped at kMinNumDeltas)
  // and by threshold_gain_ (4.0).
  const double modified_trend =
      std::min(num_of_deltas_, kMinNumDeltas) * trend * threshold_gain_;
  if (modified_trend > threshold_) {
    // Overuse is only declared once it persists: time_over_using_ and
    // overuse_counter_ have to build up first. Roughly: if time_over_using_
    // exceeds 10 ms and the trend keeps growing,
    time_over_using_ += ts_delta;
    hypothesis_ = BandwidthUsage::kBwOverusing;
  } else if (modified_trend < -threshold_) {
    hypothesis_ = BandwidthUsage::kBwUnderusing;
  } else {
    hypothesis_ = BandwidthUsage::kBwNormal;
  }
}
threshold_ itself is adaptive; it is updated in TrendlineEstimator::UpdateThreshold().
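A hedged sketch of that update (close to WebRTC's TrendlineEstimator::UpdateThreshold; the constants k_up_ = 0.0087 and k_down_ = 0.039 are from the source, and the threshold is clamped to [6, 600] ms):

#include <algorithm>
#include <cmath>
#include <cstdint>

void UpdateThreshold(double modified_trend, int64_t now_ms,
                     double& threshold, int64_t& last_update_ms) {
  if (last_update_ms == -1)
    last_update_ms = now_ms;
  if (std::fabs(modified_trend) > threshold + 15) {
    // Avoid adapting the threshold to sudden delay spikes, e.g. a route change.
    last_update_ms = now_ms;
    return;
  }
  // Adapt down fast (k_down_) and up slowly (k_up_).
  const double k = std::fabs(modified_trend) < threshold ? 0.039 : 0.0087;
  const int64_t kMaxTimeDeltaMs = 100;
  const int64_t time_delta_ms = std::min(now_ms - last_update_ms, kMaxTimeDeltaMs);
  threshold += k * (std::fabs(modified_trend) - threshold) * time_delta_ms;
  threshold = std::clamp(threshold, 6.0, 600.0);
  last_update_ms = now_ms;
}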
4 The rate controller: AimdRateControl
The class comment sums it up: no over-use -> additive increase; over-use -> multiplicative decrease; when the available bandwidth has changed or is unknown -> multiplicative "slow-start".
// A rate control implementation based on additive increases of
// bitrate when no over-use is detected and multiplicative decreases when
// over-uses are detected. When we think the available bandwidth has changed or
// is unknown, we will switch to a "slow-start mode" where we increase
// multiplicatively.
class AimdRateControl {
};
enum class RateControlState { kRcHold, kRcIncrease, kRcDecrease };

Call chain: AimdRateControl::Update() --> AimdRateControl::ChangeBitrate().

constexpr double kDefaultBackoffFactor = 0.85;
LinkCapacityEstimator link_capacity_;  // link capacity estimate
AimdRateControl::ChangeBitrate(...)
{
  // estimated_throughput starts at 300 kbps; the throughput-based limit is
  // 1.5 * estimated throughput + 10 kbps.
  DataRate estimated_throughput =
      input.estimated_throughput.value_or(latest_estimated_throughput_);
  ChangeState(input, at_time);
  const DataRate troughput_based_limit =
      1.5 * estimated_throughput + DataRate::KilobitsPerSec(10);

  case RateControlState::kRcIncrease:
    // Throughput above the capacity estimate's upper bound means the
    // estimate is stale; it gets reset and we ramp up multiplicatively.
    if (estimated_throughput > link_capacity_.UpperBound())
      link_capacity_.Reset();
    if (current_bitrate_ < troughput_based_limit) {
      if (link_capacity_.has_estimate()) {
        // Near the estimated capacity: additive increase.
        DataRate additive_increase =
            AdditiveRateIncrease(at_time, time_last_bitrate_change_);
        increased_bitrate = current_bitrate_ + additive_increase;
      } else {
        // No capacity estimate yet: multiplicative increase, roughly 8% per
        // second (a whole pile of computation!).
      }
      new_bitrate = std::min(increased_bitrate, troughput_based_limit);
    }
  case RateControlState::kRcDecrease: {
    // Back off to slightly below the measured throughput.
    decreased_bitrate = estimated_throughput * beta_;  // beta_ = 0.85
    if (decreased_bitrate > current_bitrate_) {
      if (link_capacity_.has_estimate()) {
        decreased_bitrate = beta_ * link_capacity_.estimate();
      }
    }
    // Avoid increasing the rate when over-using.
    if (decreased_bitrate < current_bitrate_) {
      new_bitrate = decreased_bitrate;
    }
    link_capacity_.OnOveruseDetected(estimated_throughput);
    rate_control_state_ = RateControlState::kRcHold;
  }
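To make the two increase modes concrete, a simplified, hedged sketch (constants approximate WebRTC's MultiplicativeRateIncrease and GetNearMaxIncreaseRateBpsPerSecond; the real code works on DataRate/TimeDelta types and has more guards):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Multiplicative mode: grow roughly 8% per second while capacity is unknown.
int64_t MultiplicativeIncreaseBps(int64_t current_bps, double secs_since_update) {
  const double alpha = std::pow(1.08, std::min(secs_since_update, 1.0));
  return std::max<int64_t>(static_cast<int64_t>(current_bps * (alpha - 1.0)), 1000);
}

// Additive mode: add about one average packet per response time (rtt + 100 ms),
// assuming 30 fps video and 1200-byte packets.
int64_t AdditiveIncreaseBps(int64_t current_bps, double secs_since_update,
                            int64_t rtt_ms) {
  const double bits_per_frame = current_bps / 30.0;
  const double packets_per_frame = std::ceil(bits_per_frame / (8.0 * 1200.0));
  const double avg_packet_size_bits = bits_per_frame / packets_per_frame;
  const double response_time_ms = static_cast<double>(rtt_ms + 100);
  const double rate_bps_per_sec =
      std::max(4000.0, avg_packet_size_bits * 1000.0 / response_time_ms);
  return static_cast<int64_t>(rate_bps_per_sec * secs_since_update);
}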
void AimdRateControl::ChangeState(const RateControlInput& input,
                                  Timestamp at_time) {
  // Network normal:     if the rate control state is Hold, switch to Increase.
  // Network overusing:  switch to Decrease.
  // Network underusing: switch to Hold.
}
5 The final bitrate drives the Pacer, the encoder, and FEC
RtpTransportControllerSend::PostUpdates(...)
{
  pacer()->SetPacingRates(update.pacer_config->data_rate(),
                          update.pacer_config->pad_rate());
}
The delay-based bitrate lands in delay_based_limit_, set in SendSideBandwidthEstimation::UpdateDelayBasedEstimate().
The loss-based bitrate is updated in SendSideBandwidthEstimation::UpdatePacketsLost().
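For completeness, the loss-based rule in UpdatePacketsLost follows the classic GCC thresholds (2% / 10%). A hedged sketch of just the core decision (the real code also windows the loss reports and respects min/max bounds):

#include <cstdint>

int64_t LossBasedUpdateBps(int64_t current_bps, double loss_fraction) {
  if (loss_fraction < 0.02) {
    // Low loss: gently increase, about 8%.
    return static_cast<int64_t>(current_bps * 1.08);
  } else if (loss_fraction <= 0.10) {
    // Moderate loss: hold the current rate.
    return current_bps;
  }
  // Heavy loss: back off proportionally to the loss rate.
  return static_cast<int64_t>(current_bps * (1.0 - 0.5 * loss_fraction));
}

The final target is then capped by delay_based_limit_, so whichever controller is more conservative wins.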