//secon2017
{
007.我的研究分析{
//端到端延时=源节点到目的节点的路由跳数×(处理延时+传输延时+排队延时)
//能耗、跳数、距离、吞吐量、延时*、丢包率*、包个数
//网络负载:竞争时间=数据传输时间+冲突退避时间
//缓存队列占用率、信道利用率
//丢包后退避等待,然后重传
//调整节点发包间隔控制网络流量和负载
//设置延时的随机取值范围
//随机设置节点的发包时间和发包频率
//不同尺度的路由优化:不同评价标准的路由优化
//节点的加入、退出:Hello包路由发现,更新路由表,交换链路状态,进行路由决策,
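//上面的端到端延时公式可以用一段最小的Python示意表达(各延时取常量,数值为演示用假设,非论文实现):

```python
# 端到端延时 = 源到目的的路由跳数 × (处理延时 + 传输延时 + 排队延时)
# 各延时取固定值(单位: ms), 仅为笔记中公式的示意
def end_to_end_delay(hops, processing_ms, transmission_ms, queuing_ms):
    return hops * (processing_ms + transmission_ms + queuing_ms)

print(end_to_end_delay(4, 1.0, 2.0, 0.5))  # 4跳, 每跳3.5ms → 14.0
```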
}
0.Title{
//Reliable Stream Scheduling with Minimum Latency for Wireless Sensor Networks
0.无线传感器网络中具有最小延迟的可靠流调度
}
0.Abstract{
//As sensor networks are increasingly deployed for critical applications,reliability and latency guarantee become more important than ever to meet industrial requirements.
0.随着传感器网络被逐渐部署到关键应用当中,为了满足工业需求,保证网络的延迟和可靠性变得越来越重要。
//In this paper,we investigated the impact of link burstiness on stream scheduling using a data trace of 3600000 packets collected from an indoor testbed.
0.本文中,我们利用从室内测试平台收集的3600000个数据包的记录,研究了链路突发性对流调度的影响。
//We demonstrate that a good tradeoff between reliability and latency can be achieved by allocating certain time slots on each link for stream transmissions based on its burst length and frequency distributions.
0.实验表明,根据每条链路的突发长度和频率分布为其分配一定的流传输时隙,可以很好地平衡可靠性与延迟。
//With this observation,we design transmission scheduling and routing algorithms for data streams to meet a specified reliability requirement while minimizing end-to-end latency.
0.基于这一发现,我们为数据流设计了传输调度和路由算法,在满足指定可靠性需求的同时最小化端到端延迟。
//For the multi-stream scheduling problem,we prove its NP-hardness and design an algorithm that achieves the reliability guarantee and an O(log n)approximation of minimizing the maximum end-to-end latency for any stream.
0.对于多数据流调度问题,我们证明其为NP难问题,并设计了一个算法,在保证可靠性的同时,对最小化任意流的最大端到端延迟达到O(log n)近似。
//Trace-driven simulations show that our solution meets specified end-to-end reliability requirements with latency up to 9.18 times less than existing solutions.
0.记录驱动的仿真实验表明,我们的方案满足指定的端到端可靠性需求,且延迟最多比现有方案低9.18倍。
}
1.Introduction{
1.As{
//As sensing and wireless technologies become mature,wireless sensor networks are increasingly deployed for applications including industrial plant monitoring,building occupancy sensing,and manufacturing pipeline monitoring.
0.随着传感和无线技术的成熟,无线传感器网络正越来越多地部署到诸多应用当中:如工业厂房监测、建筑物占用感测、制造管线监测。
//These applications usually require sensing data to be collected periodically for real-time monitoring and diagnosis purposes.
0.这些应用通常需要定期收集传感数据以便实时监测并进行诊断。
//Therefore,one or more real-time data streams need to be transferred among the sensors and controllers via multi-hop with high reliability to detect events of interest.
0.因此,一个或多个实时数据流需要以高可靠性通过多跳方式在传感器和控制器之间传输,以检测关注事件。
//However,the problem of achieving specified reliability requirement in multi-hop low power sensor networks is very challenging due to the unpredictable packet loss caused by various interferences and many other factors.
0.然而在多跳低功耗传感器网络中达到指定的可靠性需求非常具有挑战性,因为各种干扰和其他因素会造成不可预知的丢包率。
}
2.There{
//There has been extensive research on link quality modeling and characterization.
0.关于链路质量的建模和特征已经有了广泛的研究。
//Srinivasan et al. developed a link model for measuring the bursty packet loss.
0.Srinivasan等人为测量链路突发性丢包建立了模型。
//Munir et al. focused on capturing the worst-case packet loss on each link.
0.Munir等人试图捕获每条链路在最坏情况下的丢包率。
//These valuable results show that in a closed indoor environment with limited external interferences,certain links have more predictable reliability than others.
0.这些有价值的结果表明,在外部干扰有限的封闭室内环境中,某些链路的可靠性比其他链路更可预测。
//Based on this observation,a few wireless sensor network design have successfully achieved 100% reliability in deployed systems.
0.基于这一观察,一些无线传感器网络部署系统的设计已经成功实现了百分之百的可靠性。
//On the other hand, such high reliability is usually obtained at the cost of high latency and energy consumption.
0.另一方面,如此高的可靠性通常是以高延迟和高能耗为代价获得的。
//Many applications require high reliability but not necessarily 100%, while low end-to-end delay is also preferred.
0.许多应用需要高可靠性但不一定要达到百分之百,同时也希望端到端延迟较低。
//For instance, in a home health monitoring system, 95% reliability is sufficient to monitor the health condition of the user and a few packet loss does not affect the monitoring quality significantly.
0.例如,在一个家庭健康监测系统中,百分之95的可靠性就能满足对用户健康条件的监测,而少量的数据包丢失并不会对监测质量造成明显的影响。
//Therefore, it is desirable to find a good tradeoff between reliability and latency in stream data transmission.
0.因此,有必要在数据流传输中寻找一个可靠性与延迟的权衡之策。
}
3.To{
//To address this problem, we first conduct an analysis of a data trace collected from a real indoor sensor network testbed to learn the patterns of packet loss on different links.
0.为了解决这个问题,我们首先分析从一个真实室内传感器网络测试平台收集的数据包记录,以了解不同链路上丢包的模式。
//We show that most bursty links with low burst frequencies have consistent loss patterns over a long time period.
0.我们发现,大多数突发频率较低的突发链路在较长时间内具有一致的丢包模式。
//Therefore, by allocating certain time slots on such links,i.e. restricting the maximum number of retransmissions,a specified level of link reliability and corresponding latency bound can be achieved.
0.因此,通过在此类链路上分配一定的时隙,即限制最大重传次数,可以达到指定水平的链路可靠性及相应的延迟上界。
//Our solution is based on the following insight: most of the links do not show their worst case burst behaviors frequently.
0.我们的方案基于如下洞察:大部分链路并不会频繁表现出最坏情况的突发行为。
//Hence,instead of allocating time slots based on maximum burst length ,if we allocate slots based on frequency distribution of bursts subject to link interferences and utilize load-balancing algorithm ,the solution not only offers expected reliability but reduces end-to-end latency significantly.
0.因此,如果我们不按最大突发长度分配时隙,而是基于受链路干扰影响的突发频率分布来分配时隙,并利用负载均衡算法,该方案不仅能提供预期的可靠性,还能显著降低端到端延迟。
}
4.Thus{
//Thus,we design a cross-layer solution to schedule streams with different reliability requirements.
0.因此,我们设计了一种跨层方案来调度具有不同可靠性需求的数据流。
//At link layer,a link reliability table stores the numbers of time slots required to achieve different levels of packet delivery ratio for this link.
0.在链路层,链路可靠性表存储了该链路达到不同级别数据包交付率所需的时隙数量。
//At network layer,our algorithm decides the number of allocated slots for each stream on each link to meet the specified reliability requirement and minimize the maximum end-to-end latency bound of all streams.
0.在网络层,我们的算法决定每条链路上每个流的分配时隙数,以满足指定的可靠性需求,并最小化所有流的最大端到端延迟上界。
//When scheduling single stream,our algorithm meets the reliability requirement with the minimum latency bound.
0.当调度单一数据流时,我们的算法以最小的延迟上界满足可靠性需求。
//For multiple data streams in the network, the interferences among streams need to be considered.
0.对于网络中的多个数据流,需要考虑数据流之间的干扰。
//We prove the hardness of this problem and design an approximation algorithm with reliability guarantee and bounding the maximum latency.
0.我们证明了该问题的难度,并设计了一种近似算法来保证可靠性并限定延迟的最大值。
//Different from previous research which considers 100% reliability only, our algorithms provide rigorous guarantees on the maximum delay with any given reliability requirements.
0.不同于先前只考虑百分之百可靠性的研究,我们的算法在任意给定可靠性要求下对最大延迟提供严格保证。
}
5.Extensive{
//Extensive trace-driven simulations with 300 streams show that our solution successfully meets all the specified reliability requirements:
0.包含300个数据流的大量记录驱动仿真表明,我们的方案成功地满足所有指定的可靠性需求:
//given the stream reliability requirements 95%,97%,99%,the average end-to-end delivery rates achieved are 95.7%,97.76% and 99.16%.
0.给定流可靠性需求分别为95%、97%、99%时,达到的平均端到端交付率为95.7%、97.76%和99.16%。
}
6.TheContributions{
//The contributions of this paper are summarized as follows:
0.本文的贡献总结如下:
1.{
//We reveal the impacts of link burstiness on the reliability and latency of stream transmissions with data-driven analysis.
0.我们通过数据驱动分析揭示了链路突发性对流传输可靠性和延迟的影响。
}
2.{
//We provide a cross-layer solution to meet the specified reliability requirement and minimize end-to-end latency bound.
0.提出了一种跨层方案来满足特定可靠性需求并最小化端到端延迟范围。
//We prove the stream scheduling problem is NP-hard and design an algorithm that achieves the specified reliability guarantee and an O(log n)approximation of minimizing the maximum end-to-end latency bound of all streams.
0.我们证明流调度问题是NP难问题,并设计了一种算法,在保证指定可靠性的同时,对最小化所有流的最大端到端延迟上界达到O(log n)近似。
//Note that the end-to-end latency can not be infinity in such a case as the number of retransmissions are restricted to a finite limit.
0.值得注意的是:由于重传次数被限制为有限值,这种情况下端到端延迟不会是无穷大。
}
3.{
//Our solution successfully meets the end-to-end stream reliability requirements while reducing end-to-end latency by up to 9.18 times comparing to existing algorithms in extensive trace-driven evaluations.
0.在大量记录驱动的评估中,该方案成功满足端到端流可靠性需求,同时端到端延迟最多比现有算法低9.18倍。
}
}
7.TheRest{
//The rest of the paper is organized as follows, Section II states the definition of the problem.
0.文章的剩余部分组织结构如下,第二部分阐述问题的定义。
//In Section III, we discuss how to build tables for trading reliability with delay at link level and analyzes the steadiness of the tables.
0.在第三部分,我们讨论如何在链路层面构建以延迟换取可靠性的权衡表,并分析这些表的稳定性。
//The design of our algorithms is described in Section IV.
0.第四节描述了算法设计。
//We conduct simulations for the proposed algorithms and report the results in Section V.
0.第五节对所提算法进行仿真并报告结果。
//Finally, Section VII is the conclusion.
0.第七节,总结。
}
8.Fig1{
//A monitoring stream example: transmitting multiple samples to the sink node.
0.监测流实例:向sink节点传输多个样本。
//Number of packets per batch: p , Batch reliability requirement: e.
0.每批的数据包个数:p,批可靠性需求:e。
//Number of batches: b, Stream reliability requirement: u.
0.批次个数:b,流可靠性需求:u。
}
}
2.Definition{
1.{
//In many monitoring applications, periodic sensing data stream are transmitted from the sensors to the sink via multi-hop network.
0.在许多监测应用中,定期传感数据通过多跳网络从传感器传递到sink节点。
//A typical data stream instance consists of a number of continuous or discrete samples, which are transmitted with a batch, a set of related packets.
0.典型的数据流实例由若干连续或离散的样本组成,这些样本以批(一组相关数据包)的形式传输。
//This scenario is summarized as follows:
0.这种情况总结如下:{
1.{
//Each stream generates a batch of multiple packets periodically.
0.每个数据流周期性地产生一批数据包。
}
2.{
//For each stream, there is a stream reliability requirement:
0.每个数据流,都有一定的可靠性需求:
//a specified percentage of batches is required to be delivered for the continuous monitoring purposes.
0.为达到持续的监测目的,需要一定的批数据交付百分比指标。
}
3.{
//For each batch of packets, there is a batch reliability requirement:
0.对于每批数据包,都有批可靠性需求:
//a minimal percentage of packets is required to be delivered reliably for each batch, to achieve the quality of a sample.
0.为保证采样质量,每批数据需要有可靠的最小数据包传递百分比。
}
}
}
2.{
//This scenario is shown at Figure 1.
0.如图1中的场景。
//Generally, each data stream is represented by a sequence of batches of packets from source s to sink t.
0.通常,每个数据流表示为从源s到sink t的一系列数据包批次。
//The batch rate at the source is denoted by b, which represents the number of batches per hour.
0.源节点的批速率用b表示,代表每小时的批次数。
//The stream reliability requirement is u.
0.数据流可靠性需求为u。
//Each batch has a set of p packets and the batch reliability requirement is e.
0.每批数据包含p个数据包,批数据的可靠性需求为e。
//We formally describe this stream scheduling problem as a 6-tuple(s,t,b,u,p,e).
0.我们将数据流调度问题定义为6元组(源节点s,sink节点t,速率b,流可靠性u,批处理中的包个数p,批组的可靠性e)
}
3.{
//The formal description of our problem is as the following:
0.问题描述定义如下:
//given a set of streams needs to be scheduled, each stream i is represented by 6-tuple(s,t,b,u,p,e)
0.给定一组需要调度的数据流,每个数据流i用6元组来表示。
//how to find routes and transmission schedules to meet the stream and batch reliability requirement u,e of all streams while minimizing maximum latency?
0.如何寻找路由和传输调度,以满足所有流的流可靠性需求u和批可靠性需求e,同时最小化最大延迟?
//In this paper, we use time slots as the measurement of latency.
0.本文采用时隙来衡量延迟。
//A time slot is a scheduling time unit such that any node can transmit one packet to another node and receive an acknowledgment if the packet is delivered.
0.时隙是一个调度时间单位:在一个时隙内,任一节点可向另一节点传输一个数据包,并在数据包成功交付时收到一个确认。
}
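//上述6元组定义可以用一个简单的数据结构示意(字段名取自笔记中的符号,数值为演示用假设):

```python
# 流调度问题的6元组 (s, t, b, u, p, e) 的最小示意
from dataclasses import dataclass

@dataclass
class Stream:
    s: str      # 源节点
    t: str      # 目的(sink)节点
    b: int      # 批速率: 每小时的批次数
    u: float    # 流可靠性需求: 需成功交付的批次比例
    p: int      # 每批的数据包个数
    e: float    # 批可靠性需求: 每批需交付的最小数据包比例

stream = Stream(s="n1", t="sink", b=60, u=0.95, p=5, e=0.8)
print(stream.p * stream.e)  # 每批成功所需的最少接收包数 → 4.0
```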
}
3.Table{
//Link Reliability Table Construction
0.链路可靠性表的构建
1.{
//Packet trace collection
0.数据包记录收集。
//We conducted a 21-day experiment on a testbed with 48 Tmote nodes running TinyOS.
0.在一个包含48个运行TinyOS系统的Tmote节点的测试平台上进行了21天的实验。
//For each pair of two nodes, a total of 3600000 packets is transmitted to evaluate packet delivery traces.
0.对于每对节点,共传输3600000个数据包以评估数据包交付记录。
//The transmission rate is approximately 200 packets per second for all links.
0.所有链路的传输速率约为每秒200个数据包。
//The transmissions of different links are scheduled at different time slots to avoid a collision.
0.不同链路的传输安排在不同的时隙来避免数据冲突。
//The receiving status of each packet at every node is recorded as a sequence of 0 and 1.
0.节点对数据包的接收状态用0和1组成的序列来记录。
//More specifically, a packet delivery trace D contains a 1 at position d means the d-th packet is successfully received, which we denoted by D[d]=1.
0.具体来讲,数据包交付记录D在位置d处为1表示第d个数据包被成功接收,记为D[d]=1。
//So for each link, we get a trace D with length 3600000.
0.因此对于每条链路,我们都得到一条长度为3600000的记录D。
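//交付记录的表示可以如下示意(0/1序列为虚构的短数据,实验中的记录长度为3600000):

```python
# 交付记录D: 0/1序列, D[d]==1 表示第d个数据包被成功接收
D = [1, 1, 0, 1, 1, 1, 0, 1]    # 虚构的短记录
prr = sum(D) / len(D)            # 包接收率(PRR) = 1所占的比例
print(prr)  # → 0.75
```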
}
2.{
//Transmission for a batch
0.一批数据的传输
1.{
//Consider a case of sending a batch of packets with the retransmissions at the link layer,
0.考虑在链路层带重传地发送一批数据包的情形,
//the transmitter keeps retransmitting the packets to the receiver until it receives the full batch.
0.发射器一直向接收端重传数据包直到接收端接收到整批数据。
//Each transmission takes exactly one time slot.
0.每次传输恰好占用一个时隙。
//In our analysis, we set the number of allocated time slots l per batch of each link.
0.在我们的分析中,我们为每条链路的每批数据设置分配时隙数l。
//The transmitter will immediately stop retransmitting the rest of packets in the batch if it already uses l time slots.
0.如果已经使用了l个时隙,则发送器将立即停止重传批数据中剩余的数据包。
//We change the number of the allocated time slots l on different links to show how it affects reliability requirements e, u and the latency.
0.我们改变不同链路上的分配时隙数l,以展示其如何影响可靠性需求e、u以及延迟。
}
2.{
//As the previous definition,
0.如之前所定义的,
//a batch is treated as successfully transmitted if and only if there are more than p*e received packets,
0.当且仅当接收到的数据包多于p*e个时,该批才被视为传输成功,
//where p is the number of packets per batch and e is the batch reliability requirement.
0.其中p为每批数据中的数据包个数,e表示批数据的可靠性需求。
//Therefore, given a trace D, we can calculate the delivery rate,
0.因此,给定一条记录D,我们可以计算出交付率,
//which is the rate of successfully transmitted batches,
0.即批数据的传输成功率,
//if we assume the starting time of the transmission is uniformly distributed in D.
0.前提是假设传输的起始时间在记录D中均匀分布。
}
3.{
//In detail, denote |.| is the number of received packets in a length of the trace.
0.详细地讲,用|.|表示一段记录中接收到的数据包个数。
//Given parameters p, l, for an arbitrary starting time slot d at trace D,
0.给定参数p、l,对于记录D中任意起始时隙d,
//the number of received packets during the time slot d to d+l-1 is min(|D[d,d+l-1]|,p)
0.从时隙d到d+l-1期间接收到的数据包个数为min(|D[d,d+l-1]|,p)。
}
4.{
//Since we assume the starting time is uniformly distributed on the trace,
0.既然假设起始时间在记录上均匀分布,
//we calculate the empirical distribution function F respected to the number of received packets among all possible starting time d.
0.我们针对所有可能的起始时间d,计算接收数据包个数的经验分布函数F。
//Therefore, the delivery rate of a batch is actually the value of 1-F.
0.因此,批的交付率实际上就是1-F的值。
}
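//按本节定义,给定记录D、批大小p、批可靠性需求e和分配时隙数l,批交付率可以如下计算(最小示意,假设起始时隙在记录上均匀分布;记录数据为虚构):

```python
# 对每个可能的起始时隙d, 接收包数为 min(|D[d, d+l-1]|, p);
# 接收包数 >= p*e 时该批成功; 批交付率 = 成功的起始位置所占比例(即1-F)
def batch_delivery_rate(D, p, e, l):
    need = p * e
    ok = 0
    starts = range(len(D) - l + 1)       # 遍历所有可能的起始时隙
    for d in starts:
        received = min(sum(D[d:d + l]), p)
        if received >= need:
            ok += 1
    return ok / len(starts)

# 虚构记录: 每第3个包丢失; p=5, e=0.8 时, l=7 足以让每个窗口收到至少4个包
D = [1, 1, 0] * 100
print(batch_delivery_rate(D, 5, 0.8, 7))  # → 1.0
```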
5.{
//Figure 2 is such an example with batch size p=5 and batch requirement e=80%.
0.图2所示为批大小p=5且批可靠性需求e=80%的例子。
//It shows the cumulative distribution of received packets in a batch under different allocated time slots l.
0.显示了在不同时隙l下批数据中接收数据包个数的累积分布。
//For simplicity, we only show allocated time slots l being 5, 15, 100 or 1000.
0.为简单起见,我们仅展示分配时隙l为5、15、100或1000的情况。
}
6.{
//Now, by drawing a vertical line as the threshold of specific batch requirement e with size p,
0.现在,在批大小为p、批可靠性需求为e处作一条垂线作为阈值,
//each distribution gives the percentage of batches that satisfies e.
0.每个分布即给出满足e的批所占的百分比。
//For example, allocating 5 time slots for each batch gives you 91.5% delivery rate for e=80%.
0.例如,当e=80%时,为每批分配5个时隙可得到91.5%的交付率。
//Therefore, we can record this delivery rate under different allocated time slots l.
0.因此,我们可以记录不同分配时隙数l下的交付率。
}
7.{
//Notice that these tables might not reflect link qualities for the long term in reality.
0.值得注意的是,这些表在现实中可能无法反映长期的链路质量。
//However, we can also easily update tables in a real-time system.
0.然而,在实时系统中我们也可以很容易地更新这些表。
//By taking the transmission results as an additional training set,
0.将传输结果作为一个额外的训练集,
//we can use a weight parameter to combine the old and the new training set for updating tables.
0.我们可以用一个权重参数将新旧训练集结合来更新表。
}
8.Fig2{
//Empirical distribution for different values of l(allocated time slots)
0.不同分配时隙数l下的经验分布
//Table I: Link reliability table record the batch delivery rate with corresponding allocated time slots at trace D.
0.表1:链路可靠性表,记录了记录D中不同分配时隙数对应的批交付率。
}
}
3.{
//Link reliability table
0.链路可靠性表
1.{
//Based on our empirical analysis,
0.基于经验分析,
//we design the link reliability table as a 2-column data frame recording(batch)delivery rates and allocated time slots on a link.
0.我们将链路可靠性表设计为两列的数据表,记录链路上的(批)交付率和分配时隙数。
//It treats delivery rate as the key such that our algorithm can query the minimum allocated time slots for the link to meet specific delivery rate.
0.表以交付率为键,使得我们的算法可以查询该链路满足特定交付率所需的最小分配时隙数。
//The allocated time slots for different delivery rates are calculated based on the trace analysis.
0.不同交付率对应的分配时隙数根据记录分析计算得出。
//The example is shown at Table I.
0.如表1中的实例。
}
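//链路可靠性表及其查询可以如下示意(表中数值为虚构示例,实际值由记录分析得出):

```python
# 两列表: (批交付率, 分配时隙数), 按交付率升序排列; 以交付率为键查询最小时隙数
table = [(0.90, 5), (0.95, 8), (0.99, 15), (0.999, 30)]

def min_slots(table, target_rate):
    for rate, slots in table:
        if rate >= target_rate:
            return slots          # 第一个满足目标交付率的条目即时隙最少
    return None                   # 该链路无法达到目标交付率

print(min_slots(table, 0.95))   # → 8
print(min_slots(table, 0.97))   # → 15
```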
2.{
//We found that the tables of different links are highly correlated with the link burstiness,
0.我们发现不同链路的表与链路突发性高度相关,
//so we classify links according to different patterns of burstiness and discuss table steadiness among all classes as follows.
0.因此我们根据不同的突发模式对链路进行分类,并如下讨论各类链路的表稳定性。
}
3.{
//With many complex factors affecting bursty packet loss distribution,
0.由于许多复杂因素影响突发丢包的分布,
//each link might show a totally different characteristic of the tradeoff between delivery rate and latency.
0.每条链路在交付率与延迟的权衡上可能表现出完全不同的特征。
//If the link usually suffers from long consecutive packet losses,
0.如果链路经常遭受长串连续的丢包,
//then increasing allocated time does not improve delivery rate significantly.
0.那么增加分配时隙并不能显著提高交付率。
//On the other hand, if the link suffers from short bursty loss,
0.另一方面,如果链路遭受的是短突发丢包,
//increasing the allocated time slots might enhance reliability.
0.增加分配时隙则可能提升可靠性。
//Here a bursty packet loss is represented by a substring with consecutive 0's in the delivery trace.
0.突发性的包丢失在交付记录中用连续的0来表示。
}
4.{
//Based on this intuition, we look at the bursty packet loss frequency and length on different links.
0.基于这一直觉,我们考察不同链路上突发丢包的频率和长度。
//We define burst frequency as the average number of bursts within one hour of the delivery trace, and average burst length as the average length of a burst.
0.我们将突发频率定义为交付记录中每小时出现的突发的平均次数,将平均突发长度定义为每次突发的平均长度。
//Next, we classify links into 4 classes by their burst length and burst frequency.
0.接着,根据链路的突发长度和突发频率将链路分为4类。
//We pick links with 70% PRR as available links and use the median values as the threshold to divide these classes.
0.我们将PRR达到70%的链路作为可用链路,并用中位数作为划分各类的阈值。
//burst frequency per hour >=1157 is high-frequency links and average burst length >=2.57 as long burst length links.
0.每小时突发频率>=1157的为高频(HF)链路,平均突发长度>=2.57的为长突发(LB)链路。
//Table II is the distribution of these classes.
0.表2是链路种类的分布。
}
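//按上述中位数阈值的四分类可以如下示意(函数名为演示用假设):

```python
# 按每小时突发频率与平均突发长度把链路分为 HFLB / HFSB / LFLB / LFSB 四类
FREQ_TH = 1157    # 每小时突发频率的中位数阈值(来自原文)
LEN_TH = 2.57     # 平均突发长度的中位数阈值(来自原文)

def classify_link(burst_freq_per_hour, avg_burst_len):
    f = "HF" if burst_freq_per_hour >= FREQ_TH else "LF"
    b = "LB" if avg_burst_len >= LEN_TH else "SB"
    return f + b

print(classify_link(2000, 3.0))   # → HFLB
print(classify_link(100, 1.2))    # → LFSB
```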
5.{
//If burst distribution is stable, the link reliability tables should be stable too,
0.如果突发分布稳定,则链路可靠性表也应该稳定,
//given that the measurement is taken for a sufficient period to capture the distribution of burst.
0.前提是测量持续足够长的时间以捕获突发的分布。
//To check table steadiness of each class,
0.为了检查每一类链路表的稳定性,
//we calculate the standard deviation of delivery rate over different measurement days with the numbers of allocated time slots.
0.我们计算不同测量日期下、各分配时隙数对应的交付率的标准差。
//The standard deviation is calculated from a set of reliability tables with same measuring length but different measuring days.
0.标准差由一组测量长度相同但测量日期不同的可靠性表计算得出。
//For consistency, we use 3 days of a delivery trace as measuring length.
0.为保持一致,我们使用3天的交付记录作为测量长度。
//Therefore, for each link, there are 18 reliability tables and we calculate the standard deviation among them.
0.每个链路都有18个可靠性表,我们计算它们的标准差。
}
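//某一分配时隙数下,跨不同测量窗口的交付率标准差可以如下计算(交付率数据为虚构,仅作演示):

```python
# 同一链路、同一分配时隙数下, 18个不同测量窗口得到的交付率的标准差
import statistics

rates = [0.952, 0.948, 0.955, 0.950, 0.951, 0.949] * 3   # 虚构的18个交付率
sd = statistics.pstdev(rates)
print(sd < 0.01)   # → True, 即标准差低于1%
```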
6.{
//Figure 3 shows the standard deviation given different number of allocated time slots in each class.
0.图3显示了每类链路中不同分配时隙下的标准差。
//The x-axis is the number of allocated time slots and y-axis is the average standard deviation of each class.
0.x坐标表示分配的时隙数,y坐标表示每种链路的平均标准差。
//Error bars showed the standard errors among links at the same class.
0.误差线表示同类链路之间的标准误。
//Generally, the delivery rate changes little with a high number of allocated time slots(>30).
0.一般而言,当分配时隙大于30时,交付率的变化很小。
//However, for HFLB and HFSB classes, the delivery rate is not stable with a low number of allocated time slots.
0.然而,对于HFLB和HFSB类链路,在分配时隙数较少时交付率不稳定。
//We believe it is because links in these classes suffer burstiness frequently,
0.我们认为这是由于这些类别的链路频繁出现突发丢包,
//which affect burst distribution hugely and make link reliability diverse among different measuring days.
0.极大地影响了突发分布,使链路可靠性在不同测量日期间差异较大。
//In sum,the standard deviation of the delivery rate among all classes is lower than 1%,
0.总之,所有类型的链路的交付率的标准差小于百分之1,
//which we believe is stable enough for most applications.
0.对于大多数应用来说足够稳定。
//For other applications with hard realtime guarantees,
0.对于其他高实时性的应用,
//we suggest using LFLB and LFSB links since these links show very low standard deviation(<0.1%),
0.我们建议使用LFLB和LFSB链路,因为这些链路的标准差非常低(<0.1%),
//which imply high consistency of the tradeoff between reliability and delay.
0.这意味着可靠性与延迟的权衡具有高度一致性。
}
7.{
//In long-term, the classes of links need to be updated according to table changes.
0.长期内,链路的种类需要根据表的变化进行更新。
//The update frequency is based on the environment of sensor network.
0.更新频率取决于传感器网络环境。
//According to our analysis,
0.根据分析,
//it is suitable to update classes once per month in our testbed.
0.在我们所用的测试平台中,每月更新一次分类较为合适。
//Such overhead is manageable for real world application.
0.这样的开销比较符合现实应用。
}
8.Fig3{
//The average standard deviation of delivery rate over different measuring days among all classes.
0.所有类型中不同测算天数内交付率的平均标准差。
}
}
}
4.IV{
//Scheduling Algorithm Design
0.调度算法设计
1.{
//In this section, we will describe how to use the link reliability tables to achieve end-to-end stream reliability requirements with minimum delay.
0.该节描述如何用链路可靠性表实现最小延迟下的端到端数据流可靠性需求。
//We start from three different application scenarios and we will discuss the algorithms for each scenario separately.
0.从三种不同的应用场景出发,讨论每个场景中的算法。
}
2.{
//Notice that our scheduling algorithms are focused on choosing routes for each stream rather than transmission time slots.
0.调度算法关注的是数据流的路由选择而不是传输时隙。
//Therefore, for the flexibility, we don't consider internal interference in second and third scenarios
0.为了灵活应用,不考虑场景2和场景3中的内部干扰
//because it is also relevant to transmission schedule and the protocol of sensor network.
0.因为这与传输调度和传感器网络协议相关。
//However, we also provide a scheduling algorithm which avoids internal interference in section IV-D3.
0.不过,我们也在第IV-D3节提供了一种避免内部干扰的调度算法。
}
3.A{
//Three streaming scenarios
0.三种流场景
//To quantify the end-to-end latency, recall that one packet transmission will take a time slot.
0.为了量化端到端延迟,传输一个包需要一个时隙。
//The exact length of a time slot depends on the implementation of the protocol and here we count the number of slots taken from source to destination.
0.确切的时隙长度取决于协议的实现,这里我们统计从源到目的所占用的时隙个数。
//Depending on applications, we may have three streaming scenarios,
0.根据应用不同,我们可能有三种流场景:
//single stream with a single batch, single stream with multiple batches and multiple streams with multiple batches.
0.单批单流、多批单流和多批多流。
}
4.Fig4{
//One stream transmitting one batch with latency bound=70.
0.单流传输单批数据,延迟上界=70。
//In the worst case, the only batch may require 70 packet transmissions along the route to reach node t.
0.最坏情况下,这唯一的一批数据可能需要沿路由进行70次数据包传输才能到达节点t。
}
5.{
//Single Stream with a Single Batch.
0.携带单批数据的单流。
//In this scenario, we assume that there is only a single data stream in the network
0.该场景中,假设网络中只有一个数据流
//and the stream has only a single batch of packets.
0.且流中只有一批包。
//Figure 4 illustrates one example.
0.图4说明了一个实例。
//Suppose in this example we need to allocate 20,30 and 20 time slots respectively along the route to achieving the reliability requirement.
0.假设该实例中沿路由分别需要分配20、30和20个时隙来实现可靠性需求。
//Therefore the worst case end-to-end latency for the single batch to successfully arrive from s to t with the given reliability requirement is 70,
0.因此,最坏情况下,在给定可靠性需求的条件下,该单批数据从s成功抵达t的端到端延迟是70,
//by adding up the allocated slots along the route.
0.通过沿途路由分配的时隙相加求得。
}
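//图4的例子中,最坏情况端到端延迟即沿途各链路分配时隙之和,可示意为:

```python
# 单批单流: 最坏情况端到端延迟 = 路由上各链路分配时隙之和
route_slots = [20, 30, 20]     # 各跳为满足可靠性需求分配的时隙(图4的数值)
print(sum(route_slots))        # → 70
```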
6.{
//In general, each link may have multiple entries in the link reliability table.
0.每条链路的可靠性表中一般都有多个条目。
//Thus we want to find among all routes that satisfy the given reliability requirement the one with minimum total latency.
0.因此,我们想从所有满足给定可靠性需求的路由中找到延迟最小的。
//This problem has similarity to finding the shortest path when the weight of a link is taken as the number of allocated time slots of the link.
0.该问题与寻找最短路径问题类似,其中链路的权重为该链路的分配时隙数。
//But the challenge is to consider the reliability of the route as well.
0.挑战是需要同时考虑路由的可靠性。
//Later we show how to use a dynamic programming algorithm to solve it optimally.
0.随后我们展示如何用动态规划算法最优地解决该问题。
}
7.Fig5{
//One stream transmitting multiple batches with latency bound=30.
0.单流传输多批数据,延迟上界=30。
//In the worst case, the node t may need to wait for 30 packet transmissions to receive a new batch.
0.在最坏情况下,节点t可能需要等待30次数据包传输才能接收到新的一批。
}
8.{
//Single Stream with Multiple Batches.
0.多批次单流。
//In this case, we have a single data stream with multiple(possibly infinite)batches of packets.
0.该情形下,单个数据流包含多批(可能无限批)数据包。
//For the example in Figure 5, the node s generates and sends out batches continuously.
0.如图5中的实例,节点s持续生成并发送批数据。
//Now we have a pipeline of packets traveling along the route.
0.此时数据包沿该路由形成流水线式传输。
//While the first batch is being transmitted on the second link the second batch can be transmitted along the first link
0.当第一批次在第二链路传递时,第二批次可以沿着第一链路传递
//here we assume it is the full-duplex sensor network for simplicity.
0.为简单起见,假设传感器网络为全双工模式。
//It will still take 70 time slots for the first batch to arrive at the destination.
0.第一批次依然需要70时隙才能抵达目的地。
//But after that, every 30 slots the destination receives another batch.
0.但随后,每隔30时隙,目的地会接收到另一个批次。
//In this case, the natural latency measurement is the throughput of the flow
0.这种情况下,自然的延迟度量是流的吞吐量:
//the number of time slots for receiving an additional batch,
0.即接收一个额外批次所需的时隙数,
//which is the bottleneck latency along the route.
0.也就是该路由上的瓶颈延迟。
//Again we find the route with maximum throughput among all routes that meet the reliability requirement.
0.我们再次从所有满足可靠性需求的路由中找到最大吞吐量的路由。
//(minimum bottleneck latency)
0.(即最小瓶颈延迟)
}
9.Fig6{
//Two streams transmitting multiple batches with latency bound=30+20=50.
0.两条流传输多批数据,延迟上界=30+20=50。
//In the worst case, the node t may need to wait for 50 packet transmissions to receive a new batch from s since the two streams take turn to use the shared link.
0.在最坏情况下,由于两条流轮流使用共享链路,节点t可能需要等待50次数据包传输才能接收到来自s的新批次。
}
10.{
//Multiple Streams with Multiple Batches.
0.多批次多流
//In Figure 6, one of the links is shared between two streams,
0.图6中,有一条链路被两个流共享,
//requesting 30 and 20 slots respectively for each batch.
0.每批次分别需要30和20时隙。
//To meet the reliability guarantee for the shared link we run round-robin on all the data streams through it.
0.为了满足共享链路的可靠性保证,我们对通过该链路的所有数据流执行轮转(round-robin)调度。
//Therefore, the latency for a batch to get through the shared link in this example is 50=30+20 as the link is shared by two streams.
0.因此,本例中由于该链路被两条流共享,批通过共享链路的延迟为50=30+20。
//For each data stream, we measure its throughput as the bottleneck latency among the links on its route.
0.对于每条数据流,我们以其路由上各链路中的瓶颈延迟来衡量其吞吐量。
//We look for routing algorithms such that the maximum throughput for all streams is minimized.
0.我们寻找使所有流的最大瓶颈延迟最小化的路由算法。
}
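//多批/多流情形下的延迟度量可以如下示意(对应图5、图6的数值;函数与变量名为笔记演示用假设,非论文实现):

```python
# 流水线传输时, 每条流的吞吐量延迟取其路由上的瓶颈(最大)链路延迟;
# 共享链路上执行轮转调度, 该链路的延迟为各流所需时隙之和
def bottleneck_latency(route_slots, shared_extra=None):
    shared_extra = shared_extra or {}   # {链路下标: 其他流在该链路占用的时隙}
    per_link = [s + shared_extra.get(i, 0) for i, s in enumerate(route_slots)]
    return max(per_link)

print(bottleneck_latency([20, 30, 20]))           # 图5: 单流, 瓶颈为30
print(bottleneck_latency([20, 30, 20], {1: 20}))  # 图6: 第1条链路与另一流(20时隙)共享 → 50
```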
11.B{
//Single Stream with a Single Batch
0.单批单流
//As explained earlier, finding a route to minimize latency for a single stream with a single batch is similar to finding the shortest path.
0.如前所述,为单批单流寻找最小延迟路由与寻找最短路径相似。
//We adapt and modify the Floyd-Warshall algorithm which uses a dynamic programming approach.
0.我们改进并采用基于动态规划的Floyd-Warshall算法。
//The main challenge is to handle the reliability requirement.
0.处理可靠性需求是主要挑战。
//In particular, each link has multiple entries in its reliability table and we need to select and combine the link reliability to meet the end-to-end reliability requirement.
0.特别地,每条链路的可靠性表中有多个条目,我们需要选择并组合链路可靠性以满足端到端可靠性需求。
//This difference is reflected in how we combine the subproblems in the DP formulation.
0.这一差异体现在DP公式中子问题的组合方式上。
//For a high-level idea, we are storing and updating a table for each pair of nodes x,y to record the tradeoff between reliability and allocated time slot for source s and destination t.
0.从宏观上讲,我们为每对节点x、y存储并更新一个表,记录可靠性与分配时隙之间的权衡,以服务于从源s到目的t的流。
}
12.{
//The details are provided in the Algorithm 1.
0.在算法1中有详细的描述。
//For every pair of source x and destination y,
0.对于每对源x和目的y
//we maintain a table Txy(S) which records the reliability tradeoff of a path from x to y using intermediate nodes only from the set S.
0.维护一个表Txy(S),记录仅使用集合S中的中间节点、从x到y的路径的可靠性权衡。
//Initially Txy(0) is obtained directly from the data trace[Line 4].
0.初始时,Txy(0)直接从数据记录获得[第4行]。
//The final solution can be found in table Tst(V),
0.最终的结果集可以在表Tst中查到,
//where V is the set of all vertices.
0.其中V是所有节点的集合。
//The program builds upon subproblems:
0.程序建立在如下子问题之上:
//each time we introduce a new node into the set S at a time[Line 6],
0.每次向集合S中引入一个新节点[第6行],
//we incrementally update the tables[Line 9] until S=V[Line 12].
0.并递增式地更新表[第9行],直到S=V[第12行]。
//The value l of the entry (u,l)belong to Tst(V) is the minimum end-to-end latency to achieve reliability requirement for the stream.
0.Tst(V)中条目(u,l)的l值即为该流满足可靠性需求的最小端到端延迟。
//Note that the relative frequency of transmission is not applicable to the single stream case
0.相对传输频率并不适用于单数据流的情况
//so the parameter bi is not used in the algorithm.
0.算法中没有使用参数bi。
}
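//算法1中维护节点对权衡表的思路可以如下示意。注意:这里假设端到端可靠性按各链路交付率相乘组合,这是一个简化假设,并非论文的组合方式;表用(交付率, 时隙)条目列表表示,数值为虚构:

```python
# 类Floyd-Warshall的DP: 为每对节点维护(可靠性, 分配时隙)的权衡表
from itertools import product

def concatenate(Txv, Tvy):
    # 连接操作: 组合经过中间节点v的路径条目; 简化假设: 可靠性相乘, 时隙相加
    best = {}
    for (r1, l1), (r2, l2) in product(Txv, Tvy):
        r, l = r1 * r2, l1 + l2
        if r not in best or l < best[r]:
            best[r] = l
    return list(best.items())

def compare(Txy, Tnew):
    # 比较操作: 合并两表, 只保留帕累托最优条目(不存在可靠性更高且时隙更少者)
    merged = sorted(Txy + Tnew, key=lambda e: (-e[0], e[1]))
    frontier, best_l = [], float("inf")
    for r, l in merged:
        if l < best_l:
            frontier.append((r, l))
            best_l = l
    return frontier

Txv = [(0.99, 15), (0.95, 8)]
Tvy = [(0.98, 10)]
print(compare([(0.95, 30)], concatenate(Txv, Tvy)))
```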
13.Fig7{
//Two operations to update node-to-node tables.
0.更新节点间表的两个操作。
//after concatenating Txv(S) and Tvy(S), we can compare it with Txy(S).
0.连接Txv(S)和Tvy(S)之后,可将其与Txy(S)进行比较。
}
14.{
//since we are only working on the tables parameterized by p and e.
0.由于我们只处理以p和e为参数的表,
//the notation for ... is abbreviated to ... whenever the context is clear.
0.在上下文清晰时,...的记号缩写为...。
//the remaining question is to update the table.
0.剩下的问题就是如何更新表。
//for doing so, we need to introduce two operations, compare and concatenate.
0.为此,需要引入两个操作:比较(compare)和连接(concatenate)。
}
15.Concatenate{
//the concatenate operation is to combine the tradeoff information from two tables ... and ... for set s.
0.连接操作将集合S下两个表...和...的权衡信息组合起来。
//the end node of the first table is the start node of the other table.
0.第一个表的终点节点是另一个表的起点节点。
//we can concatenate the tables for xv and vy to form a new table denoted as ... which considers paths going through v.
0.我们可以将xv和vy的表连接起来形成一个新表,记为...,它考虑经过v的路径。
//taking figure 7 as an example.
0.如图7所示。
}
16.{
//for each entry ... the allocated time slot ... is equal to the minimum sum of allocated time slots from all the combinations between ... and ... satisfying ...
0.对于每个条目...,其分配时隙...等于满足...的所有...与...组合中分配时隙之和的最小值,
//where ... naively, to calculate a single new entry ...
0.其中...。朴素地,为了计算一个新的条目...,
//it takes ... where k is the number of entries in each table, to evaluate all the combinations of .. and ..
0.需要...的时间来评估..与..的所有组合,其中k为每个表中的条目数。
//however notice that .. is monotonically increasing when .. is increasing, so as ..
0.然而注意到,当..递增时,..单调递增,..亦然。
//therefore, if we fix the entry's index at p in ... when calculating the entry j in ..
0.因此,在计算..中的条目j时,如果将...中条目的索引固定在p处,
//corresponding相应的 computable可计算的
}
//Compare
{
//derive推导出 revise修改 estimate估计 perform执行 procedure过程
}
//C.Single Stream with Multiple Batches
{
//retain保留
}
//D.Multiple Stream with Multiple Batches
{
//bound范围 fulfill完成 denote表示 determine确定
}
//1)NP-hardness Proof:
{
//disjoint不相交的 return返回 arbitrary任意的 reduction约减
}
//2)Approximation Algorithm:
{
//arrange安排 clarity清晰 appendix附录 inspire产生 exponential指数
//constant常数 assign分配 construct构造 congested堵塞 original原始的
//correct正确 scale比例 assignment分配
}
//3)Scheduling Algorithm:
{
//fashion样式 inherit继承 determine确定 greedily贪婪地 iteratively迭代
//heuristic启发式 greedy贪心 objective目标 remain保留
}
//Algorithm 2 Assign-route
{
//previous以前是
}
}
5.Evaluation{
//evaluation评估 separate单独的 granularity粒度 empirical经验的 utilize利用
//congestion拥塞 course进程 particularly特别 scalability扩展性
}
6.RelatedWork{
//heuristic启发式 capacity容量 unicast单播 conservation保护 favorable有利的
}
7.Conclusion
8.References
}