Chapter 1: Computer Networks and the Internet

Original article: http://sunnyqjm.github.io/2018/05/05/computer_networking_01/

What is the Internet?

  • There are a couple of ways to answer this question. First, we can describe the nuts and bolts of the Internet, that is, the basic hardware and software components that make up the Internet. Second, we can describe the Internet in terms of a networking infrastructure that provides services to distributed applications.
  • A Nuts-and-Bolts Description

    Millions of computing devices — hosts/end systems — are interconnected by communication links and packet switches.

    • Hosts / end systems

      In the broad sense, this covers not only traditional PCs and Linux workstations but also phones, TVs, and any other device that can connect to the Internet.

    • Communication links

      There are many types of communication links, built from different physical media such as coaxial cable, copper wire, and fiber optics. The medium used affects the link's transmission rate.

      • Transmission rate
        • transmission rate = bandwidth ==> the rate at which the sender puts packets onto the link
        • note: throughput ==> the rate at which the receiver gets packets off the link
    • Packet switches

      A packet switch takes a packet arriving on one of its incoming communication links and forwards the packet on one of its outgoing communication links. The two most prominent kinds of packet switches on today's Internet are routers and link-layer switches.

      • Routers ==> typically used in the network core
      • Link-layer switches ==> typically used in access networks
    • Route / path

      The sequence of communication links and packet switches traversed by a packet from the sending end system to the receiving end system is known as a route or path. In other words, a route is simply a path through the network along which packets travel from sender to receiver.

    • Internet Service Providers (ISPs)

      Telecom operators that provide end users with Internet access, information services, and value-added services.

  • A Services Description

    The Internet is an infrastructure that provides services to distributed applications.

    The programs of a distributed application need a medium through which to communicate with one another, and the Internet plays exactly that role.

  • What is a Protocol?

    A protocol defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event.

  • The topology structure

    • Centralized
      Single point of failure; simple structure and management; the central node is expensive and hard to maintain.
      • Star
      • Tree
    • Distributed

      • Bus
      • Mesh (well suited to wireless communication)
      • Ring
  • Network structure

    • Network edge
    • Access network
    • Physical media
    • Network core

The Network Edge

  • end systems
  • access networks
  • links
  • Client and Server Programs (C/S)

    • C/S

      A client program is a program running on one end system that requests and receives a service from a server program running on another end system.

    • P2P

      The same end system can act both as a client requesting services and as a server providing them.
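The client/server model above can be sketched with Python's standard socket library. The echo "service", the loopback address, and the use of a thread to host both ends in one process are all illustrative choices, not part of the original notes:

```python
import socket
import threading

def run_echo_server(listener: socket.socket) -> None:
    """Server program: accept one client, receive its request, echo it back."""
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)  # receive the client's request
        conn.sendall(data)      # provide the "service": echo the bytes back

# Server end system: bind to an ephemeral port on localhost and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# Client end system: connect to the server, send a request, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply)  # b'hello'
```

In a P2P application the same program would contain both halves: a listening socket to serve peers and outgoing connections to request from them.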

  • Access network

    => The network (the physical links) connecting a host (end system) to its edge router is called the access network.

    => The access network largely determines the bandwidth a user can actually enjoy.

  • Physical Media

    • What travels over them is bits (a stream of bits)
    • guided media => wired
    • unguided media => wireless
    • Twisted Pair (TP)
    • Coaxial Cable
    • Fiber Optics (low bit error rate)
    • Terrestrial Radio Channels
    • Satellite Radio Channels

The Network Core

=> The mesh of packet switches and the links that interconnect them.

Circuit-switched and packet-switched networks can be compared to restaurant seating. A circuit-switched network is like a restaurant that takes reservations: once you book, the restaurant holds your table for the whole reserved period, so you are guaranteed a seat at any point in that window — and a walk-in customer cannot take it even if it sits empty. A packet-switched network is like a restaurant with no reservations: first come, first served, and if the restaurant is full you wait.

  • Circuit Switching

    The classic example is the telephone network.

    • Dedicated allocation => resources are assigned to a connection and used exclusively by it

    • Resource reservation => reserved resources are not shared and cannot be used by other connections

    • Multiplexing => slicing the network's resources

      • frequency-division multiplexing (FDM)
      • time-division multiplexing (TDM)
        (figure: FDM and TDM illustrated)
    • Requires call setup

    • Low delay, no resource contention

    • Because a circuit-switched network allocates a fixed share of bandwidth to each connection, the more connections it carries, the less bandwidth each connection enjoys.
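The fixed-bandwidth point can be made concrete with a small calculation. The numbers below (a 1.536 Mbps link divided into 24 TDM slots, 500 ms of call setup) follow the classic textbook example and are assumptions, not figures from these notes:

```python
def circuit_transfer_time(file_bits: int, link_bps: int,
                          n_slots: int, setup_s: float) -> float:
    # Under TDM, each circuit gets exactly 1/n_slots of the link bandwidth,
    # regardless of whether the other circuits are idle.
    circuit_bps = link_bps / n_slots
    return setup_s + file_bits / circuit_bps

# Send a 640,000-bit file over one circuit of a 1.536 Mbps TDM link
# with 24 slots (each circuit gets 64 kbps), plus 0.5 s call setup.
t = circuit_transfer_time(640_000, 1_536_000, 24, 0.5)
print(t)  # 10.5 (seconds)
```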

  • Packet Switching

    The classic example: the Internet is a packet-switched network.

    • Share network resources => all packets share the network's resources

    • each packet uses full link bandwidth => a packet being transmitted uses the full bandwidth of the link

    • resources used as needed => resources are allocated on demand

    • resource contention => contention for resources exists => may lead to packet loss, congestion, delay, and related problems

    • Statistical Multiplexing (asynchronous time-division multiplexing)

      • Traditional TDM assigns each end system its own time slot; whether or not a given end system is communicating, the bandwidth allocated to it stays occupied. It is synchronous.

      • Statistical multiplexing (asynchronous TDM) hands out the shared channel's time slots on demand: only end systems that actually have data to send get a slot. All slots can therefore be kept busy, the number of end systems served can exceed the number of slots per cycle, and the utilization of the medium improves — hence "multiplexing."
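The gain from statistical multiplexing can be quantified with a binomial model. The parameters here (a 1 Mbps link, users needing 100 kbps when active, each active 10% of the time) are assumed for illustration:

```python
from math import comb

link_bps, user_bps, p_active = 1_000_000, 100_000, 0.10
max_active = link_bps // user_bps  # at most 10 users can transmit at once

# Circuit switching must reserve 100 kbps per user, so it admits only 10 users.
# Packet switching can admit 35 users and gamble that rarely more than 10
# are active simultaneously. P(more than max_active of n_users are active):
n_users = 35
p_overload = sum(
    comb(n_users, k) * p_active**k * (1 - p_active)**(n_users - k)
    for k in range(max_active + 1, n_users + 1)
)
print(f"P(overload) = {p_overload:.4f}")  # a fraction of a percent
```

So packet switching serves 3.5x as many users while the link is overcommitted only a tiny fraction of the time — the "more users" claim below in quantitative form.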

    • Store-and-forward

      => The switch's controller first buffers a packet arriving on an input port, checks whether the packet is correct, and filters out collision and error packets before forwarding it. (This adds delay, but gives the network a degree of error detection.)
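Store-and-forward means a packet pays one full transmission delay at every hop, since each switch must receive the entire packet before forwarding it. A minimal sketch (the link count, packet size, and rate below are assumed values):

```python
def store_and_forward_delay(n_links: int, packet_bits: int, rate_bps: int) -> float:
    # Each of the n_links hops must clock the whole packet onto its link
    # before the next hop can begin, so the delays add up.
    return n_links * packet_bits / rate_bps

# One 1,500-byte packet across 3 links of 10 Mbps each:
d = store_and_forward_delay(3, 1500 * 8, 10_000_000)
print(d)  # 0.0036 seconds = 3.6 ms
```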

    • Compared with circuit switching

      • Packet switching allows more users to use the network
      • Great for bursty data
  • Where each kind of network fits

    • When bandwidth demand is relatively fixed, a circuit-switched network is a good fit
      • stable resource demand
      • Fixed-line telephony is the classic case: voice needs a very fixed amount of bandwidth, and circuit switching also guarantees call quality
    • When bandwidth demand varies dynamically, a packet-switched network is more suitable
      • The Internet is the classic case of dynamically varying bandwidth demand

Delay, Loss, and Throughput in Packet-Switched Networks

  • Delay

    (figure: nodal delay at router A)

    • Nodal processing delay => usually very short

      • error detection (check bit errors)
      • determining which output link to forward on
    • Queueing delay => can be zero or very long, depending on the current state of the network

    • Transmission delay => can be small or large, depending on the packet length and the link bandwidth
      (transmission delay = packet length L / link bandwidth R)
      => so increasing the link bandwidth only reduces the transmission delay

    • Propagation delay => determined by the length of the physical medium; signals propagate very fast, so this delay is usually not large

      • The exact propagation speed depends on the physical medium (fiber, twisted pair, and so on)
      • Typical range: 2×10^8 m/s ~ 3×10^8 m/s (approaching the speed of light)
      • Independent of the packet length

    !!! **d_nodal = d_proc + d_queue + d_trans + d_prop**
    (delay of a packet at one node = processing delay + queueing delay + transmission delay + propagation delay)
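The four delay components can be combined in a short helper. The packet size, link rate, and fiber length below are assumed for illustration:

```python
def nodal_delay(proc_s: float, queue_s: float, packet_bits: int,
                rate_bps: int, dist_m: float, speed_mps: float = 2e8) -> float:
    d_trans = packet_bits / rate_bps  # transmission delay: push bits onto the link
    d_prop = dist_m / speed_mps       # propagation delay: bits travel the medium
    return proc_s + queue_s + d_trans + d_prop

# 1,500-byte packet, 100 Mbps link, 1,000 km of fiber (~2e8 m/s),
# with processing and queueing delay taken as negligible:
d = nodal_delay(0, 0, 1500 * 8, 100_000_000, 1_000_000)
print(round(d * 1000, 3), "ms")  # 0.12 ms transmission + 5 ms propagation = 5.12 ms
```

Note how the propagation term dwarfs the transmission term here, and how only the transmission term would shrink if the link bandwidth were raised.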

  • Loss

    Packet loss occurs when the finite buffer of a packet switch (a link-layer switch or router) overflows.

    • A lost packet may be retransmitted by the previous node or by the source end system, or not retransmitted at all
    • From the end system's point of view, a packet was handed to the network core but never arrived at its destination
  • Throughput

    Baidu Baike: throughput is the amount of data successfully transferred per unit time (measured in bits, bytes, packets, etc.) over a network, device, port, virtual circuit, or other facility.

    • instantaneous throughput
    • average throughput
    • On the Internet, end-to-end throughput usually coincides with the rate at which the receiver gets packets
    • On the Internet, end-to-end throughput is determined by the bottleneck link on the path
      (figure: throughput limited by the bottleneck link)
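The bottleneck idea reduces to taking the minimum link rate along the path. The rates below are assumed:

```python
def end_to_end_throughput(link_rates_bps: list[float]) -> float:
    # Data cannot flow faster than the slowest link on the path.
    return min(link_rates_bps)

# Server -> 100 Mbps -> router -> 10 Mbps -> router -> 1 Gbps -> client:
rate = end_to_end_throughput([100e6, 10e6, 1e9])
print(rate)  # 10000000.0 — the 10 Mbps link is the bottleneck
```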
  • Additional notes

    • All network signals travel over physical media at roughly the speed of light, so they can be treated as equally fast => signals themselves are not "faster" or "slower" in transit
    • Links differ in bandwidth because protocol and technology constraints leave gaps of different sizes between two adjacent packets
    • Data transmission on a single link is always serial (there is no "multi-lane highway" effect)
    • Quality of Service (QoS)

Protocol Layers and Their Service Models

The architecture of a computer network is the set of its layers and their protocols.

(figure: the two layered network-protocol models)

  • PDU (Protocol Data Unit)

    The unit of data handled by all protocols at a given layer.
  • Five-layer Internet protocol stack (the TCP/IP stack)

    • What each layer provides
      • Application: the interface presented to users
      • Transport: process-to-process communication
      • Network: host-to-host communication
      • Link: data transfer between adjacent nodes
      • Physical: transmission of bits over the physical medium
    • PDU of each layer
      • Application: message
      • Transport: segment
      • Network: datagram
      • Link: frame
      • Physical: bit
  • Protocols are between peer entities; services are provided by a lower layer to the layer above through an interface
  • Seven-layer ISO/OSI reference model

    (figure: the OSI seven-layer protocol stack)
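The per-layer PDUs above can be illustrated by sketching encapsulation, where each layer prepends its own header to the PDU handed down from the layer above. The header strings here are made up for readability; real headers are binary structures:

```python
def encapsulate(message: bytes) -> bytes:
    """Sketch of encapsulation down the five-layer stack.
    Header contents are placeholders, not real protocol formats."""
    segment = b"TCP|" + message           # transport layer: message  -> segment
    datagram = b"IP|" + segment           # network layer:  segment  -> datagram
    frame = b"ETH|" + datagram + b"|FCS"  # link layer:     datagram -> frame (with trailer)
    return frame                          # physical layer then sends it as bits

frame = encapsulate(b"GET /index.html")
print(frame)  # b'ETH|IP|TCP|GET /index.html|FCS'
```

On the receiving host the process runs in reverse: each layer strips its header and passes the payload up.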