//////////////////////////////////////////////////////////////
22:16 2014-6-17
MIT 6.02 introduction to digital communication
video I, encoding information
22:19 2014-6-17
communication channel
22:19 2014-6-17
mathematical model
wired networks, wireless network
22:21 2014-6-17
CAN == Controller Area Network
automotive network
22:22 2014-6-17
digital communications
22:23 2014-6-17
communication across space // flash disk
communication across time
22:23 2014-6-17
python network stack protocol
22:26 2014-6-17
point-to-point communication channels:
transmitter => receiver
22:27 2014-6-17
untangling some of the bad things that happened // error correction
22:28 2014-6-17
corruption
22:29 2014-6-17
signal processing, error detection, error correction
22:29 2014-6-17
real world
22:29 2014-6-17
sharing a channel
22:32 2014-6-17
* point-to-point networks
* multi-hop networks
packet switching,
efficient routing
22:32 2014-6-17
engineering goals for networks
* efficient
* secure
* scalable
* cost
* reliable
22:38 2014-6-17
robust: survive even with failures
22:38 2014-6-17
information resolves uncertainty
22:39 2014-6-17
likely events carry less information than unlikely events
22:40 2014-6-17
encoding
22:47 2014-6-17
measuring information content
22:47 2014-6-17
binary encoding
22:48 2014-6-17
// coin toss
1 encoding head
0 encoding tail
22:48 2014-6-17
data rate <=> information rate
22:49 2014-6-17
discrete random variable
22:51 2014-6-17
expected information content: entropy
22:54 2014-6-17
Why care about entropy?
* entropy tells us the average amount of information that must
be delivered in order to resolve all uncertainty
* data rate <=> information rate // match?
* entropy gives us a metric to measure encoding effectiveness
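// a minimal Python sketch (mine, not from the lecture) of measuring
// expected information content; the probabilities are made-up examples:
import math

def entropy(probs):
    # H(X) = sum over i of p_i * log2(1/p_i), in bits
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
print(entropy([0.25] * 4))                 # 2.0 bits: uniform maximizes H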
22:56 2014-6-17
capacity of the communication channel
22:56 2014-6-17
average amount of information
22:57 2014-6-17
fancier encoding schemes
23:14 2014-6-17
* fixed-length encoding
* variable-length encoding // David Huffman, MIT 1950
23:18 2014-6-17
Huffman tree
23:21 2014-6-17
optimal encoding
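// a sketch of building the Huffman tree with a heap (my code, not the
// lecture's): repeatedly merge the two least probable nodes:
import heapq

def huffman_code(freqs):
    heap = [(p, i, sym) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # two least probable subtrees
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):                  # left edge = "0", right = "1"
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# made-up symbol probabilities; rarer symbols get longer codewords
print(huffman_code({"A": 0.45, "B": 0.3, "C": 0.15, "D": 0.1}))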
23:23 2014-6-17
a moment of disruption // MOOC edx
a moment of reflection
a moment of transformation/evolution
digital evolution
23:24 2014-6-17
distributed power
0:03 2014-6-18
talking to dead people // Rome
0:10 2014-6-18
justice
///////////////////////////////////////////////////////////////////////////
17:21 2014-6-18 Wednesday
information rate <=> data rate
17:23 2014-6-18
let's get closer to the information rate
17:23 2014-6-18
adaptive variable-length codes
LZW algorithm
17:29 2014-6-18
transmit table indices
17:35 2014-6-18
the decoder can reconstruct the table
17:51 2014-6-18
symbol table // string table
17:56 2014-6-18
encoder notes:
* encoder algorithm is greedy
* no waste
* compression increases as encoding progresses
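// a greedy LZW encoder sketch (mine); the decoder can rebuild the same
// table because each new entry is (previous string + next char):
def lzw_encode(data):
    table = {bytes([i]): i for i in range(256)}   # start: all single bytes
    out, s = [], b""
    for c in data:
        sc = s + bytes([c])
        if sc in table:
            s = sc                       # greedy: keep extending the match
        else:
            out.append(table[s])         # transmit a table index, no waste
            table[sc] = len(table)       # grow the string table
            s = bytes([c])
    if s:
        out.append(table[s])
    return out

print(lzw_encode(b"abababab"))   # [97, 98, 256, 258, 98]: compression
                                 # increases as encoding progresses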
17:57 2014-6-18
video compression
17:57 2014-6-18
about the decoder:
you build the table the same way as the encoder does
17:59 2014-6-18
lossless compression,
lossy compression
18:00 2014-6-18
Huffman & LZW are lossless encoding
18:00 2014-6-18
lossy compression: targets audio & video
18:03 2014-6-18
perceptual coding
18:06 2014-6-18
high spatial frequencies
18:06 2014-6-18
bandwidth constraints
18:07 2014-6-18
perceptual coding example:
spend more bits on luminance than color
// encode separately
18:08 2014-6-18
RGB to YCbCr conversion
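// the standard JPEG/BT.601 RGB -> YCbCr equations (the lecture's exact
// constants may differ slightly); luminance & chrominance split apart:
def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b         # luminance
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128   # blue chroma
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128   # red chroma
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))   # pure red: Y ~ 76, Cr well above 128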
18:14 2014-6-18
energy levels at different spatial frequencies
18:16 2014-6-18
YCrCb representation
18:17 2014-6-18
basis function
18:19 2014-6-18
basis expansion
18:19 2014-6-18
2D DCT Basis Functions
18:19 2014-6-18
spatial frequencies
18:22 2014-6-18
DC component
18:22 2014-6-18
JPEG compression
18:24 2014-6-18
JPEG == Joint Photographic Expert Group
18:26 2014-6-18
DCT coefficient
18:26 2014-6-18
quantization // lossy part
18:26 2014-6-18
this is the size of the bucket
////////////////////////////////////////////////////////////
8:58 2014-6-19 Thursday
bit stream
8:58 2014-6-19
ADC == Analog Digital Converter
8:59 2014-6-19
what is the point of digital communication?
9:02 2014-6-19
analog woes
9:04 2014-6-19
analog errors accumulate
9:15 2014-6-19
digital signaling
9:17 2014-6-19
binary signaling scheme
9:17 2014-6-19
we need a "forbidden zone"
9:34 2014-6-19
NM == Noise Margin
9:53 2014-6-19
continuous time, discrete time
10:05 2014-6-19
sampling rate, sampling period
10:05 2014-6-19
samples per second
10:06 2014-6-19
sample clock,
transmit clock
10:14 2014-6-19
clock recovery @receiver
10:20 2014-6-19
inferred clock edges
10:20 2014-6-19
receiver can infer the presence of clock edges every time
there is a transition in the received samples
10:21 2014-6-19
then use the sample period
10:21 2014-6-19
use the middle sample to determine the message bit
10:22 2014-6-19
recoding
10:25 2014-6-19
ensure transition every so often
10:25 2014-6-19
I need to know where the groups are
10:29 2014-6-19
8 bit, 10 bit code
10:33 2014-6-19
clock recovery, 8 bit => 10 bit code
--------------------------------------------------------
10:35 2014-6-19
start introduction to digital communication video 4,
LTI systems
10:46 2014-6-19
building mathematical models for the transmission channel
10:47 2014-6-19
system input => response
10:48 2014-6-19
input =>[transmission channel]=> response
10:49 2014-6-19
input sequence, response sequence
10:51 2014-6-19
unit step, unit step response
10:51 2014-6-19
u[n]: unit step
s[n]: unit step response
10:57 2014-6-19
unit sample
11:00 2014-6-19
unit sample, unit sample response
11:03 2014-6-19
unit sample decomposition
11:04 2014-6-19
unit step decomposition
11:09 2014-6-19
the invariant system
11:11 2014-6-19
linear system
11:13 2014-6-19
for LTI systems, we have convolution
x[n] * h[n] // convolution sum
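// a direct convolution-sum sketch (mine): y[n] = sum_k x[k] * h[n-k]
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

h = [0.5, 0.3, 0.2]                # made-up unit sample response
print(convolve([1, 0, 0, 1], h))   # superposition of two shifted copies of h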
11:17 2014-6-19
convolution sum, convolution integral
11:18 2014-6-19
the channel behavior
11:19 2014-6-19
the unit sample response sequence h[n]
completely characterize the LTI system
11:20 2014-6-19
convolution operator
11:21 2014-6-19
properties of convolution:
1. cummutative
2. associative
3. distributive
11:22 2014-6-19
parallel interconnection of LTI system
11:28 2014-6-19
series interconnection of LTI system
11:33 2014-6-19
channel as LTI systems
11:33 2014-6-19
causal == nonanticipative
11:37 2014-6-19
real-world channels are causal
11:37 2014-6-19
relationship between h[n] & s[n]? // unit sample response & unit step response
h[n] = s[n] - s[n-1]
11:47 2014-6-19
if I put this into a channel, what comes out?
11:48 2014-6-19
8 samples per bit
11:59 2014-6-19
real signal going in, and real signal going out
11:59 2014-6-19
digitization threshold
12:00 2014-6-19
why the "middle sample"? // 8 samples per bit
12:02 2014-6-19
Faster transmission: 4 samples per bit
12:04 2014-6-19
convolution sum, convolution integral
-----------------------------------------------------------
start Harvard happiness, video 14
////////////////////////////////////////////////////////////
12:52 2014-6-20 Friday
start Harvard happiness, video 15
perfectionism
12:57 2014-6-20
---------------------------------------------------------
start MIT introduction to digital communication
video 5, ISI, deconvolution
14:45 2014-6-20
ISI == InterSymbol Interference
14:45 2014-6-20
transmission over a channel
15:55 2014-6-20
convolution sum: "flip & slide"
15:55 2014-6-20
8 samples/bit
16:22 2014-6-20
bit cell // 4 samples/bit, 8 samples/bit
=> ISI(InterSymbol Interference)
16:24 2014-6-20
in doing the convolution sum, the impulse response h[n]
might cover more than one bit cell; hence the concept of "ISI"
16:25 2014-6-20
neighboring cell
16:27 2014-6-20
neighboring bits
16:27 2014-6-20
I'm getting interference from lots of neighboring bits
16:27 2014-6-20
frequency domain
16:30 2014-6-20
the bottom line is ....
16:30 2014-6-20
I have to be concerned with contribution from neighboring bits
16:31 2014-6-20
Given h[n], how bad is ISI?
16:32 2014-6-20
compute B, how many bits "covered" by h[n]?
B = (length of active portion of h[n]) / N + 2 // N = samples per bit
16:34 2014-6-20
Generate a test pattern
16:36 2014-6-20
transmit the test pattern over the channel
16:36 2014-6-20
instead of plot of y[n], plot the response as an "eye diagram"
1. break the plot up into short segments each containing 2N+1 samples
start at samples 0, N, 2N,...
2. plot all the short segments on top of each other
16:38 2014-6-20
generate all possible combinations of B bits(not samples)
// for 4 samples/bit, B == 6
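// a sketch (my code) of the segmentation step from the recipe above;
// overlaying the returned segments (e.g. with matplotlib) draws the eye:
def eye_segments(y, N):
    # chunks of 2N+1 samples starting at samples 0, N, 2N, ...
    return [y[s:s + 2 * N + 1] for s in range(0, len(y) - 2 * N, N)]

# usage: for seg in eye_segments(y, N=4): plt.plot(seg)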
16:49 2014-6-20
"eye diagram"
16:49 2014-6-20
Is the eye big enough, can we transmit the information
we want?
16:50 2014-6-20
"eye diagram" and h[n] both have the same information
16:50 2014-6-20
width of eye
16:50 2014-6-20
worst case signaling
16:51 2014-6-20
"width" of eye // vertical distance
16:52 2014-6-20
welcome to the wonderful world of jargon
16:52 2014-6-20
to maximize the NM (noise margin)
16:53 2014-6-20
digitization threshold
16:53 2014-6-20
How to maximize noise margin using "eye diagram"?
* pick the best sample point => widest point in the eye
* pick the best digitization threshold => half-way across width
16:56 2014-6-20
What's the point of "eye diagram"?
* choosing the best "sample point" & "digitization threshold"
* choosing the best "Samples/Bit"
17:00 2014-6-20
no eye
17:02 2014-6-20
noise margin: "the digitization threshold" to "the widest point of the eye"
17:02 2014-6-20
If I give you a noise margin, you can determine how
many "samples per bit" to use
17:03 2014-6-20
fast channel
17:40 2014-6-20
slow channel
17:43 2014-6-20
ringing channel
17:44 2014-6-20
bit cell
17:45 2014-6-20
multi-hop networks
17:46 2014-6-20
can we recover from ISI?
solution: deconvolution
because transmission over a channel can be modeled as y[n] = x[n] * h[n]
we can recover x[n] using deconvolution
17:48 2014-6-20
if I have perfect deconvolver
17:55 2014-6-20
if w[n] is a perfect estimate of x[n]
17:56 2014-6-20
we can compute w[n] from y[n] & h[n] using "plug & chug"
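// "plug & chug" deconvolution sketch (mine), assuming a causal channel
// with h[0] != 0; note how a small h[0] would amplify noise:
def deconvolve(y, h):
    w = []
    for n in range(len(y)):
        acc = y[n]
        for k in range(1, min(n + 1, len(h))):
            acc -= h[k] * w[n - k]    # subtract already-computed terms
        w.append(acc / h[0])          # solve y[n] = sum_k h[k] * w[n-k]
    return w

h = [0.8, 0.2]                        # made-up channel
y = [0.8, 0.2, 0.8, 1.0, 0.2]         # channel output for x = [1,0,1,1,0]
print(deconvolve(y, h))               # recovers [1.0, 0.0, 1.0, 1.0, 0.0]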
18:01 2014-6-20
sensitivity to noise
18:11 2014-6-20
amplifying noise is never a good thing
18:15 2014-6-20
noisy deconvolution example
18:16 2014-6-20
stability criterion
//////////////////////////////////////////////////////
16:42 2014-6-21 Saturday
start MIT introduction to digital communication,
video 6, noise bit errors
16:42 2014-6-21
mathematical model for the noise
16:43 2014-6-21
bad things happen to good signals
16:44 2014-6-21
noise-free signals
16:44 2014-6-21
random process
16:46 2014-6-21
additive noise
16:47 2014-6-21
multipath interference
16:47 2014-6-21
unit sample response of the channel: h[n]
16:49 2014-6-21
mean, power, energy
17:00 2014-6-21
SNR == Signal-to-Noise Ratio
17:06 2014-6-21
SNR is often measured in decibels(dB)
17:07 2014-6-21
power amplifier
17:13 2014-6-21
amplification factor
17:14 2014-6-21
signal quality degrades with lower SNR
17:14 2014-6-21
What's a "random process"?
random processes, such as noise, take on different
sequences for different trials
17:17 2014-6-21
a whole collection of trials
17:18 2014-6-21
stationary random process,
ergodic random process
17:18 2014-6-21
What is a "stationary random process"?
statistical behavior is independent of shifts
in time in a given trial.
// time invariant
17:20 2014-6-21
statistical sampling can be performed at one sample time
across different trials,
or across different sample times of the same trial with no
change in the measured result.
17:22 2014-6-21
assumption of stationarity implies histogram should converge
to a shape known as a PDF
17:24 2014-6-21
statistical distribution
17:24 2014-6-21
PDF == Probability Density Function
17:25 2014-6-21
by the stationary assumption, we know that
it's going to converge to some statistical distribution
// PDF
17:28 2014-6-21
this histogram converge to a shape
17:29 2014-6-21
PDF == Probability Density Function
17:30 2014-6-21
minus infinity, plus infinity
17:34 2014-6-21
uniform PDF
17:39 2014-6-21
What's the meaning of "ergodicity"?
Assumption of ergodicity implies the value occurring at a given
time sample, noise[k], across many different trials has the same
PDF as estimated in our previous experiment of many time samples
and one trial.
17:49 2014-6-21
sample value distribution
17:52 2014-6-21
mean & variance
17:52 2014-6-21
visualizing mean & variance
17:54 2014-6-21
noise on a communication channel
17:59 2014-6-21
CLT == Central Limit Theorem
18:01 2014-6-21
what's the significance of CLT(Central Limit Theorem)?
1000 trials of summing 100 random samples drawn from [-1, 1]
using two different distributions.
// one is triangular PDF, the other is uniform PDF
both sums take on a "normal PDF"
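// a quick simulation of the experiment above (my sketch):
import random

sums = [sum(random.uniform(-1, 1) for _ in range(100)) for _ in range(1000)]
mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
print(mean, var)   # mean ~ 0, variance ~ 100/3 (uniform on [-1,1] has var 1/3)
# a histogram of the sums looks Gaussian regardless of the sample distribution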
18:03 2014-6-21
the Central Limit Theorem tells me that the sum
is normally distributed
18:05 2014-6-21
Gaussian PDF == normal PDF
18:06 2014-6-21
standard deviation:
square root of variance
18:07 2014-6-21
CDF == Cumulative Distribution Function
18:10 2014-6-21
symmetric distribution
18:13 2014-6-21
unit normal
18:16 2014-6-21
BER == Bit Error Rate
18:18 2014-6-21
corrupted by noise
18:18 2014-6-21
bit == symbol
samples/bit
--------------------------------------------------
18:28 2014-6-21
start MIT 6.02 introduction to digital communication
video 7
18:28 2014-6-21
ISI == InterSymbol Interference
BER == Bit Error Rate
18:30 2014-6-21
digital communication channel
18:30 2014-6-21
noise-free version
18:30 2014-6-21
sequence generated by a random process
18:31 2014-6-21
sigma is the standard deviation
18:34 2014-6-21
BER == Bit Error Ratio
18:47 2014-6-21
noise-free channel with no ISI
18:47 2014-6-21
noise-free signal
18:48 2014-6-21
original digital sequence
18:48 2014-6-21
mean, power
18:51 2014-6-21
p(bit error)
18:52 2014-6-21
Gaussian noise
18:52 2014-6-21
digitization threshold
18:52 2014-6-21
what is p(bit error)?
assume the channel has Gaussian noise,
p(bit error) = p(transmit "0") * p(error|transmit "0")
+ p(transmit "1") * p(error|transmit "1")
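// a sketch of that formula for a made-up signaling scheme: "0" sent as
// 0 V, "1" as 1 V, threshold at 0.5 V, Gaussian noise with std dev sigma:
import math

def q(x):
    # Q(x) = p(unit normal > x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma = 0.25
p_err = 0.5 * q(0.5 / sigma) + 0.5 * q(0.5 / sigma)   # both terms symmetric
print(p_err)   # ~0.0228: BER rises quickly as sigma grows (SNR falls)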
18:55 2014-6-21
the voltage is corrupted by noise
18:58 2014-6-21
BER == Bit Error Rates
== Bit Error Ratio
19:01 2014-6-21
BER(no ISI) vs. SNR
19:03 2014-6-21
so if you give me an SNR, I can compute BER
19:03 2014-6-21
BER == p(bit error)
19:05 2014-6-21
convolution sum, ISI,
unit sample response of communication channel: h[n]
samples per bit
19:22 2014-6-21
all possible transmission
19:22 2014-6-21
test sequence to generate the eye diagram
19:23 2014-6-21
eye diagram
19:25 2014-6-21
width of the eye
19:25 2014-6-21
p(bit error) with ISI
19:29 2014-6-21
the error rate becomes so large that the channel
becomes unusable
19:32 2014-6-21
you're much more sensitive to noise if you have ISI
19:39 2014-6-21
digitization threshold
19:40 2014-6-21
choosing Vth
19:44 2014-6-21
minimizing BER
19:49 2014-6-21
minimizing BER when p(0) != p(1)
19:49 2014-6-21
triangular distribution
19:51 2014-6-21
noise PDF
19:54 2014-6-21
random noise
19:54 2014-6-21
AWGN == Additive White Gaussian Noise
19:58 2014-6-21
AWGN channel
19:58 2014-6-21
white: frequency distribution of the noise
noise has equal energy at all frequencies
19:58 2014-6-21
error corruption & detection
19:59 2014-6-21
probability distribution function for the noise
21:27 2014-6-21
Cumulative Distribution Function
21:29 2014-6-21
BER == Bit Error Rate(Bit Error Ratio)
21:30 2014-6-21
bit error
21:31 2014-6-21
What fraction of the samples we get are going to be corrupted?
21:31 2014-6-21
that gives us a sense of how bad the channel is
21:32 2014-6-21
we want to compare the power of signal versus the power of noise
21:49 2014-6-21
p(bit error) // without ISI
21:50 2014-6-21
corrupted by noise
21:56 2014-6-21
use PDF in anger
21:59 2014-6-21
BER(no ISI) vs. SNR
21:59 2014-6-21
p(bit error) == BER
22:02 2014-6-21
given a particular SNR, we can compute BER
using the previous formula
22:02 2014-6-21
SNR & BER
ISI & BER
22:06 2014-6-21
test sequence to generate eye diagram
22:14 2014-6-21
a channel with ISI
a channel without ISI
22:23 2014-6-21
bit error rate // BER
22:24 2014-6-21
you're much more sensitive to noise if you have a
lot of ISI
22:25 2014-6-21
signaling protocol
22:33 2014-6-21
Gaussian noise distribution
triangular noise distribution
22:38 2014-6-21
error correction & detection
//////////////////////////////////////////////////////////
13:49 2014-6-22 Sunday
start MIT 6.02 introduction to digital communication
video 8, ECC
13:49 2014-6-22
* coping with errors using packets
* checksums, CRC
* hamming distance & single error correction
* (n,k) block codes
13:50 2014-6-22
digital signaling scheme
13:52 2014-6-22
recover original signal despite small amplitude errors
introduced by ISI & noise
13:53 2014-6-22
digital data stream
13:53 2014-6-22
bit errors
13:53 2014-6-22
Gaussian PDF
13:54 2014-6-22
probability density function
13:54 2014-6-22
Bit Errors
13:56 2014-6-22
BER == Bit Error Rate // Bit Error Ratio
13:56 2014-6-22
dealing with errors: packets
13:57 2014-6-22
digitization of your voice
13:58 2014-6-22
divide into chunks
13:58 2014-6-22
fixed-sized packets
13:58 2014-6-22
sequence number + message + checksum
13:59 2014-6-22
request a retransmission of the packets
14:02 2014-6-22
reliable transport protocol
14:03 2014-6-22
throw away packets that are apparently bad, and request a retransmission
14:03 2014-6-22
check bits
14:04 2014-6-22
packet size
14:04 2014-6-22
choose a proper packet size
14:04 2014-6-22
overhead + payload
14:05 2014-6-22
engineering tradeoff
14:05 2014-6-22
check bits: hash function
14:06 2014-6-22
detecting errors
14:08 2014-6-22
likely errors
14:09 2014-6-22
calling good packets bad
14:09 2014-6-22
checksum, check bits
14:09 2014-6-22
likely errors:
* random bits(BER)
* error bursts
14:25 2014-6-22
CRC == Cyclic Redundancy Check
14:46 2014-6-22
shift register
14:49 2014-6-22
generating the CRC
14:49 2014-6-22
it will be caught by the CRC, that packet will
be flagged as bad
14:51 2014-6-22
wireless channel,
wired channel
14:52 2014-6-22
BER == Bit Error Rate
14:52 2014-6-22
packet retransmission
14:53 2014-6-22
SEC == Single Error Correction
15:05 2014-6-22
retransmission rate
15:05 2014-6-22
single-bit errors
15:05 2014-6-22
Single Error Correction
15:06 2014-6-22
ECC == Error Correction Code
SECC == Single Error Correction Code
15:07 2014-6-22
digital transmission using SECC
15:07 2014-6-22
the meaning of ECC?
reduce the number of retransmission
15:08 2014-6-22
digital channel
15:09 2014-6-22
error correction
15:09 2014-6-22
error detection & correction
15:10 2014-6-22
bit stream
15:10 2014-6-22
single-bit error
15:10 2014-6-22
Hamming distance
15:13 2014-6-22
What is the Hamming distance between these two sequences?
15:14 2014-6-22
Error Detection
15:18 2014-6-22
valid code word,
invalid code word
15:19 2014-6-22
single-bit error
15:19 2014-6-22
parity bit
15:20 2014-6-22
even parity
15:20 2014-6-22
what we need is an encoding
15:39 2014-6-22
ECC == Error Correction Code
SECC == Single Error Correction Code
15:40 2014-6-22
if D is the minimum Hamming distance between code words,
we can detect up to (D-1) bit errors
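// a Hamming-distance sketch (mine) illustrating the detection bound:
def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))   # positions that differ

print(hamming_distance("10110", "10011"))   # 2
# with minimum distance D between valid code words, up to D-1 bit flips
# always land on an invalid word => detectable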
15:40 2014-6-22
valid code word,
invalid code word
15:40 2014-6-22
parity check
15:41 2014-6-22
parity bit
15:41 2014-6-22
parity(even parity)
15:42 2014-6-22
if more than 1 bit is flipped then, in technical terms,
we're screwed
15:43 2014-6-22
this is the take-home message
15:45 2014-6-22
error detection,
error correction
15:46 2014-6-22
SECC == Single Error Correcting Codes
* use multiple parity bits
* syndrome bits
----------------------------------------------------
15:55 2014-6-22
start MIT 6.02 introduction to digital communication
video 9,
15:55 2014-6-22
* How many parity bits?
* Dealing with burst errors
* Reed-Solomon codes
15:56 2014-6-22
systematic block code
15:59 2014-6-22
how to deduce which bit is bad?
16:01 2014-6-22
parity check
16:02 2014-6-22
compute a syndrome bit // error bit
16:02 2014-6-22
parity bit(P)
syndrome bit(E)
data bits(B)
16:17 2014-6-22
code word //(n, k) code
message bits + parity bits
16:17 2014-6-22
systematic block codes
16:17 2014-6-22
What is (n, k, d)?
k: message bits
n-k: parity bits
d: the minimum Hamming distance between code words
16:18 2014-6-22
code rate: k/n
16:18 2014-6-22
(n, k, d)
16:18 2014-6-22
row parity bits,
column parity bits
16:27 2014-6-22
my even parity is violated
16:31 2014-6-22
parity check
16:34 2014-6-22
parity bits
16:36 2014-6-22
How many parity bits do I need?
16:36 2014-6-22
SECC == Single Error Correcting Code
16:36 2014-6-22
syndrome bit
16:38 2014-6-22
ECC == Error Correcting Code
16:42 2014-6-22
rectangular codes with row/column parity
16:47 2014-6-22
noise model:
* Gaussian noise
* impulse noise
16:48 2014-6-22
Dealing with burst errors
16:50 2014-6-22
Can we think of a way to turn a B-bit error burst into B
single-bit errors?
16:50 2014-6-22
interleaving
16:52 2014-6-22
interleave bits
16:52 2014-6-22
at the receiver, do "deinterleaving"
16:52 2014-6-22
interleaving:
compute CRC => partition => apply ECC => interleave => transmit
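// a row/column interleaver sketch (mine): a burst of B consecutive
// channel errors then hits each of the B code words only once:
def interleave(blocks):
    # rows = code words; transmit column by column
    return [row[i] for i in range(len(blocks[0])) for row in blocks]

def deinterleave(stream, nblocks):
    # the receiver reverses the column-by-column ordering
    return [stream[i::nblocks] for i in range(nblocks)]

blocks = [list("AAAA"), list("BBBB"), list("CCCC")]
tx = interleave(blocks)
print("".join(tx))                                  # ABCABCABCABC
print(["".join(b) for b in deinterleave(tx, 3)])    # back to AAAA BBBB CCCC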
16:53 2014-6-22
interleaving & deinterleaving
16:54 2014-6-22
error correction scheme
16:55 2014-6-22
almost all modern codes use interleaving at some level
16:56 2014-6-22
pretty nifty idea
16:56 2014-6-22
interleaved block,
ECC block
16:58 2014-6-22
8b10b encoding
16:59 2014-6-22
mark the beginning of these interleaved blocks
16:59 2014-6-22
How to transmit?
* packetize: {#, data, chk} // packet
* SECC: (n, k, d) // split {#, data, chk} into k-bit block, add parity bits
// code word
* 8b10b // synchronization info // frame
* bits => voltage samples // samples/bit voltage samples per bit
17:06 2014-6-22
Framed bit stream: Reed-Solomon codes
unframed bit stream: convolutional codes
17:09 2014-6-22
oversampled polynomial
17:14 2014-6-22
packet => code word => frame
checksum, parity, sync
17:33 2014-6-22
finite field // Galois field
17:33 2014-6-22
R-S code
17:39 2014-6-22
CD error correction
* de-interleave // turn burst errors into single bit errors
17:41 2014-6-22
concatenated R-S codes
//////////////////////////////////////////////////////////////////
7:46 2014-6-23 Monday
start MIT introduction to digital communication
video 6, convolutional codes
7:52 2014-6-23
ECC == Error Control Codes
7:53 2014-6-23
linear block code
7:53 2014-6-23
codeword
7:54 2014-6-23
minimum Hamming distance
7:54 2014-6-23
Bi-orthogonal codes
7:55 2014-6-23
picture transmission errors
7:55 2014-6-23
Green machine
7:56 2014-6-23
data rate
7:56 2014-6-23
efficient decoding algorithm
7:56 2014-6-23
FFT == Fast Fourier Transform
7:57 2014-6-23
command system
7:57 2014-6-23
quantitative command
control command
7:59 2014-6-23
convolutional codes with Viterbi decoding
8:01 2014-6-23
combination of convolutional code:
turbo code
8:10 2014-6-23
convolutional code
8:15 2014-6-23
data rate
8:15 2014-6-23
bps == bit per second
8:15 2014-6-23
block code
8:16 2014-6-23
message bits,
parity bits
8:16 2014-6-23
parity bits are computed from blocks of message bits
8:16 2014-6-23
rate 1/r codes: r parity bits per message bit
8:18 2014-6-23
sliding window
8:18 2014-6-23
constraint length K
8:18 2014-6-23
so it's a linear combination of some "message bits"
8:21 2014-6-23
What does "convolution code" do?
message bits => parity bits // generate using convolution operation
8:25 2014-6-23
K: the "constraint length" of the code
=> greater redundancy
=> better error correction possibilities
8:34 2014-6-23
we'll transmit the parity sequence, not the message itself
// convolutional code
8:37 2014-6-23
generator
8:38 2014-6-23
shift register
8:41 2014-6-23
shift register view of convolutional code
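// a shift-register encoder sketch (mine): rate 1/2, constraint length
// K=3, with assumed generators 111 and 101 (a common textbook example):
def conv_encode(bits, generators=((1, 1, 1), (1, 0, 1))):
    K = len(generators[0])
    window = [0] * K                   # shift register, newest bit first
    out = []
    for b in bits:
        window = [b] + window[:-1]     # shift the new message bit in
        for g in generators:           # one parity bit per generator:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
    return out

print(conv_encode([1, 0, 1, 1]))   # transmit the parity stream, not the message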
8:42 2014-6-23
state of the register
8:51 2014-6-23
state-machine view
9:01 2014-6-23
state diagram
9:06 2014-6-23
state machine view
shift-register view
9:19 2014-6-23
unfolding things in time // state machine diagram
9:27 2014-6-23
it's the same story all over again
9:30 2014-6-23
minimum Hamming distance
9:43 2014-6-23
Trellis view
10:13 2014-6-23
constraint length
code rate
coefficient of the generator
No. of states in state machine of this code
----------------------------------------------------
10:22 2014-6-23
start MIT introduction to digital communication,
video 7, viterbi decoding
10:25 2014-6-23
key concept of decoding: Trellis
10:25 2014-6-23
parity check bits
10:26 2014-6-23
message bits, parity bits
10:28 2014-6-23
Viterbi decoding
10:36 2014-6-23
Trellis diagram vs state diagram
10:38 2014-6-23
Trellis View at Transmitter
10:58 2014-6-23
decoding: finding the ML path
11:09 2014-6-23
ML == Maximum Likelihood
11:09 2014-6-23
hard decision decoding
soft decision decoding
11:17 2014-6-23
most likely
11:22 2014-6-23
Viterbi algorithm:
want: most likely message sequence
Have: (possibly corrupted) received parity sequence
11:23 2014-6-23
intermediate stage
11:25 2014-6-23
finding the Maximum Likelihood path
11:32 2014-6-23
branch metric, path metric
11:39 2014-6-23
BM == Branch Metric
PM == Path Metric
11:41 2014-6-23
BM == Branch Metric
Hamming distance between expected parity bits &
received parity bits
11:42 2014-6-23
Hard-Decision Viterbi Decoding:
a walk through the trellis
11:48 2014-6-23
path metric: min of all paths leading to state
11:49 2014-6-23
soft branch metric:
the square of the Euclidean distance between received voltages
& expected voltages
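// a minimal hard-decision Viterbi sketch (mine) for the rate-1/2, K=3
// encoder assumed earlier; state = the K-1 most recent message bits:
def viterbi_decode(parity, generators=((1, 1, 1), (1, 0, 1))):
    K = len(generators[0])
    r = len(generators)
    states = [tuple(int(c) for c in format(s, "0%db" % (K - 1)))
              for s in range(2 ** (K - 1))]
    def expected(bit, prev):
        window = (bit,) + prev           # shift register contents
        return [sum(g * w for g, w in zip(gen, window)) % 2
                for gen in generators]
    pm = {s: (0 if sum(s) == 0 else float("inf")) for s in states}
    path = {s: [] for s in states}       # message bits along best path
    for i in range(0, len(parity), r):
        rx = parity[i:i + r]
        new_pm, new_path = {}, {}
        for s in states:
            best = None
            for prev in states:
                for bit in (0, 1):
                    if ((bit,) + prev)[:K - 1] != s:
                        continue         # no trellis branch prev -> s
                    # branch metric: Hamming distance between expected
                    # and received parity bits
                    bm = sum(a != b for a, b in zip(expected(bit, prev), rx))
                    if best is None or pm[prev] + bm < best[0]:
                        best = (pm[prev] + bm, prev, bit)
            new_pm[s] = best[0]          # path metric: min over entering paths
            new_path[s] = path[best[1]] + [best[2]]
        pm, path = new_pm, new_path
    return path[min(pm, key=pm.get)]     # follow the maximum-likelihood path

print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))   # recovers [1, 0, 1, 1]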
11:51 2014-6-23
navigating through the trellis is exactly the same
11:54 2014-6-23
post-decoding BER
12:02 2014-6-23
Hamming code, convolutional code
12:03 2014-6-23
constraint length
12:10 2014-6-23
Hamming code, convolutional code
12:10 2014-6-23
block code, // Hamming code?
convolutional code
---------------------------------------------------
12:24 2014-6-23
start MIT introduction to digital communication
video 12, sharing MAC protocols
12:24 2014-6-23
shared medium => media access control // MAC
TDMA // Time Division Multiple Access
contention protocols, Alohanet
12:25 2014-6-23
shared communication channel
12:26 2014-6-23
avoid collision,
communication protocol
12:27 2014-6-23
measure good // metric
12:29 2014-6-23
good performance
12:30 2014-6-23
channel capacity is a limited resource
12:30 2014-6-23
high utilization
12:30 2014-6-23
protocol overhead
12:32 2014-6-23
MAC == Media Access Control
12:34 2014-6-23
divide resource equally along all requesters
12:34 2014-6-23
bounded wait
12:38 2014-6-23
isochronous communication // voice, video
12:39 2014-6-23
good performance:
* High utilization
* fairness
* bounded wait
* scalability
12:39 2014-6-23
dynamically accommodate more users
12:40 2014-6-23
sharing protocols:
MAC == Media Access Control
12:41 2014-6-23
time division
12:41 2014-6-23
time slot
12:41 2014-6-23
requester
12:41 2014-6-23
Time Division:
* prearranged: TDMA
* Not prearranged: contention protocol(e.g. Alohanet)
12:42 2014-6-23
contention protocols
12:43 2014-6-23
frequency division
12:43 2014-6-23
FDMA == Frequency Division Multiple Access
12:43 2014-6-23
CDMA == Code Division Multiple Access
12:45 2014-6-23
* orthogonal pseudorandom code
* dot product to select
12:50 2014-6-23
utilization:
U = total throughput over all nodes / maximum data rate of channel
12:50 2014-6-23
* backlogged
* offered load
12:51 2014-6-23
Fairness
12:56 2014-6-23
contention protocol
12:57 2014-6-23
time slot
12:57 2014-6-23
shared medium
12:57 2014-6-23
queue of packets
13:01 2014-6-23
round-robin scheme
13:04 2014-6-23
centralized resource allocator
13:06 2014-6-23
time synchronization
13:06 2014-6-23
TDMA is very good from a fairness point of view
13:08 2014-6-23
TDMA for GSM phones
13:08 2014-6-23
RAC == Random Access Channel
13:09 2014-6-23
contention protocol: Alohanet
13:13 2014-6-23
allocation is not predetermined
13:14 2014-6-23
burst data patterns
13:14 2014-6-23
Alohanet:
Alohanet was a satellite-based data network
connecting computers on the Hawaiian islands.
13:15 2014-6-23
time slot:
* success
* idleness
* collisions
13:22 2014-6-23
slotted Aloha:
if a node is backlogged, it sends a packet in
the next time slot with probability p.
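// a quick simulation sketch (mine): a slot succeeds iff exactly one
// backlogged node transmits:
import random

def slotted_aloha_utilization(n_nodes, p, n_slots=100000):
    good = 0
    for _ in range(n_slots):
        txers = sum(random.random() < p for _ in range(n_nodes))
        good += (txers == 1)
    return good / n_slots

print(slotted_aloha_utilization(10, 0.1))   # p = 1/N is best; max -> 1/e ~ 0.37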
13:24 2014-6-23
maximizing utilization
13:28 2014-6-23
slotted Aloha
13:29 2014-6-23
Taylor series expansion
13:32 2014-6-23
backlogged nodes
13:36 2014-6-23
stabilization
13:38 2014-6-23
stabilizing Aloha:
finding a p that maximizes utilization as loading changes
13:38 2014-6-23
binary exponential Back-off
13:40 2014-6-23
exponentially decreasing p
13:40 2014-6-23
increasing p on success
13:41 2014-6-23
stabilized Aloha
13:44 2014-6-23
utilization is good, but fairness is awful
13:44 2014-6-23
one node "captures" the network for a period of time
13:48 2014-6-23
limiting the capture effect
--------------------------------------------------------------
MIT introduction to digital communication,
video 13, more MAC protocols
13:50 2014-6-23
unslotted Aloha
carrier sense, contention windows // CSMA/CD, CAN
code division glimpse
////////////////////////////////////////////////////////////
8:02 2014-6-24 Tuesday
contention protocols
* different nodes have different amounts of traffic
* traffic is very bursty
8:03 2014-6-24
slotted Aloha
8:10 2014-6-24
maximize the utilization of the network
8:12 2014-6-24
capture effect
8:13 2014-6-24
we put an upper bound
8:14 2014-6-24
trying to transmit according to probability p,
and then adjust p up & down a little bit
8:16 2014-6-24
stabilized Aloha
8:21 2014-6-24
stabilization protocol
8:22 2014-6-24
unslotted Aloha
* packets take T time slots to transmit
* collisions are no longer "perfect"
8:23 2014-6-24
both packets are deemed corrupted
8:23 2014-6-24
it doesn't take rocket science
8:25 2014-6-24
window of vulnerability
8:25 2014-6-24
unslotted Aloha
slotted Aloha
8:31 2014-6-24
self interference
8:31 2014-6-24
what's the maximum rate the channel can hold
-----------------------------------------------------------
19:21 2014-6-24
resume MIT introduction to digital communication,
video 13,
19:21 2014-6-24
unslotted Aloha
19:21 2014-6-24
self interference
19:23 2014-6-24
* slotted Aloha
* unslotted Aloha
19:25 2014-6-24
utilization & fairness
19:32 2014-6-24
carrier sense
19:33 2014-6-24
CSMA/CD == Carrier Sense Multiple Access with Collision Detection
19:34 2014-6-24
Carrier Sense:
reduce collisions with on-going transmissions by
transmitting only if channel appears not to be busy
19:35 2014-6-24
on-going transmission
19:37 2014-6-24
window of vulnerability
19:44 2014-6-24
carrier sense => 1 slot window of vulnerability
19:44 2014-6-24
simulation of carrier sense
19:46 2014-6-24
carrier sense => increased utilization
19:48 2014-6-24
collision avoiding
19:48 2014-6-24
detection time // idle detection
19:48 2014-6-24
collision avoidance
19:49 2014-6-24
carrier sense + collision avoidance
19:50 2014-6-24
CW == Contention Window
19:50 2014-6-24
Carrier Sense + Collision Avoidance == Contention Window
19:52 2014-6-24
MAC == Multiple Access Control
== Media Access Control
19:55 2014-6-24
MAC protocols
19:55 2014-6-24
Goal of MAC protocols:
to maximize utilization & fairness
19:55 2014-6-24
TDMA == Time Division Multiple Access
* Round-robin sharing
* contention protocol // slotted Aloha, unslotted Aloha
distributed protocol
parameter (p, CW) adjusted
19:58 2014-6-24
centralized controller
19:58 2014-6-24
transmission experience
19:59 2014-6-24
backlogged nodes
20:00 2014-6-24
orthogonality principle
20:01 2014-6-24
FDMA == Frequency Division Multiple Access
20:01 2014-6-24
orthogonal vectors
20:02 2014-6-24
CDMA == Code Division Multiple Access
20:02 2014-6-24
chip code
20:05 2014-6-24
channel will sum the transmitted vectors
20:05 2014-6-24
How can I get back the original 4 messages?
20:06 2014-6-24
orthogonality principle
dot product
20:07 2014-6-24
CDMA receiver
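// a synchronous CDMA sketch (mine) with length-4 Walsh codes: the
// channel sums the spread signals; a dot product recovers each bit:
codes = [(1, 1, 1, 1), (1, -1, 1, -1), (1, 1, -1, -1), (1, -1, -1, 1)]
bits = [1, -1, -1, 1]                       # one bit (+1/-1) per sender

channel = [sum(b * c[i] for b, c in zip(bits, codes)) for i in range(4)]

for c in codes:
    # orthogonality: channel . c = len(c) * (that sender's bit)
    print(sum(x * y for x, y in zip(channel, c)) // len(c))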
20:07 2014-6-24
Asynchronous CDMA
synchronous CDMA
20:11 2014-6-24
PN == Pseudo-Noise
20:13 2014-6-24
FDMA == Frequency Division Multiple Access
-----------------------------------------------------------------------
20:44 2014-6-24
complex exponential
DTFS
spectral coefficient,
band-limited signal
20:45 2014-6-24
frequency division multiplexing
20:51 2014-6-24
LTI channel
20:52 2014-6-24
your energy is still in the same frequency
20:52 2014-6-24
complex exponential
20:53 2014-6-24
periodic sequence
20:53 2014-6-24
fundamental frequency
20:54 2014-6-24
radians/sample
21:01 2014-6-24
negative frequency
21:02 2014-6-24
complex exponential
21:08 2014-6-24
sine & cosine both can be represented as
sum of complex exponential using Euler's formula
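// Euler's formula, for reference:
e^{j\theta} = \cos\theta + j\sin\theta
\quad\Rightarrow\quad
\cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2},\qquad
\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{2j}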
21:12 2014-6-24
DTFS == Discrete-Time Fourier Series
21:25 2014-6-24
spectral coefficient
21:25 2014-6-24
orthogonal basis
21:26 2014-6-24
wavelet, etc...
21:26 2014-6-24
basis expansion, projection
basis * coefficient == component
21:26 2014-6-24
synthesis equation
analysis equation
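// the standard DTFS pair, in the usual notation (Omega_0 = 2*pi/N):
A_k = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]\, e^{-jk\Omega_0 n} \quad\text{// analysis}
x[n] = \sum_{k=\langle N\rangle} A_k\, e^{jk\Omega_0 n} \quad\text{// synthesis}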
21:36 2014-6-24
time domain <=> frequency domain
21:37 2014-6-24
time domain sequence => frequency domain coefficient
the coefficients A_k are also complex numbers!
21:43 2014-6-24
spectrum of digital communication
21:51 2014-6-24
band-limited signal
21:52 2014-6-24
effect of band-limiting a transmission
21:53 2014-6-24
the eye is starting to close
------------------------------------------------
22:03 2014-6-24
start MIT introduction to digital communication
video 15, Frequency Response
22:04 2014-6-24
fundamental frequency, harmonics
22:09 2014-6-24
complex conjugate
22:12 2014-6-24
complex exponentials are the eigenfunctions of LTI systems;
the frequency response is the eigenvalue
22:14 2014-6-24
sine & cosine are just sums of complex exponential
22:15 2014-6-24
LTI => convolution sum
22:15 2014-6-24
from convolution sum: x[n] * h[n], we have
frequency response
22:16 2014-6-24
characterize LTI system:
h[n] // impulse response, in time domain
H // frequency response, in frequency domain
22:22 2014-6-24
the frequency response tells us how the system will
affect each of the spectral coefficient that determine
the input...
22:25 2014-6-24
unit sample, unit sample response
22:26 2014-6-24
unit sample response: h[n]
frequency response: H
22:28 2014-6-24
MA == Moving Average
22:33 2014-6-24
moving average is a way of throwing away high frequencies
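// a quick check (my sketch) that a moving average is a crude low-pass
// filter: its frequency response magnitude falls off away from DC:
import numpy as np

h = np.ones(4) / 4                 # 4-point moving average, sums to 1
H = np.fft.fft(h, 64)              # zero-padded DFT ~ frequency response
print(np.round(np.abs(H[:8]), 3))  # ~1 near DC, then the magnitude drops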
22:35 2014-6-24
DTFT == Discrete-Time Fourier Transform
22:43 2014-6-24
building something that lets a particular frequency go away
22:46 2014-6-24
series interconnection of LTI system
h1[n] * h2[n] // time domain: convolve impulse responses
H1 · H2 // frequency domain: multiply frequency responses
22:50 2014-6-24
convolution in the time domain <=> multiplication in the frequency domain
// convolution theorem
22:52 2014-6-24
LPF == Low Pass Filter
22:53 2014-6-24
unit sample <=> frequency response
// DFT, IDFT
23:08 2014-6-24
zero padding
23:11 2014-6-24
non-causal h[n] // non-causal impulse response
23:11 2014-6-24
if you are doing real-time signal processing, then
causality of h[n] is important, so
add an N/2 sample delay
23:22 2014-6-24
BPF == Band Pass Filter
23:26 2014-6-24
FFT == Fast Fourier Transform
23:28 2014-6-24
spectral coefficient
23:28 2014-6-24
frequency response of channels
23:31 2014-6-24
fast channel, slow channel, ringing channel
////////////////////////////////////////////////////////
18:55 2014-6-25 Wednesday
start MIT introduction to digital communication,
video 16, modulation
* sharing the frequency
* modulation
* demodulation
18:56 2014-6-25
k // the spectral coefficient index
18:57 2014-6-25
band-limited signal
19:06 2014-6-25
baseband
19:06 2014-6-25
spectral coefficient
19:18 2014-6-25
modulation
19:18 2014-6-25
digital transmission waveform
19:32 2014-6-25
burst of carrier frequency
19:34 2014-6-25
demodulation + LPF
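// a modulation/demodulation sketch (mine, made-up parameters): multiply
// by the carrier, then multiply by it again and low-pass filter:
import numpy as np

n = np.arange(400)
omega_c = np.pi / 4                                 # assumed carrier
x = np.repeat([1, 0, 1, 1, 0], 80).astype(float)    # baseband bits
tx = x * np.cos(omega_c * n)                        # modulation
y = tx * np.cos(omega_c * n)          # = x/2 + (x/2)*cos(2*omega_c*n)
lpf = np.ones(16) / 16                # crude LPF kills the 2*omega_c term
baseband = 2 * np.convolve(y, lpf, mode="same")
print(np.round(baseband[40:400:80]))  # samples at bit centers: 1 0 1 1 0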
19:55 2014-6-25
this is the energy I want
19:57 2014-6-25
demodulation carrier
20:03 2014-6-25
multiple transmitters
20:09 2014-6-25
start MIT introduction to digital communication,
video 17, more modulation
20:18 2014-6-25
* mismatch of receiver's frequency & phase
* quadrature demodulation
* BPSK, QPSK, DQPSK, QAM
20:18 2014-6-25
standard digital transmission sequence
20:27 2014-6-25
frequency error in demodulator
20:42 2014-6-25
pilot tone
20:45 2014-6-25
phase error in demodulator
// frequency is the same, phase differs
20:46 2014-6-25
channel delay
21:02 2014-6-25
the reason you cannot hear me is that the phase
error has caused a scaling
21:08 2014-6-25
scaling factor:
* phase error
* channel delay
21:08 2014-6-25
fixing phase problems in the receiver:
quadrature demodulation
21:10 2014-6-25
Inphase, Quadrature
21:11 2014-6-25
quadrature modulation
21:23 2014-6-25
3 things:
* magnitude
* frequency
* phase
21:30 2014-6-25
AM == Amplitude Modulation
FM == Frequency Modulation
PM == Phase Modulation
21:31 2014-6-25
Phase Modulation == PSK // Phase Shift Keying
21:34 2014-6-25
BPSK == Binary Phase-Shift Keying
21:35 2014-6-25
the message bit selects one of the two phases for
the carrier: pi/2, -pi/2
21:36 2014-6-25
BPSK: dealing with phase ambiguity
think of the phase encoding as differential
21:43 2014-6-25
change in phase // encode my information
21:44 2014-6-25
differential encoding
21:44 2014-6-25
differential phase encoding
21:45 2014-6-25
* encode bits
* encode transitions
21:46 2014-6-25
DBPSK == Differential Binary Phase-Shift Keying
DQPSK == Differential Quadrature Phase-Shift Keying
21:47 2014-6-25
Differential PSK
21:55 2014-6-25
putting more points in the constellation
is a good idea!
21:55 2014-6-25
QAM == Quadrature Amplitude Modulation
21:59 2014-6-25
// QAM
using more message bits => generate larger constellations
21:59 2014-6-25
constellation points
21:59 2014-6-25
QPSK // QAM-4
22:01 2014-6-25
using QAM-16, 4 message bits are encoded into
one of 16 constellation points.
22:04 2014-6-25
QAM receiver
////////////////////////////////////////////////////
7:44 2014-6-26
start MIT introduction to digital communication,
video 18, switching
7:44 2014-6-26
point-to-point communication
7:44 2014-6-26
multi-hop communication
7:44 2014-6-26
switch
7:48 2014-6-26
packet switching <=> circuit switching
7:48 2014-6-26
MTBF == Mean Time Between Failure
7:51 2014-6-26
network reliability
7:52 2014-6-26
Redundancy:
* no single point of failure
* fail-soft, fail-safe, fail-hard
* automated adaption to component failure
8:00 2014-6-26
network scalability
8:03 2014-6-26
IPv4, IPv6
8:06 2014-6-26
incremental build-out
8:10 2014-6-26
NRE(non-recurring expenses, one-time-costs)
8:13 2014-6-26
dealing with system complexity:
manage complexity with abstraction layer
8:14 2014-6-26
easier to design if details have been abstracted away
8:15 2014-6-26
layered abstraction
8:15 2014-6-26
API == Application Programming Interface
8:24 2014-6-26
abstraction layer
8:24 2014-6-26
network topologies
* point-to-point channels // simplex, half-duplex, full-duplex
* multiple-access channels
* LAN & WAN
8:48 2014-6-26
fully-connected graph
----------------------------------------------
13:43 2014-6-26
fully connected network
13:43 2014-6-26
the disadvantage of a fully-connected network is
that it's very expensive
13:46 2014-6-26
the other end of the spectrum is "star connectivity"
13:47 2014-6-26
network architecture
13:57 2014-6-26
centralized network => decentralized network => distributed network
13:57 2014-6-26
modern networks: LANs + WANs
14:01 2014-6-26
sharing the internetwork links:
* circuit switching // isochronous
* packet switching // asynchronous
14:04 2014-6-26
circuit switching:
* first establish a circuit between end points
* send data over the circuit
* tear down (close) the circuit
14:08 2014-6-26
multiplexing/demultiplexing
14:09 2014-6-26
packet switching:
* used in the internet
* data is sent in packets
// header contains control info(source, dest addr)
* per-packet routing
14:17 2014-6-26
router, routing table
14:19 2014-6-26
packets are organized using a queue
14:19 2014-6-26
best-effort delivery
14:21 2014-6-26
acknowledgement/retransmission protocol
14:22 2014-6-26
end-to-end argument
14:24 2014-6-26
queues are essential
14:24 2014-6-26
the longer the queue is, the longer the delay
pro: absorb bursts
con: add delay
14:26 2014-6-26
Little's law
14:27 2014-6-26
FIFO delivering mechanism
14:29 2014-6-26
average rate of delivery
14:29 2014-6-26
mean delay for packets
-----------------------------------------------
17:49 2014-6-26
packet switching network
17:49 2014-6-26
Little's law
17:50 2014-6-26
packets find their way through the network
17:50 2014-6-26
finding the shortest path
17:56 2014-6-26
routing: who's linked to whom?
17:59 2014-6-26
routing: what path the packet should follow?
17:59 2014-6-26
What am I supposed to do with this packet?
18:01 2014-6-26
IP filter
18:01 2014-6-26
TCP/IP // TCP is a reliable transport protocol
18:01 2014-6-26
VoIP uses datagrams
18:02 2014-6-26
header checksum
18:03 2014-6-26
information in the header:
destination address,
CRC,
TTL // Time-To-Live
18:06 2014-6-26
routing table
18:06 2014-6-26
a quick table lookup using the destination
as a key!
18:08 2014-6-26
routing table is the key to everything, everything
else is relatively straightforward
18:12 2014-6-26
every individual switch has its own routing table
18:20 2014-6-26
just think of this as a giant dictionary in python
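// literally (my sketch; names made up):
routing_table = {
    "A": ("link0", 3),     # destination -> (outgoing link, cost)
    "B": ("link1", 1),
    "C": ("link1", 4),
}

def route(dest):
    link, _cost = routing_table[dest]   # quick lookup, destination as key
    return link

print(route("C"))   # forward on link1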
18:20 2014-6-26
shortest path routing
18:21 2014-6-26
min-cost routing
18:21 2014-6-26
distributed approach:
each switch build its own routing table based on the
information it receives, then making its own routing decisions
18:23 2014-6-26
distributing the overhead over the whole network
18:24 2014-6-26
no centralized approach
18:24 2014-6-26
link-state protocol
18:25 2014-6-26
distance-vector routing
18:25 2014-6-26
there is a cheaper path using the other links
18:26 2014-6-26
B makes only a local decision
18:28 2014-6-26
DV protocol // Distance-Vector protocol
18:28 2014-6-26
"Hello" packet
18:34 2014-6-26
If I don't hear from John for a while... // no "hello" packet,
I delete John from my neighbour list
// which of my neighbours are live?
18:34 2014-6-26
distance-vector routing
18:36 2014-6-26
* distance-vector routing
* link-state routing
18:42 2014-6-26
so I consider using you as my way to
get to MIT
18:43 2014-6-26
Bellman-Ford algorithm
18:45 2014-6-26
routing table
18:53 2014-6-26
I can get to myself with zero cost
18:54 2014-6-26
DV advertisement
18:54 2014-6-26
it takes multiple generations of these advertisements
18:57 2014-6-26
B & C will also send advertisement about A
19:02 2014-6-26
happily, they send advertisement to D & E
19:02 2014-6-26
sending advertisement
19:03 2014-6-26
after 2nd generation of advertisement
19:06 2014-6-26
I find that I can get to A more cheaply,
so I "update routing table"
19:07 2014-6-26
update routing table
19:07 2014-6-26
this is the steady-state
19:07 2014-6-26
Distance-Vector protocol
19:11 2014-6-26
DV routing // Distance-Vector Routing
19:11 2014-6-26
gateway node // don't give you what they know
19:13 2014-6-26
gateway to the outside world
19:13 2014-6-26
at the next generation of advertisement
19:21 2014-6-26
update routing table
19:22 2014-6-26
partition the network
-----------------------------------------------
19:33 2014-6-26
Routing:
* distance-vector routing
* link-state routing
19:33 2014-6-26
DV == Distance Vector
19:34 2014-6-26
we want to equip each node with a "routing table"
19:34 2014-6-26
shortest path
19:46 2014-6-26
update routing table
19:46 2014-6-26
distance-vector routing
19:47 2014-6-26
What is a DV(Distance-Vector) routing?
* advertisement
at each ADVERT interval, nodes tell neighbors (dest, cost) for
all routes in their routing table
* update
nodes add link cost to neighbour's routing costs and keep
their routing table up-to-date with shortest-path route
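// a sketch of the update step (mine; names illustrative): on hearing a
// neighbour's advertisement over a link of cost link_cost, keep cheaper routes:
def dv_update(table, link_cost, advert):
    changed = False
    for dest, cost in advert.items():          # Bellman-Ford relaxation
        new_cost = link_cost + cost
        if dest not in table or new_cost < table[dest][1]:
            table[dest] = ("via-neighbour", new_cost)
            changed = True
    return changed

table = {"self": (None, 0)}
dv_update(table, 2, {"self": 2, "MIT": 5})   # neighbour is 2 away
print(table)   # route to MIT at cost 7; "self" stays at cost 0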
19:52 2014-6-26
link is down
19:55 2014-6-26
down link
19:55 2014-6-26
An unfortunate combination of down links
might partition the network
19:58 2014-6-26
counting to infinity
20:01 2014-6-26
infinite cost, which means we don't have a route
20:04 2014-6-26
a packet for A just bounces back & forth across the network
20:05 2014-6-26
How do we solve the routing loop problem?
we have the TTL(Time-To-Live) field in the packet header
20:06 2014-6-26
routing loop
20:09 2014-6-26
DV == Distance Vector
PV == Path Vector
20:10 2014-6-26
PV(Path-Vector) Routing
20:10 2014-6-26
advertisement both (path, cost)
20:10 2014-6-26
distance vector routing
path vector routing
20:11 2014-6-26
send advertisement, building routing tables
20:11 2014-6-26
pros & cons of PV routing:
pros: simple, works well for small network
cons: only works for small networks
20:12 2014-6-26
unreachable nodes are quickly removed from tables
20:14 2014-6-26
Path-Vector routing only works for small networks
20:15 2014-6-26
hierarchical structured local routing
20:17 2014-6-26
Link-State Routing
20:17 2014-6-26
LSA == Link-State Advertisement
20:18 2014-6-26
which neighbour is currently live?
20:18 2014-6-26
continual updates on who I can get to
20:21 2014-6-26
Link-State Routing
* Advertisement step
* Integration
20:22 2014-6-26
LSA flooding
20:28 2014-6-26
LSA == Link-State Advertisement
20:30 2014-6-26
* LSA travels each link in each direction
* termination: each node rebroadcasts LSA exactly once
* all reachable nodes eventually hear every LSA
20:32 2014-6-26
rebroadcast LSA
20:32 2014-6-26
LSA flooding
20:32 2014-6-26
link-state table
20:33 2014-6-26
LSA table // Link-State Advertisement table
20:33 2014-6-26
how straightforward routing is!
20:34 2014-6-26
What eventually will be in your routing table?
20:35 2014-6-26
gateway node
20:36 2014-6-26
Dijkstra's Shortest Path Algorithm
20:37 2014-6-26
nodeset
spcost == shortest path cost
20:42 2014-6-26
link-state announcement
20:44 2014-6-26
heap queue, priority queue
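// a Dijkstra sketch (mine) over the LSA-derived map, with heapq as the
// priority queue; graph maps node -> {neighbour: link cost}:
import heapq

def dijkstra(graph, src):
    spcost = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > spcost.get(node, float("inf")):
            continue                             # stale heap entry
        for nbr, w in graph[node].items():
            if cost + w < spcost.get(nbr, float("inf")):
                spcost[nbr] = cost + w           # shorter path found
                heapq.heappush(heap, (cost + w, nbr))
    return spcost

g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 1, 'C': 3}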
20:48 2014-6-26
Why is network routing hard?
20:53 2014-6-26
inside a domain: interior routers
between domains: border routers
20:57 2014-6-26
Hierarchical Routing
20:58 2014-6-26
DV protocol: Distance-Vector protocol
LS protocol: Link-State protocol
-------------------------------------------------
21:54 2014-6-26
review link-state routing
21:54 2014-6-26
DV routing // Distance-Vector Routing
21:55 2014-6-26
DV advertisement: (dest, cost)
each node tells their neighbour (dest, cost)
at the ADVERT interval
21:59 2014-6-26
update routing table
22:00 2014-6-26
routing advertisement
22:01 2014-6-26
2nd generation of routing advertisement
22:01 2014-6-26
break a link(link is down)
22:08 2014-6-26
down link
22:09 2014-6-26
miss a "hello" packet
22:09 2014-6-26
lost packet
22:09 2014-6-26
chop the network into pieces! // more down links
22:10 2014-6-26
partition the network
22:11 2014-6-26
Bellman-Ford algorithm
22:12 2014-6-26
DV == Distance Vector
PV == Path Vector
22:15 2014-6-26
a packet for A just bounces around the network,
this is just a "routing loop"
22:17 2014-6-26
routing loop
22:17 2014-6-26
we have the "Time-To-Live" field in
the packet header
22:17 2014-6-26
TTL == Time-To-Live
22:17 2014-6-26
imperfect information about the available routes
22:18 2014-6-26
bad engineers build things that only work in a perfect world
22:20 2014-6-26
defend against a lot of possibilities
22:21 2014-6-26
What is PV routing?
Path-Vector Routing is an improvement on Distance-Vector Routing:
instead of advertising (dest, cost), advertise (path, cost)!
so it can detect "routing loops"
22:22 2014-6-26
sending a lot of advertisement & building our routing tables
22:24 2014-6-26
by using Path-Vector Routing, unreachable nodes
are quickly removed from tables
22:26 2014-6-26
it really only works for small networks
22:26 2014-6-26
Hierarchical structured local routes
22:29 2014-6-26
LS Routing // Link-State Routing
22:29 2014-6-26
Bellman-Ford algorithm only gets information from neighbours
22:29 2014-6-26
Can we devise an algorithm in which
all nodes get topological information about the whole network,
then run a "Shortest Path" algorithm locally?
22:30 2014-6-26
LSA == Link-State Advertisement
22:31 2014-6-26
LSA:
* send information about its "links" to its neighbours
// instead of its own (distance, cost)
* do it periodically
22:33 2014-6-26
integration:
if seq# in incoming LSA > seq# in saved LSA,
I know it's a new announcement!
I then update the saved LSA,
rebroadcast to my neighbours // flooding
22:37 2014-6-26
I rebroadcast to my neighbours
22:38 2014-6-26
source code
22:40 2014-6-26
eventually each node discovers current map of the network
22:40 2014-6-26
building routing table based on this LSA(Link-State Advertisement)
22:41 2014-6-26
LSA flooding
22:42 2014-6-26
LSA == Link-State Announcement
22:42 2014-6-26
each node eventually hears every LSA
22:45 2014-6-26
verification is aided by simplicity
22:45 2014-6-26
routing strategy
22:45 2014-6-26
LS Routing
22:46 2014-6-26
Bellman-Ford only gets information from your neighbours
22:46 2014-6-26
LS routing: all the nodes know about the topology of the network,
then they run their shortest path algorithm locally
=> Link-State Routing
22:46 2014-6-26
in LS Routing, the advertisements I send out don't talk about my
routing table; they only talk about the links:
* which neighbours do I have?
* how much does it cost to reach each neighbour?
22:49 2014-6-26
the only information in the advertisement is which
links are currently live from that node
22:50 2014-6-26
every time I just increment the sequence number, so every node
knows this is a new advertisement
22:51 2014-6-26
continually updating people who I can get to
22:52 2014-6-26
If the incoming announcement from George is a newer version
of the advertisement (its sequence number is bigger than the sequence
number I have), I know it is a new Link-State Announcement (LSA)
22:54 2014-6-26
flooding
22:54 2014-6-26
I rebroadcast to my neighbours
22:54 2014-6-26
send LSA
22:55 2014-6-26
LSA flooding the network
22:55 2014-6-26
eventually the LSA from everybody will reach everybody else
22:55 2014-6-26
we keep the same source node
22:56 2014-6-26
If for a long time we don't hear an LSA from a node (its seq# is too far out-of-date),
then we remove its saved LSA!
22:57 2014-6-26
result: each node discovers current map of the network
22:58 2014-6-26
building routing table:
* periodically each node runs the same "shortest path algorithm" over its map
* if each node implements computation correctly & each node has the same map,
then routing table will be correct
22:59 2014-6-26
LSA flooding // Link-State Announcement Flooding
23:00 2014-6-26
each node rebroadcast LSA exactly once
23:00 2014-6-26
LSA table
23:02 2014-6-26
neighbours & costs
23:02 2014-6-26
How straightforward routing is!
23:03 2014-6-26
gateway node in my network
23:04 2014-6-26
Dijkstra's Shortest Path Algorithm
23:16 2014-6-26
go to bed
//////////////////////////////////////////////////
6:53 2014-6-27 Friday
video 21, reliable data transportation
6:53 2014-6-27
* redundancy via careful retransmission
* sequence numbers & acks
* RTT estimation & timeouts
* stop-and-wait protocol
6:55 2014-6-27
best-effort network
7:07 2014-6-27
communicate reliably
7:15 2014-6-27
the problem:
* packets may be lost arbitrarily
* packets may be reordered arbitrarily
* packet delays are variable
* packets may even be duplicated
7:17 2014-6-27
Sender S & Receiver R want to communicate reliably
7:17 2014-6-27
reliable transport protocol
7:18 2014-6-27
application "layed above" transport protocol
7:18 2014-6-27
transmitter:
each packet includes a sequentially increasing sequence number
7:27 2014-6-27
(xmit time, packet)
7:29 2014-6-27
un-ACKed list
7:30 2014-6-27
ACK == acknowledgement
7:30 2014-6-27
unacknowledged list
7:35 2014-6-27
stop & wait protocol
7:36 2014-6-27
timeout:
some time constant after which I can tell whether I
should retransmit
7:38 2014-6-27
stop & wait protocol
7:39 2014-6-27
RTT == Round-Trip Time
7:39 2014-6-27
data loss + retransmission
7:41 2014-6-27
lost acknowledgement
7:42 2014-6-27
Receiver delivers packet payloads to the application
in sequence number order
7:44 2014-6-27
retransmission
7:48 2014-6-27
packet buffer @transmitter @receiver
7:49 2014-6-27
RTT == Round-Trip Time
7:50 2014-6-27
choose appropriate timeout value
7:51 2014-6-27
change my estimate of what the RTT(Round-Trip Time) is?
7:53 2014-6-27
RTT measurement
7:55 2014-6-27
CDF of RTT
8:01 2014-6-27
CDF == Cumulative Distribution Function
determine timeout value from the "CDF of RTT" diagram
8:05 2014-6-27
RTT can be highly variable
8:10 2014-6-27
estimating RTT from data
8:11 2014-6-27
Chebyshev's Inequality
8:15 2014-6-27
standard deviation
8:16 2014-6-27
we need some more incremental method
8:25 2014-6-27
EWMA == Exponential Weighted Moving Average
8:25 2014-6-27
the RTT changes dramatically each time I sample it,
so filter it with an EWMA!
8:28 2014-6-27
srtt == smoothed RTT
8:36 2014-6-27
EWMA for smoothed RTT // srtt
8:36 2014-6-27
use another EWMA for smoothed RTT deviation(srttdev)
8:39 2014-6-27
timeout = srtt + k * srttdev
TCP uses k = 4
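// a sketch of the two EWMAs (mine; alpha/beta are TCP's usual values):
def update_timeout(srtt, srttdev, sample, alpha=0.125, beta=0.25, k=4):
    srtt = (1 - alpha) * srtt + alpha * sample           # smoothed RTT
    srttdev = (1 - beta) * srttdev + beta * abs(sample - srtt)
    return srtt, srttdev, srtt + k * srttdev             # new timeout

srtt, dev = 100.0, 10.0                  # made-up state, in ms
for sample in (95, 180, 105):            # noisy RTT measurements
    srtt, dev, timeout = update_timeout(srtt, dev, sample)
    print(round(srtt, 1), round(timeout, 1))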
8:40 2014-6-27
1 packet every T seconds
throughput = 1 / T
8:41 2014-6-27
we cannot just assume T = RTT since packets get lost
8:49 2014-6-27
RTT == Round-Trip Time
8:54 2014-6-27
sliding window protocol
provided by TCP
------------------------------------------------
9:14 2014-6-27
start MIT introduction to digital communication,
video 22, sliding window protocol
9:14 2014-6-27
stop & wait protocol too slow
9:16 2014-6-27
small identifier: sequence number
9:17 2014-6-27
1 packet per RTT
9:19 2014-6-27
with packet loss & timeouts, throughput is even lower
9:22 2014-6-27
in order to improve performance:
solution: use a window
* allow W packets outstanding in the network at once // W == Window Size
* overlap transmissions with ACKs
9:23 2014-6-27
sliding window
9:25 2014-6-27
pipelining
9:25 2014-6-27
the window slides: packets 2 ~ 6 are now outstanding
9:28 2014-6-27
sliding window implementation
9:28 2014-6-27
sliding window <=> stop & wait
9:31 2014-6-27
un-ACKed list
9:33 2014-6-27
undelivered packet queue
9:38 2014-6-27
duplicate packet
9:38 2014-6-27
acknowledgement
9:38 2014-6-27
sliding window protocol
9:42 2014-6-27
list of undelivered packet
9:42 2014-6-27
packet buffer
9:46 2014-6-27
Sliding Window Size
9:58 2014-6-27
Setting the Window Size:
Applying Little's Law
10:00 2014-6-27
W == #packets in window
B == rate of slowest(bottleneck) link
RTT == avg delay
10:04 2014-6-27
if W = B * RTT, path will be fully utilized
10:04 2014-6-27
bandwidth-delay product // key concept in transport protocols
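// a worked example (made-up numbers): 10 Mbit/s bottleneck, 1000-byte
// packets => B = 1250 packets/s; RTT = 80 ms:
B = 1250          # bottleneck rate, packets/second
rtt = 0.080       # average round-trip time, seconds
print(B * rtt)    # W = 100 packets keeps the bottleneck fully utilized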
10:04 2014-6-27
bottleneck link
10:08 2014-6-27
propagation delay
10:22 2014-6-27
Round-trip propagation delay
10:23 2014-6-27
sliding window transport protocol
---------------------------------------------------------------------
10:57 2014-6-27
review:
stop & wait protocol too slow =>
sliding window protocol
10:59 2014-6-27
RTT == Round-Trip Time
11:01 2014-6-27
solution: use a window
* allow W packets outstanding in the network at once // W == Window Size
* overlap transmissions with ACKs
11:03 2014-6-27
sliding window
11:06 2014-6-27
sliding window in action
11:07 2014-6-27
fixed-size sliding window
11:09 2014-6-27
unacknowledged packets
11:18 2014-6-27
duplicate reception
------------------------------------------------
12:39 2014-6-27
stop watching OCW videos!!!
MIT 6.02 introduction to EECS II(digital communication) video notes