Stochastic process
Mostly taken from Wikipedia.
I thought writing in English would make things easier to understand, but doing it myself I still misuse a lot of words (
Treat it as LaTeX practice, then.
a mathematical object defined as a family of random variables.
Definitions
Stochastic process
- a collection of random variables indexed by some set.
- numerical values of some system randomly changing over time.
Random Function
a stochastic process can also be interpreted as a random element in a function space.
Random Field
If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead.
Discrete-time & Continuous-time Stochastic Processes
When the index set is interpreted as time: the process is discrete-time if the index set has a finite or countable number of elements, and continuous-time otherwise.
State Space
the set from which each random variable takes its values.
Discrete/Integer-valued SP
- state space: integers or natural numbers
Real-valued SP
- state space: real line
N-dimensional Vector Process
- state space: n-d Euclidean space
Notation
probability space
$(\Omega, F, P)$
where $\Omega$ is a sample space, $F$ is a $\sigma$-algebra, and $P$ is a probability measure.
measurable space
$(S, \Sigma)$
where $S$ is the state space.
stochastic process
$\{X(t) : t \in T\}$
where $X(t)$ denotes the random variable with index $t$,
and $T$ is called the index set or parameter set.
distribution function $F_{t_1, t_2, \cdots, t_n}(x_1, x_2, \cdots, x_n)$
$$F_{t_1,t_2,\cdots,t_n}(x_1,x_2,\cdots,x_n) = P\{X(t_1)\leq x_1, X(t_2)\leq x_2, \cdots, X(t_n)\leq x_n\}$$
If $X(t_1)$ and $X(t_2)$ are independent,
$$P\{X(t_1)\leq x_1, X(t_2)\leq x_2\} = P\{X(t_1)\leq x_1\}\, P\{X(t_2)\leq x_2\}$$
mean function $m_X(t)$
$$m_X(t) = EX(t), \quad t \in T$$
covariance function $B_X(s,t)$
$$B_X(s,t) = E[(X(s)-m_X(s))(X(t)-m_X(t))]$$
variance function $D_X(t) = B_X(t,t)$
$$D_X(t) = \sigma^2_X(t) = E[(X(t)-m_X(t))^2] = EX^2(t) - m_X(t)^2 = EX^2(t) - (EX(t))^2$$
correlation function $R_X(s,t)$
$$R_X(s,t) = E[X(s)X(t)]$$
Here $m_X(t)$ is the mean value of $X(t)$, $D_X(t)$ measures the deviation of $X(t)$ from its mean value at time $t$,
and $B_X(s,t)$ and $R_X(s,t)$ describe the dependence of the SP $\{X(t), t\in T\}$ between different times $s$ and $t$.
variance $DX$
$$DX = EX^2 - (EX)^2$$
integral representation of $E[f(X)]$, $X \sim U(0, T)$
If $X \sim U(0, T)$:
$$E[f(X)] = \frac{1}{T}\int_{0}^{T} f(x)\,dx$$
representing the average value of $f(x)$ over all possible $X$ in $(0, T)$.
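As a quick sanity check (not from the notes), the integral can be compared against a Monte Carlo estimate; the choices $f(x)=x^2$, $T=2$, and the sample size are arbitrary illustrations.

```python
import random

# Monte Carlo check of E[f(X)] = (1/T) * integral_0^T f(x) dx for X ~ U(0, T).
def mc_mean(f, T, n=200_000, seed=0):
    rng = random.Random(seed)
    return sum(f(rng.uniform(0, T)) for _ in range(n)) / n

T = 2.0
est = mc_mean(lambda x: x * x, T)  # estimates E[X^2] for X ~ U(0, 2)
exact = T**2 / 3                   # (1/T) * integral_0^T x^2 dx = T^2/3
```

The two values should agree up to Monte Carlo error of order $1/\sqrt{n}$.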
Process with Orthogonal Increments
For a stochastic process $\{X(t), t \in T\}$:
if $EX(t) = 0$ and, for all $t_1 < t_2 \leq t_3 < t_4 \in T$,
$$E[(X(t_2)-X(t_1))\overline{(X(t_4)-X(t_3))}] = 0,$$
then $\{X(t), t \in T\}$ is a process with orthogonal increments.
In particular, if $T = [a, \infty)$ and $X(a) = 0$,
$$B_X(s,t) = R_X(s,t) = \sigma_X^2(\min(s,t))$$
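A simple instance (my own illustration, not from the notes) is a zero-mean Gaussian random walk, whose independent increments are in particular orthogonal; for integer times, $\sigma_X^2(t) = t$, so $B_X(s,t)$ should come out near $\min(s,t)$.

```python
import random

# Empirical check of B_X(s,t) = sigma_X^2(min(s,t)) for a process with
# orthogonal increments: X(t) = sum of t i.i.d. N(0,1) steps.
def paths(s, t, n=50_000, seed=3):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        xs = sum(rng.gauss(0, 1) for _ in range(s))       # X(s)
        xt = xs + sum(rng.gauss(0, 1) for _ in range(t - s))  # X(t)
        out.append((xs, xt))
    return out

s, t = 3, 7
ps = paths(s, t)
# E X(s) = E X(t) = 0, so B_X(s,t) = E[X(s)X(t)] here.
cov = sum(a * b for a, b in ps) / len(ps)   # should be close to min(s,t) = 3
```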
Normal distribution
$N(\mu, \sigma^2)$: $EX = \mu$, $DX = \sigma^2$.
In particular, for the standard normal distribution, $\mu = 0$, $\sigma^2 = 1$.
Poisson process
It can be defined as a counting process, which represents the random number of events up to some time.
$$P\{X(t+s) - X(s) = n\} = e^{-\lambda t}\frac{(\lambda t)^n}{n!}$$
- has the natural numbers as its state space and the non-negative numbers as its index set.
Let $\{X(t), t \geq 0\}$ be a Poisson process; for $t, s \in [0, \infty)$ and $s \leq t$,
$$E[X(t) - X(s)] = D[X(t) - X(s)] = \lambda(t - s)$$
Since $X(0) = 0$, for $s \leq t$:
$$m_X(t) = \lambda t, \qquad \sigma^2_X(t) = \lambda t, \qquad B_X(s,t) = \lambda s$$
In general,
$$B_X(s,t) = \lambda \min(s,t)$$
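These moments can be checked empirically (my own sketch, not from the notes): sample $N(s)$ and the independent increment $N(t)-N(s)$ as Poisson variables, then estimate $m_X(t)$ and $B_X(s,t)$. The rate $\lambda = 1.5$, times $s=1$, $t=2$, and replication count are arbitrary.

```python
import math
import random

def poisson(rng, mu):
    # Knuth's method: multiply uniforms until the product drops below e^{-mu}.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_counts(lam, s, t, n=100_000, seed=1):
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        ns = poisson(rng, lam * s)            # N(s)
        nt = ns + poisson(rng, lam * (t - s)) # N(t) = N(s) + indep. increment
        pairs.append((ns, nt))
    return pairs

lam, s, t = 1.5, 1.0, 2.0
pairs = sample_counts(lam, s, t)
n = len(pairs)
mean_s = sum(ns for ns, _ in pairs) / n
mean_t = sum(nt for _, nt in pairs) / n                     # near lam*t = 3.0
cov = sum(ns * nt for ns, nt in pairs) / n - mean_s * mean_t  # near lam*min(s,t) = 1.5
```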
Poisson distribution
$P(\lambda)$:
$$P(X=k) = \frac{\lambda^k}{k!}e^{-\lambda}, \qquad EX = DX = \lambda$$
Compound Poisson process
If $\{N(t), t \geq 0\}$ is a Poisson process with rate $\lambda$,
$\{Y_k, k = 1, 2, \cdots\}$ is a sequence of independent and identically distributed random variables,
and it is independent of $\{N(t), t \geq 0\}$,
then
$$X(t) = \sum_{k=1}^{N(t)} Y_k, \quad t \geq 0$$
defines a compound Poisson process $\{X(t), t \geq 0\}$.
$$E[X(t)] = \lambda t E(Y_1), \qquad D[X(t)] = \lambda t E(Y_1^2)$$
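A simulation sketch (my own, not from the notes) can confirm both moments. The choices $\lambda = 2$, $t = 1.5$, and jumps $Y_k \sim U(0,1)$ (so $E Y = 1/2$, $E Y^2 = 1/3$) are illustrative.

```python
import math
import random

# One sample of a compound Poisson process at time t:
# draw N(t) ~ Poisson(lam*t) by Knuth's method, then sum N(t) i.i.d. jumps.
def compound_poisson(lam, t, rng):
    n, p, L = 0, 1.0, math.exp(-lam * t)
    while True:
        p *= rng.random()
        if p <= L:
            break
        n += 1
    return sum(rng.random() for _ in range(n))  # jumps Y_k ~ U(0, 1)

rng = random.Random(2)
lam, t, reps = 2.0, 1.5, 100_000
xs = [compound_poisson(lam, t, rng) for _ in range(reps)]
mean = sum(xs) / reps                          # near lam*t*E(Y) = 3 * 0.5 = 1.5
var = sum(x * x for x in xs) / reps - mean**2  # near lam*t*E(Y^2) = 3 * 1/3 = 1.0
```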
Markov Chain
probability transition matrix
$$P = [p_{ij}]$$
two-step transition probability matrix
$$P^{(2)} = PP$$
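Concretely (a made-up two-state example, not from the notes), squaring the one-step matrix gives the two-step probabilities:

```python
# Two-step transition matrix P^(2) = P*P for a toy 2-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def matmul(A, B):
    # Plain matrix product for row-major lists of lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = matmul(P, P)
# e.g. p_00^(2) = 0.9*0.9 + 0.1*0.5 = 0.86, and each row still sums to 1.
```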
State classification of Markov chain
Assume state space $I = \{1, 2, \cdots, 9\}$.
For state 1, the return time $T$ is the number of steps it takes to return to state 1.
For the set $\{n : n \geq 1, p_{ii}^{(n)} > 0\}$,
$$d = d(i) = \mathrm{G.C.D.}\{n : p_{ii}^{(n)} > 0\}$$
$d$ is the period of state $i$:
if $d > 1$, state $i$ is periodic;
if $d = 1$, state $i$ is aperiodic.
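The period can be computed directly from the definition (my own sketch): check which matrix powers have a positive $(i,i)$ entry and take the g.c.d. The 3-state cyclic chain below is a made-up example whose every state has period 3.

```python
from math import gcd
from functools import reduce

# A deterministic 3-cycle 0 -> 1 -> 2 -> 0: every return to a state
# takes a multiple of 3 steps, so d(i) = 3.
P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def period(P, i, max_n=12):
    # d(i) = gcd{ n >= 1 : p_ii^(n) > 0 }, scanning powers up to max_n.
    ns, Pn = [], P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            ns.append(n)
        Pn = matmul(Pn, P)
    return reduce(gcd, ns) if ns else 0

d = period(P, 0)  # -> 3
```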
$$f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)}$$
$f_{ij}$ is the probability that state $i$ eventually reaches state $j$.
When $f_{ii} = 1$, state $i$ is recurrent. The necessary and sufficient condition is
$$\sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty$$
In particular, state $i$ is an ergodic state if it is aperiodic and positive recurrent.
stationary distribution
$$\begin{cases} \pi_j = \sum_{i \in I} \pi_i p_{ij}, \\ \sum_{j \in I} \pi_j = 1, \quad \pi_j \geq 0, \end{cases}$$
the expected return time
$$\mu_i = \frac{1}{\pi_i}$$
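For a small chain the stationary distribution can be found by iterating $\pi \leftarrow \pi P$ until it stops changing (power iteration; my own sketch with a made-up two-state matrix whose exact answer is $\pi = (5/6, 1/6)$):

```python
# Stationary distribution pi = pi*P by power iteration on a toy 2-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [0.5, 0.5]            # any starting distribution works
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

# Exact answer: pi = (5/6, 1/6), so the expected return times are
# mu_0 = 1/pi_0 = 1.2 and mu_1 = 1/pi_1 = 6.
mu = [1 / p for p in pi]
```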
The Birth Death process
$\{X(t), t \geq 0\}$ is a Birth Death process when, as $h \to 0$,
$$\begin{cases}
p_{i,i+1}(h) = \lambda_i h + o(h), & \lambda_i > 0, \\
p_{i,i-1}(h) = \mu_i h + o(h), & \mu_i > 0, \mu_0 = 0, \\
p_{ii}(h) = 1 - \lambda_i h - \mu_i h + o(h), \\
p_{ij}(h) = o(h), & |i-j| \geq 2,
\end{cases}$$
Kolmogorov forward equation
$$p'_{ij}(t) = \lambda_{j-1} p_{i,j-1}(t) - (\lambda_j + \mu_j) p_{ij}(t) + \mu_{j+1} p_{i,j+1}(t), \qquad i, j \in I$$
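The forward equation can be integrated numerically (my own sketch, not from the notes): truncate the state space to $\{0, \dots, N\}$ and Euler-step $p_{0j}(t)$ from the initial condition $p_{0j}(0) = \delta_{0j}$. The constant rates $\lambda_i = 2$, $\mu_i = 1$, the truncation $N = 30$, and the step size are all illustrative choices.

```python
# Euler integration of the Kolmogorov forward equation for a birth-death chain
# on the truncated state space {0, ..., N}, starting from state 0.
N, lam, mu, h, steps = 30, 2.0, 1.0, 0.001, 1000  # integrates up to t = 1

lam_i = [lam] * (N + 1); lam_i[N] = 0.0   # no births out of the truncation
mu_i = [mu] * (N + 1);   mu_i[0] = 0.0    # mu_0 = 0

p = [0.0] * (N + 1); p[0] = 1.0           # p[j] ~ p_{0j}(t)
for _ in range(steps):
    dp = []
    for j in range(N + 1):
        d = -(lam_i[j] + mu_i[j]) * p[j]
        if j > 0:
            d += lam_i[j - 1] * p[j - 1]   # inflow by a birth from j-1
        if j < N:
            d += mu_i[j + 1] * p[j + 1]    # inflow by a death from j+1
        dp.append(d)
    p = [p[j] + h * dp[j] for j in range(N + 1)]

total = sum(p)                             # probability is conserved (~ 1)
mean = sum(j * p[j] for j in range(N + 1)) # mean population at t = 1
```

Because the truncation sets $\lambda_N = 0$ and $\mu_0 = 0$, the increments sum to zero at every step, so the distribution stays normalized up to floating-point error.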
Hoping I won't fail the exam...