Partial differential equations and the finite element method 2--SECOND-ORDER ELLIPTIC PROBLEMS

This part discusses classical solutions of linear second-order elliptic problems and their weak formulations, covers the treatment of Dirichlet, Neumann, and Robin boundary conditions, and introduces the numerical scheme based on minimization of the energy functional.

This part is devoted to the discussion of linear second-order elliptic problems.

Consider the model equation
(1)

$-\nabla\cdot(a_1\nabla u) + a_0 u = f,$

with homogeneous Dirichlet boundary conditions
(2)
$u(x) = 0$ on $\partial\Omega$.

Assuming $a_{ij}(x) = a_1(x)\,\delta_{ij}$ and $b(x) = c(x) = 0$.

Classical solution: A solution of (1) is a function $u \in C^2(\Omega) \cap C(\overline{\Omega})$ satisfying equation (1) everywhere in $\Omega$ and fulfilling the boundary condition (2) at every $x \in \partial\Omega$.

$C(\overline{\Omega}) = \{\phi \in C(\Omega) : \phi$ is bounded and uniformly continuous$\}$: since $\overline{\Omega}$ is compact, every function continuous on $\overline{\Omega}$ is uniformly continuous.

Find the solution of (1) using the weak formulation:
  1. Multiply (1) with a test function $v \in C_0^\infty(\Omega)$:
    $-\nabla\cdot(a_1\nabla u)\,v + a_0 u v = f v,$
  2. Integrate over $\Omega$:
    $\int_\Omega -\nabla\cdot(a_1\nabla u)\,v \,dx + \int_\Omega a_0 u v \,dx = \int_\Omega f v \,dx,$
  3. Reduce the maximum order of the partial derivatives.

       By Green's first identity,
    $\langle f, -\Delta x \rangle_\Omega = \langle \nabla f, \nabla x \rangle_\Omega - \langle f, \partial_n x \rangle_{\partial\Omega},$

    we have
    $\int_\Omega -\nabla\cdot(a_1\nabla u)\,v \,dx = \int_\Omega a_1 \nabla u \cdot \nabla v \,dx - \int_{\partial\Omega} a_1 v\,\partial_n u \,dS.$

    The fact that $v$ vanishes on the boundary $\partial\Omega$ removes the boundary term; then
    (3)
    $\int_\Omega a_1 \nabla u \cdot \nabla v \,dx + \int_\Omega a_0 u v \,dx = \int_\Omega f v \,dx,$
  4. Find the largest possible function spaces for $u, v$ such that all integrals are finite.
    Originally, identity (3) was derived under very strong regularity assumptions: $u \in C^2(\Omega) \cap C(\overline{\Omega})$ and $v \in C_0^\infty(\Omega)$.
    All integrals in (3) remain finite when these assumptions are weakened to
    $u, v \in W_0^{1,2}(\Omega), \quad f \in L^2(\Omega).$

    Similarly, the regularity assumptions on the coefficients $a_1$ and $a_0$ can be reduced to
    $a_1, a_0 \in L^\infty(\Omega).$

    Note: $W^{m,p}(\Omega) = \{u \in L^p(\Omega) : D^\alpha u \in L^p(\Omega) \ \forall\, 0 < |\alpha| \le m\}$.
    $W_0^{m,p}(\Omega)$ is the closure in $W^{m,p}(\Omega)$ of the smooth functions with compact support in $\Omega$.
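The integration-by-parts step behind identity (3) can be sanity-checked symbolically in one dimension. The sketch below assumes illustrative choices $a_1 = 1+x$, $u = \sin(\pi x)$, and a test function $v = x(1-x)$ that vanishes at the endpoints, and verifies that the boundary term drops out:

```python
# Symbolic 1D check of the step behind (3): with v(0) = v(1) = 0,
# int -(a1 u')' v dx equals int a1 u' v' dx (no boundary term).
# a1, u, v are illustrative sample functions, not from the text.
import sympy as sp

x = sp.symbols('x')
a1 = 1 + x                  # sample coefficient a1(x) > 0
u = sp.sin(sp.pi * x)       # sample "solution"
v = x * (1 - x)             # test function vanishing at 0 and 1

lhs = sp.integrate(-sp.diff(a1 * sp.diff(u, x), x) * v, (x, 0, 1))
rhs = sp.integrate(a1 * sp.diff(u, x) * sp.diff(v, x), (x, 0, 1))

print(sp.simplify(lhs - rhs))  # expect 0: the boundary term vanished
```

The same check with a $v$ that does not vanish at the endpoints would leave exactly the boundary term that reappears later in the Neumann derivation.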

The weak form of the problem (1)-(2) is stated as follows: Given $f \in L^2(\Omega)$, find a function $u \in W_0^{1,2}(\Omega)$ such that

$\int_\Omega a_1 \nabla u \cdot \nabla v \,dx + \int_\Omega a_0 u v \,dx = \int_\Omega f v \,dx \quad \forall v \in W_0^{1,2}(\Omega).$

Let $V = W_0^{1,2}(\Omega)$. We define a bilinear form $a(\cdot,\cdot): V \times V \to \mathbb{R}$:

$a(u,v) = \int_\Omega a_1 \nabla u \cdot \nabla v \,dx + \int_\Omega a_0 u v \,dx$

and a linear form $l \in V'$:
$l(v) = \langle l, v \rangle = \int_\Omega f v \,dx.$

Then the weak formulation of the problem (1), (2) reads: Find a function $u \in V$ such that
$a(u,v) = l(v) \quad \forall v \in V.$
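The weak formulation $a(u,v) = l(v)$ is exactly what the Galerkin finite element method discretizes. Below is a minimal sketch in 1D on $(0,1)$ with $a_1 = a_0 = 1$, homogeneous Dirichlet conditions, and piecewise-linear (P1) basis functions; the mesh size, the manufactured right-hand side, and the lumped load vector are illustrative choices, not from the text:

```python
# Minimal P1 Galerkin sketch for -u'' + u = f, u(0) = u(1) = 0,
# with f manufactured so the exact solution is sin(pi x).
import numpy as np

n = 100                        # number of interior nodes (illustrative)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)   # interior mesh nodes

# a(u,v) = int u'v' dx + int uv dx  ->  stiffness K + mass M (tridiagonal)
K = (np.diag(np.full(n, 2.0 / h))
     + np.diag(np.full(n - 1, -1.0 / h), 1)
     + np.diag(np.full(n - 1, -1.0 / h), -1))
M = (np.diag(np.full(n, 2.0 * h / 3.0))
     + np.diag(np.full(n - 1, h / 6.0), 1)
     + np.diag(np.full(n - 1, h / 6.0), -1))

f = (np.pi**2 + 1.0) * np.sin(np.pi * x)   # so that u_exact = sin(pi x)
b = h * f                      # lumped approximation of l(phi_i) = int f phi_i dx

u = np.linalg.solve(K + M, b)  # Galerkin system: a(u_h, phi_i) = l(phi_i)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max nodal error: {err:.2e}")
```

Halving $h$ should roughly quarter the error, consistent with the $O(h^2)$ convergence of P1 elements.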

This notation is common in the study of partial differential equations and finite element methods.
Bilinear forms, energy norm, and energetic inner product

Every bilinear form $a: V \times V \to \mathbb{R}$ on a Banach space $V$ is associated with a unique linear operator $A: V \to V'$ defined by

(4)

$\langle Au, v \rangle = a(u,v).$

Note: (4) defines a one-to-one correspondence between continuous bilinear forms $a: V \times V \to \mathbb{R}$ and continuous linear operators $A: V \to V'$.

Definition 1 (Energetic inner product, energy norm) Let $V$ be a Hilbert space and $a: V \times V \to \mathbb{R}$ a bounded symmetric V-elliptic bilinear form. The bilinear form defines an inner product
(5)

$(u,v)_e = a(u,v)$

in $V$, called the energetic inner product. The norm induced by the energetic inner product,
(6)
$\|u\|_e = \sqrt{a(u,u)},$

is called the energy norm.
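In finite dimensions a bounded symmetric V-elliptic bilinear form is represented by a symmetric positive definite matrix $A$ with $a(u,v) = u^T A v$, so (5) and (6) can be illustrated directly; the matrix and vectors below are arbitrary examples:

```python
# Finite-dimensional sketch of (5)-(6): an SPD matrix A induces an
# energetic inner product and energy norm on R^2 (illustrative data).
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # symmetric positive definite

def a(u, v):                         # bilinear form a(u,v) = u^T A v
    return u @ A @ v

def energy_norm(u):                  # (6): ||u||_e = sqrt(a(u,u))
    return np.sqrt(a(u, u))

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

print(a(u, v) == a(v, u))            # symmetry: (u,v)_e = (v,u)_e
print(energy_norm(u) > 0)            # ellipticity makes the norm positive
```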

Theorem 1.5 (Lax-Milgram lemma) Let $V$ be a Hilbert space, $a: V \times V \to \mathbb{R}$ a bounded V-elliptic bilinear form and $l \in V'$. Then there exists a unique solution to the problem

$a(u,v) = l(v).$

Note: when $a$ is in addition symmetric, the unique solution is the unique representative of the linear form $l \in V'$ with respect to the energetic inner product $(u,v)_e = a(u,v)$. In this sense the Lax-Milgram lemma is a special case of the Riesz representation theorem.


Riesz representation theorem.

This theorem establishes an important connection between a Hilbert space and its (continuous) dual space.

Let $H$ be a Hilbert space, and let $H^*$ denote its dual space, consisting of all continuous linear functionals from $H$ into the field $\mathbb{R}$ or $\mathbb{C}$. If $x$ is an element of $H$, then the function $\varphi_x$, defined for all $y$ in $H$ by

$\varphi_x(y) = \langle y, x \rangle,$

where $\langle \cdot, \cdot \rangle$ denotes the inner product of the Hilbert space, is an element of $H^*$.

Note: The Riesz representation theorem states that every element of $H^*$ can be written uniquely in this form. Given any continuous linear functional $g \in H^*$ (on a real Hilbert space), the corresponding element $x_g \in H$ can be constructed by $x_g = g(e_1)e_1 + g(e_2)e_2 + \dots$, where $\{e_i\}$ is an orthonormal basis of $H$, and $x_g$ does not depend on the choice of basis. Thus, if $y \in H$, $y = a_1 e_1 + a_2 e_2 + \dots$, then $g(y) = a_1 g(e_1) + a_2 g(e_2) + \dots = \langle x_g, y \rangle$.
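The construction in the note can be reproduced numerically in the real Hilbert space $\mathbb{R}^n$ with the standard inner product; the functional $g$ below is an arbitrary illustrative choice:

```python
# Numerical sketch of the Riesz construction x_g = sum_i g(e_i) e_i
# in R^n; the coefficient vector c defining g is illustrative.
import numpy as np

n = 4
c = np.array([2.0, -1.0, 0.5, 3.0])

def g(y):                        # a continuous linear functional on R^n
    return c @ y

E = np.eye(n)                    # standard orthonormal basis {e_i}
x_g = sum(g(E[i]) * E[i] for i in range(n))   # Riesz representative

y = np.array([1.0, 4.0, -2.0, 0.0])
print(np.isclose(g(y), x_g @ y))              # g(y) = <x_g, y>

# the representative is independent of the orthonormal basis used:
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(n, n)))
x_g2 = sum(g(Q[:, i]) * Q[:, i] for i in range(n))
print(np.allclose(x_g, x_g2))
```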


Lemma 1. Assume that $a_1(x) \ge C_{\min} > 0$ and $a_0(x) \ge 0$ a.e. in $\Omega$. Then the weak problem has a unique solution $u \in V$.

Nonhomogeneous Dirichlet boundary conditions

More general Dirichlet boundary conditions:
$u(x) = g(x), \quad x \in \partial\Omega,$

with $g \in C(\partial\Omega)$. Consider a new function $G \in C^2(\Omega) \cap C(\overline{\Omega})$ (a Dirichlet lift), so that $g = G$ on $\partial\Omega$.

Note: GG is not unique, but we will show later that the solution is invariant under its choice.

Denote $u = G + U$; then the problem (1) can be rewritten as finding $U$ such that
$-\nabla\cdot(a_1\nabla(G+U)) + a_0(G+U) = f, \quad x \in \Omega,$

subject to
$G + U = g, \quad x \in \partial\Omega,$

or, equivalently,
$-\nabla\cdot(a_1\nabla U) + a_0 U = f + \nabla\cdot(a_1\nabla G) - a_0 G, \quad x \in \Omega,$

subject to
$U = 0, \quad x \in \partial\Omega.$

Independence of the solution $u = U + G$ of the Dirichlet lift $G$:
Assume that $u_1 = U_1 + G_1$ and $u_2 = U_2 + G_2$ are two weak solutions, with $U_1, U_2 \in W_0^{1,2}(\Omega)$. Then for every test function $v$,
$a(u_1, v) - a(u_2, v) = 0.$

Set $v = u_1 - u_2 \in W_0^{1,2}(\Omega)$; then by V-ellipticity
$0 = a(u_1 - u_2, u_1 - u_2) \ge C \|u_1 - u_2\|^2,$

that is, $u_1 = u_2$.
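The lift-independence argument can also be checked numerically. The sketch below solves $-u'' + u = f$ with $u(0) = 2$, $u(1) = 5$ on $(0,1)$ by finite differences, using two different lifts (a linear and a quadratic one); all specific choices are illustrative:

```python
# Solve -u'' + u = f, u(0)=alpha, u(1)=beta via u = G + U, where
# -U'' + U = f + G'' - G and U = 0 on the boundary, for two lifts.
import numpy as np

n, alpha, beta = 200, 2.0, 5.0
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)             # interior grid nodes
f = np.ones_like(x)                      # sample right-hand side

# finite-difference operator for -U'' + U with U = 0 at the endpoints
L = (np.diag(np.full(n, 2.0 / h**2 + 1.0))
     + np.diag(np.full(n - 1, -1.0 / h**2), 1)
     + np.diag(np.full(n - 1, -1.0 / h**2), -1))

def solve_with_lift(G, G_dd):
    U = np.linalg.solve(L, f + G_dd(x) - G(x))
    return G(x) + U                      # reconstructed u = G + U

u1 = solve_with_lift(lambda t: alpha * (1 - t) + beta * t,
                     lambda t: np.zeros_like(t))              # linear lift
u2 = solve_with_lift(lambda t: alpha + (beta - alpha) * t**2,
                     lambda t: np.full_like(t, 2 * (beta - alpha)))  # quadratic lift

print(np.max(np.abs(u1 - u2)))           # roundoff-level difference
```

Both lifts satisfy the same boundary values, so the reconstructed solutions agree (here even to roundoff, since the difference of the two lifts is a quadratic, on which the discrete second difference is exact).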

Neumann boundary conditions

Neumann boundary conditions of the form

$\frac{\partial u}{\partial \nu} = g, \quad x \in \partial\Omega.$

1. Multiply (1) with a test function $v \in C^\infty(\overline{\Omega})$ (now $v$ need not vanish on the boundary):
$-\nabla\cdot(a_1\nabla u)\,v + a_0 u v = f v,$

2. Integrate over $\Omega$:
$\int_\Omega -\nabla\cdot(a_1\nabla u)\,v \,dx + \int_\Omega a_0 u v \,dx = \int_\Omega f v \,dx,$

3. Reduce the maximum order of the partial derivatives.
       By Green's first identity,

$\langle f, -\Delta x \rangle_\Omega = \langle \nabla f, \nabla x \rangle_\Omega - \langle f, \partial_n x \rangle_{\partial\Omega},$

we have
$\int_\Omega -\nabla\cdot(a_1\nabla u)\,v \,dx = \int_\Omega a_1 \nabla u \cdot \nabla v \,dx - \int_{\partial\Omega} a_1 v\,\partial_n u \,dS.$

As $v$ does not vanish on the boundary $\partial\Omega$, the boundary term remains:
$\int_\Omega a_1 \nabla u \cdot \nabla v \,dx + \int_\Omega a_0 u v \,dx - \int_{\partial\Omega} a_1 v \frac{\partial u}{\partial \nu} \,dS = \int_\Omega f v \,dx.$

Using the boundary condition, the problem with the Neumann condition reads: given $f \in L^2(\Omega)$ and $g \in L^2(\partial\Omega)$, find $u$ such that
$\int_\Omega a_1 \nabla u \cdot \nabla v \,dx + \int_\Omega a_0 u v \,dx = \int_\Omega f v \,dx + \int_{\partial\Omega} a_1 g v \,dS.$


4. Find the largest possible function spaces for $u, v$ such that all integrals are finite.

Since $u, v$ need not vanish on the boundary, all integrals remain finite when the assumptions are weakened to
$u, v \in W^{1,2}(\Omega), \quad f \in L^2(\Omega).$
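The role of the boundary integral in the Neumann weak form can be seen in a small 1D example with P1 elements, $a_1 = a_0 = 1$ and $f = 0$: the function $\cosh(x)$ satisfies $-u'' + u = 0$ with $\partial u/\partial\nu = 0$ at $x = 0$ and $\partial u/\partial\nu = \sinh(1)$ at $x = 1$, so the Neumann datum enters only through the load vector. The discretization details below are illustrative:

```python
# P1 Neumann sketch on (0,1): -u'' + u = 0, du/dnu = 0 at x=0 and
# du/dnu = sinh(1) at x=1; exact solution u(x) = cosh(x).
import numpy as np

n = 200                                   # number of elements (illustrative)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)          # all nodes, boundary included

# stiffness and mass matrices on the full node set (no Dirichlet rows)
kd = np.full(n + 1, 2.0 / h); kd[[0, -1]] = 1.0 / h
md = np.full(n + 1, 2.0 * h / 3.0); md[[0, -1]] = h / 3.0
K = np.diag(kd) + np.diag(np.full(n, -1.0 / h), 1) + np.diag(np.full(n, -1.0 / h), -1)
M = np.diag(md) + np.diag(np.full(n, h / 6.0), 1) + np.diag(np.full(n, h / 6.0), -1)

# f = 0, so the load comes only from the boundary term of the weak form:
# int_dOmega a1 g v dS = g(1) v(1), with g(1) = sinh(1)
b = np.zeros(n + 1)
b[-1] = np.sinh(1.0)

u = np.linalg.solve(K + M, b)
err = np.max(np.abs(u - np.cosh(x)))
print(f"max nodal error: {err:.2e}")
```

Note that the Neumann condition was never imposed on the trial space; it is satisfied only in the limit, which is exactly why it is called natural.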

Remark (Essential and natural boundary conditions)
1. Dirichlet boundary conditions are sometimes called essential since they essentially influence the weak formulation: they determine the function space in which the solution is sought.
2. Neumann boundary conditions do not influence the function space and can be naturally incorporated into the boundary integrals. Therefore they are called natural.

Newton (Robin) boundary conditions

These boundary conditions involve a combination of function values and normal derivatives, e.g. $\frac{\partial u}{\partial \nu} + c u = g$ on $\partial\Omega$.


Energy of elliptic problems

In this part we introduce the explicit form of the abstract energy, at least for symmetric problems. The most important numerical schemes are based on the minimization of this abstract energy.

Theorem. Let $V$ be a linear space, $a: V \times V \to \mathbb{R}$ a symmetric V-elliptic bilinear form and $l \in V'$. Then the functional of abstract energy,
(T.1) $E(v) = \frac{1}{2} a(v,v) - l(v),$
attains its minimum in $V$ at an element $u \in V$ if and only if
(T.2) $a(u,v) = l(v) \quad \forall v \in V.$
Moreover, the minimizer $u \in V$ is unique.

Proof: Let (T.2) hold. Then, for any $v \in V$ and $t \in \mathbb{R}$,

$E(u+tv) = \frac{1}{2} a(u+tv, u+tv) - l(u+tv) = \frac{1}{2}(u+tv, u+tv)_e - l(u+tv) = \frac{1}{2}(u,u)_e + \frac{t^2}{2}(v,v)_e + t(u,v)_e - l(u) - t\,l(v).$

Since $(u,v)_e = a(u,v) = l(v)$, the terms linear in $t$ cancel, and
$E(u+tv) = E(u) + \frac{t^2}{2}(v,v)_e > E(u) \quad \text{for } tv \ne 0.$

Conversely, if $E$ has a minimum at $u \in V$, then for every $v \in V$ the derivative of $\phi(t) = E(u+tv)$ must vanish at $t = 0$:
$0 = \phi'(0) = \frac{\partial}{\partial t} E(u+tv)\Big|_{t=0} = (u,v)_e - l(v).$
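The theorem and its proof can be illustrated in finite dimensions, where a symmetric V-elliptic form is an SPD matrix $A$ and $E(v) = \frac{1}{2} v^T A v - b^T v$; the matrix and vector below are arbitrary examples:

```python
# Finite-dimensional sketch of the theorem: for SPD A, the energy
# E(v) = 0.5 v^T A v - b^T v is minimized at the solution of A u = b,
# and E(u + t v) = E(u) + (t^2/2)(v,v)_e for every v and t.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # symmetric positive definite
b = np.array([1.0, 2.0])

def E(v):
    return 0.5 * v @ A @ v - b @ v

u = np.linalg.solve(A, b)               # solves a(u,v) = l(v) for all v

rng = np.random.default_rng(1)
for _ in range(5):                      # random perturbations only raise E
    v = rng.normal(size=2)
    t = float(rng.normal())
    assert E(u + t * v) >= E(u) - 1e-12
    # the increase is exactly t^2/2 * (v,v)_e, as in the proof:
    assert np.isclose(E(u + t * v) - E(u), 0.5 * t**2 * (v @ A @ v))
print("minimizer of E coincides with the solution of A u = b")
```

This finite-dimensional picture is precisely what the Ritz method exploits: minimizing $E$ over a subspace yields the Galerkin solution in that subspace.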

