【FZU-MU】EE414 Assignment2

This post works through the polynomial interpolation problems from the EE414 course, including Lagrange interpolation and Newton interpolation. Through the answers to Q1-Q5, it discusses the accuracy and computational cost of the different models, concluding that Newton's model is preferable when efficient computation is needed, while the Lagrange model is preferable when accuracy is the priority.



Q1

First we can see the normal equations are:
$$
\begin{aligned}
\left( \sum x_i \right) c + \left( \sum x_i^2 \right) m &= \sum x_i y_i \\
n c + \left( \sum x_i \right) m &= \sum y_i
\end{aligned}
$$
Then we draw the regression table:

| $i$ | $x_i$ | $y_i$ | $x_i^2$ | $x_i y_i$ |
|---|---|---|---|---|
| 0 | -95 | -708 | 9025 | 67260 |
| 1 | -55 | -483 | 3025 | 26565 |
| 2 | -16 | -132 | 256 | 2112 |
| 3 | 24 | 254 | 576 | 6096 |
| $\sum$ | -142 | -1069 | 12882 | 102033 |

So we can get:
$$
\left\{
\begin{aligned}
-142 c + 12882 m &= 102033 \\
4 c - 142 m &= -1069
\end{aligned}
\right.
\Rightarrow
\left\{
\begin{aligned}
c &= 22.887 \\
m &= 8.173
\end{aligned}
\right.
$$

$$
F(x) = 8.173x + 22.887, \qquad \hat{y} = F(2) = 8.173 \times 2 + 22.887 = 39.233
$$
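As a quick sanity check, here is a minimal MATLAB sketch (my own addition, not part of the assignment) that builds the same normal equations from the data and solves them; the built-in polyfit gives the same line.

xd = [-95 -55 -16 24];
yd = [-708 -483 -132 254];
A = [sum(xd)    sum(xd.^2);    %% [sum(x)  sum(x^2)] * [c; m] = sum(x.*y)
     numel(xd)  sum(xd)];      %% [  n     sum(x)  ] * [c; m] = sum(y)
b = [sum(xd.*yd); sum(yd)];
cm = A \ b;                    %% cm(1) = c, cm(2) = m
y_hat = cm(2)*2 + cm(1)        %% estimate at x = 2, approximately 39.23
%% p = polyfit(xd, yd, 1)      %% same line: p(1) = m, p(2) = c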

Q2

First we draw the divided-difference table:

| $i$ | $x_i$ | $F(x_i)$ | $1^{st}DD$ | $2^{nd}DD$ | $3^{rd}DD$ |
|---|---|---|---|---|---|
| 0 | -95 | -708 | 5.625 | 0.04272 | $-2.8986\times 10^{-4}$ |
| 1 | -55 | -483 | 9 | 0.00823 | |
| 2 | -16 | -132 | 9.65 | | |
| 3 | 24 | 254 | | | |


So we can get:
$$
\begin{aligned}
P_3(x) &= -708 + 5.625(x+95) + 0.04272(x+95)(x+55) - 2.8986\times 10^{-4}(x+95)(x+55)(x+16) \\
\hat{y} &= P_3(2) \approx 44.9764
\end{aligned}
$$
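For reference, a minimal MATLAB sketch (my own addition) that builds the same divided-difference table and evaluates the Newton form at x = 2:

xd = [-95 -55 -16 24];
yd = [-708 -483 -132 254];
n  = numel(xd);
D  = zeros(n);
D(:,1) = yd(:);                      %% column 1 holds F(x_i)
for k = 2:n                          %% column k holds the (k-1)-th order DD
    for j = 1:n-k+1
        D(j,k) = (D(j+1,k-1) - D(j,k-1)) / (xd(j+k-1) - xd(j));
    end
end
xq = 2; p = D(1,1); w = 1;
for k = 2:n                          %% Newton form built from the top row of the table
    w = w * (xq - xd(k-1));
    p = p + D(1,k) * w;              %% p is approximately 44.98 at xq = 2
end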

Q3

First we draw the data table:

| $i$ | $x_i$ | $y_i$ |
|---|---|---|
| 0 | -95 | -708 |
| 1 | -55 | -483 |
| 2 | -16 | -132 |
| 3 | 24 | 254 |

Because $l_i = \prod_{j=0,\, j\ne i}^{n} \frac{x-x_j}{x_i-x_j}$, we can calculate:
$$
\begin{aligned}
l_0 &= \frac{(x-x_1)(x-x_2)(x-x_3)}{(x_0-x_1)(x_0-x_2)(x_0-x_3)} = \frac{(x+55)(x+16)(x-24)}{(-95+55)(-95+16)(-95-24)} \\
l_1 &= \frac{(x-x_0)(x-x_2)(x-x_3)}{(x_1-x_0)(x_1-x_2)(x_1-x_3)} = \frac{(x+95)(x+16)(x-24)}{(-55+95)(-55+16)(-55-24)} \\
l_2 &= \frac{(x-x_0)(x-x_1)(x-x_3)}{(x_2-x_0)(x_2-x_1)(x_2-x_3)} = \frac{(x+95)(x+55)(x-24)}{(-16+95)(-16+55)(-16-24)} \\
l_3 &= \frac{(x-x_0)(x-x_1)(x-x_2)}{(x_3-x_0)(x_3-x_1)(x_3-x_2)} = \frac{(x+95)(x+55)(x+16)}{(24+95)(24+55)(24+16)} \\
P_3(x) &= y_0 l_0 + y_1 l_1 + y_2 l_2 + y_3 l_3 \\
&= \frac{-708}{-376040}(x+55)(x+16)(x-24) + \frac{-483}{123240}(x+95)(x+16)(x-24) \\
&\quad + \frac{-132}{-123240}(x+95)(x+55)(x-24) + \frac{254}{376040}(x+95)(x+55)(x+16) \\
\hat{y} &= P_3(2) \approx 44.9846
\end{aligned}
$$
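To double-check the hand calculation, a short MATLAB sketch (my own addition) that evaluates this Lagrange form directly at x = 2:

x = 2;
y_hat = (-708)/(-376040)*(x+55)*(x+16)*(x-24) ...
      + (-483)/( 123240)*(x+95)*(x+16)*(x-24) ...
      + (-132)/(-123240)*(x+95)*(x+55)*(x-24) ...
      + ( 254)/( 376040)*(x+95)*(x+55)*(x+16)   %% approximately 44.98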

Q4

\quad First of all, among the three models, the $y$ values calculated by the Lagrange model are very close to the true values, and when $x$ equals 2 the estimated $\hat{y}$ of this model is also very accurate. However, the amount of calculation required by the Lagrange model is much larger than that of the other two models.
\quad Then, in Newton's model, the calculated $y$ values are very close to the true values for the earlier data points, but they deviate from the true values quickly for the later ones. When $x$ equals 2, the estimated $\hat{y}$ of this model differs noticeably from the truth. However, the calculation required by Newton's model is much lighter than that of the Lagrange model.
\quad So I think that when you need the estimated $\hat{y}$ with a smaller amount of computation, Newton's model is better; but if you need an accurate model, the Lagrange model is the best.

x = -100: 25; %% x values for plotting, from -100 to 25 (step 1)
y1 = 8.173*x + 22.887;
y2 = -708 + 5.625.*(x+95) + 0.04272.*(x+95).*(x+55)  -2.8986.* 10^-4.*(x+95).*(x+55).*(x+16);
y3 = 708.*(x+55).*(x+16).*(x-24)./376040 - 483.*(x+95).*(x+16).*(x-24)./123240 + 132.*(x+95).*(x+55).*(x-24)./123240 + 254.*(x+95).*(x+55).*(x+16)./ 376040;
figure %% new figure
hold on;
plot(x, y1,'-');
plot(x, y2,'--');
plot(x, y3,'-.');
hold on;
scatter(-95, -708,'o','filled','b');
scatter(-55, -483,'o','filled','b');
scatter(-16, -132,'o','filled','b');
scatter(24, 254,'o','filled','b');
scatter(2, 39.233,'x','r');
scatter(2, 44.9764,'*','r');
scatter(2, 44.9846,'o','r');
title('Q4')  %% add title
xlabel('x')  %% x-axis label
ylabel('y')  %% y-axis label

Q5

The data points are:

px = [3, 3.4, 3.8, 4.2, 4.6, 5, 5.4, 5.8, 6.2, 6.6, 7, 7.4, 7.8, 8.2];
py = [3.5, 2.4, 0.55, -0.91, -0.41, -1.3, -1.1, -1.7, -0.16, 0.17, -0.03, 1.7, 4.4, 6.2];
figure %% new figure
hold on;
for i = 1:14
    scatter(px(i), py(i),'o','filled','b');
end
title('Q5')  %% add title
xlabel('x')  %% x-axis label
ylabel('y')  %% y-axis label

\quad Based on observation of these data points, I chose a 13th-order polynomial to fit them.
\quad So I use MATLAB to achieve this:

px = [3, 3.4, 3.8, 4.2, 4.6, 5, 5.4, 5.8, 6.2, 6.6, 7, 7.4, 7.8, 8.2];
py = [3.5, 2.4, 0.55, -0.91, -0.41, -1.3, -1.1, -1.7, -0.16, 0.17, -0.03, 1.7, 4.4, 6.2];
len = 541;
x = linspace(2.9,8.3,len);
y = linspace(0,0,len);
for i = 1:14
    z = linspace(1,1,len);              %% i-th Lagrange basis l_i(x)
    for j = 1:14
        if j ~= i
            for k = 1:len
                z(k) = z(k) * (x(k)-px(j))/(px(i)-px(j));
            end
        end
    end
    for k = 1:len
        y(k) = y(k) + py(i) * z(k);     %% accumulate py(i) * l_i(x)
    end
end
figure  %% new figure
plot(x,y);
hold on;
for i = 1:14
    scatter(px(i), py(i),'o','filled','b');
end
title('Q5')  %% add title
xlabel('x')  %% x-axis label
ylabel('y')  %% y-axis label

Conclusion:
\quad Since there are 14 points, when we choose the Lagrange model the polynomial is of order 13. The MATLAB simulation shows that the results are good and meet the expected outcome.
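As a cross-check (my own addition, not part of the assignment), MATLAB's built-in polyfit and polyval can reproduce the same degree-13 interpolating curve from the 14 points, although polyfit may warn that a polynomial of this degree is badly conditioned:

p = polyfit(px, py, 13);    %% reuses px, py from the script above; 14 points, degree 13
y_fit = polyval(p, x);      %% evaluate on the same plotting grid x
plot(x, y_fit, '--');       %% should closely overlay the Lagrange curve, up to conditioning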

Q5 Method 2

px = [3, 3.4, 3.8, 4.2, 4.6, 5, 5.4, 5.8, 6.2, 6.6, 7, 7.4, 7.8, 8.2];
py = [3.5, 2.4, 0.55, -0.91, -0.41, -1.3, -1.1, -1.7, -0.16, 0.17, -0.03, 1.7, 4.4, 6.2];
len = 541;  %% number of x samples for plotting
x = linspace(2.9,8.3,len);  %% x values for plotting

r = length(px);
P = zeros(r,r);  %% divided-difference table
for i = 1:r
    P(i,1) = py(i);
end
for i = 1: r-1  %% order of the divided differences
    for j = 1: r-i
        P(j,i+1) = (P(j+i,i)-P(j,i)) / (px(j+i)-px(j));  %% i-th order DD
    end
end

e_y = ones(r,len);  %% basis functions for the Newton form
for i = 1: r-1
    e_y(i+1,:) = e_y(i,:).*(x-px(i));  %% next basis = previous one times (x - px(i))
end

y = zeros(1,len);   %% y values at the plotting points
for i = 1: r
    y = y + P(1,i).*e_y(i,:);   %% accumulate coefficient times basis
end
%% y = P(1,:)*e_y;  %% y values in one step

figure  %% new figure
plot(x,y);
hold on;
for i = 1: r
    scatter(px(i), py(i),'o','filled','b');
end
title('Q5')  %% add title
xlabel('x')  %% x-axis label
ylabel('y')  %% y-axis label

The fit of this code to the Q5 data is very poor, but the same method fits the Q2 data very well. I think Method 2 achieves a high-quality fit within a certain range, and it saves a considerable amount of computation compared with Lagrange interpolation. When there are many data points, I personally strongly recommend Lagrange interpolation to fit the data.
