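%% BP neural network fitting sin(x) on [0, 2*pi]
% A 1-10-1 feed-forward network is trained with hand-written back-propagation
% (gradient descent with a momentum term) and then evaluated on a dense test
% grid.  tansig, dtansig and sumsqr come from the Neural Network Toolbox.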

%% construct a network.
net.nIn=1;                              %the input layer has 1 neuron.
net.nHidden=10;                         %the hidden layer has 10 neurons.
net.nOut=1;                             %the output layer has 1 neuron.
w=2*(rand(net.nHidden,net.nIn)-1/2);    %the weights of the hidden layer, drawn uniformly from (-1,1).
b=2*(rand(net.nHidden,1)-1/2);          %the thresholds (biases) of the hidden layer.
net.w1=[w,b];                           %concatenate the weights and the thresholds into one matrix.
W=2*(rand(net.nOut,net.nHidden)-1/2);   %the weights of the output layer.
B=2*(rand(net.nOut,1)-1/2);             %the thresholds (biases) of the output layer.
net.w2=[W,B];                           %concatenate the weights and the thresholds into one matrix.
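% Storing the threshold as an extra column and feeding the network the
% augmented input [x; 1] turns w*x + b into a single matrix product, e.g.
%   net.w1 * [x; 1] == w*x + b          %since net.w1 = [w, b]
% A quick optional sanity check of the resulting shapes:
%   size(net.w1)                        %expected [10 2]  = nHidden x (nIn+1)
%   size(net.w2)                        %expected [1 11]  = nOut x (nHidden+1)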

%% set the parameters
mc=0.01;                                %set the momentum coefficient.
eta=0.001;                              %set the learning rate.
maxiter=50000;                          %set the maximum number of iterations.
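% The weight update applied in the training loop below is
%   w <- w + (1-mc)*eta*dW + mc*dW_old
% i.e. a plain step eta*dW on the first pass and a momentum-smoothed step on
% every later pass (dW_old is the gradient stored from the previous pass).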

%% set the training samples.
trainIn=[0:pi/4:2*pi];                  %the input of the training samples.
trainOut=sin(trainIn);                  %the output of the training samples.
trainnum=9;                             %the number of training samples.
SampIn=[trainIn;ones(1,trainnum)];      %the network input, augmented with a constant 1 as the threshold input.
expectedOut=trainOut;                   %the expected output is the output of the training samples.
errRec=zeros(1,maxiter);                %used to store the error of the training output.

%% set the testing samples
testIn=[0:pi/180:2*pi];                 %the input of the testing samples.
testOut=sin(testIn);                    %the output of the testing samples.
testnum=361;                            %the number of testing samples.

%% the training procedure
for i=1:maxiter
hid_input=net.w1*SampIn;                %calculate the weighted sum of the hidden layer.
hid_out=tansig(hid_input);              %calculate the output of the hidden layer.
ou_input1=[hid_out;ones(1,trainnum)];   %the input of the output layer, augmented with a constant 1 as the threshold input.
ou_input2=net.w2*ou_input1;             %calculate the weighted sum of the output layer.
out_out=2*tansig(ou_input2);            %calculate the output of the output layer.
err=expectedOut-out_out;                %calculate the error vector.
sse=sumsqr(err);                        %calculate the sum of squared errors.
errRec(i)=sse;                          %store the error
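% The forward pass above is, written in one line,
%   out_out = 2*tansig( net.w2 * [tansig(net.w1*[x;1]); 1] )
% The factor 2 widens the range of tansig from (-1,1) to (-2,2), presumably so
% the network can reach the extremes of sin(x) without saturating the output unit.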

%% the back-propagation of error
DELTA=err.*dtansig(ou_input2,out_out/2);                    %the error signal (delta) of the output layer.
delta=net.w2(:,1:end-1)'*DELTA.*dtansig(hid_input,hid_out); %the error signal (delta) of the hidden layer.
dWEX=DELTA*ou_input1';                                      %the gradient for the weights of the output layer.
dwex=delta*SampIn';                                         %the gradient for the weights of the hidden layer.
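% Chain rule behind the two error signals above (with f = tansig):
%   DELTA = (t - y) .* f'(ou_input2)                       %output layer
%   delta = (net.w2(:,1:end-1)' * DELTA) .* f'(hid_input)  %hidden layer
% dtansig(n,a) evaluates f'(n) from the already-computed output a = f(n), which
% is why out_out/2 (= tansig(ou_input2)) is passed in above; the constant factor
% 2 from the output scaling is dropped and effectively absorbed into eta.
% The weight gradients are then the outer products with each layer's input:
%   dWEX = DELTA*ou_input1'  and  dwex = delta*SampIn'.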
if i==1                                                     %on the first update there is no previous gradient, so skip the momentum term.
    net.w2=net.w2+eta*dWEX;
    net.w1=net.w1+eta*dwex;
else                                                        %else we use the momentum term.
    net.w2=net.w2+(1-mc)*eta*dWEX+mc*dWEXOld;
    net.w1=net.w1+(1-mc)*eta*dwex+mc*dwexOld;
end
dWEXOld=dWEX;                                               %store this pass's gradients for the momentum term of the next pass.
dwexOld=dwex;
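% Note: dWEXOld/dwexOld hold the raw gradient of the previous pass rather than
% the previous weight change eta*dWEX; classical momentum would reuse the
% previous update, so the mc term here is not scaled by the learning rate.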
end
%% the display of the results
subplot(1,2,1);
plot(errRec);                               %plot the error
title('error curve');
xlabel('iteration times');
ylabel('error');
realIn=[testIn;ones(1,testnum)];            %the network input for the testing samples, augmented with a constant 1.
realhid_input=net.w1*realIn;                %calculate the weighted sum of the hidden layer.
realhid_out=tansig(realhid_input);          %calculate the output of the hidden layer.
realou_input1=[realhid_out;ones(1,testnum)];%the input of the output layer, augmented with a constant 1 as the threshold input.
realou_input2=net.w2*realou_input1;         %calculate the weighted sum of the output layer.
realout_out=2*tansig(realou_input2);        %calculate the output of the output layer.
realerr=testOut-realout_out;                %calculate the error vector.
realsse=sumsqr(realerr);                    %calculate the sum of squared errors on the test set.
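% An easy way to inspect generalisation is to print the test-set error
% (realsse is otherwise computed but never displayed):
fprintf('sum of squared errors on the test set: %g\n',realsse);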

subplot(1,2,2);
plot(testIn,realout_out,testIn,sin(testIn));%plot the network output against the reference sin curve.
axis([0 2*pi -1.1 1.1]);                    %set the coordinate range.
set(gca,'XTick',pi/4:pi/4:2*pi);
grid on;
title('the testing output and the standard output');
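
%% a toolbox-level alternative (optional sketch)
% For comparison only: roughly the same 1-10-1 fit can be obtained through the
% Neural Network Toolbox front end instead of the hand-written back-propagation
% above.  This is a minimal sketch assuming feedforwardnet/train are available;
% it uses the toolbox's default training algorithm, not the momentum update
% implemented in this script.
net2=feedforwardnet(10);                    %one hidden layer with 10 neurons.
net2=train(net2,trainIn,trainOut);          %train on the same 9 samples.
yhat=net2(testIn);                          %evaluate on the test inputs.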

