Linear regression example

This tutorial implements a simple linear regression model with the Torch7 framework. The model predicts corn yield from the amounts of fertilizer and insecticide applied, using mean squared error as the loss function and stochastic gradient descent for training.


----------------------------------------------------------------------
-- example-linear-regression.lua
-- 
-- This script provides a very simple step-by-step example of
-- linear regression, using Torch7's neural network (nn) package,
-- and the optimization package (optim).
--

-- note: to run this script, simply do:
-- th script.lua
-- (on older Torch7 installs, the launcher was called 'torch')

-- to run the script, and get an interactive shell once it terminates:
-- th -i script.lua

-- we first require the necessary packages.
-- note: optim is a 3rd-party package, and needs to be installed
-- separately. This can be done with Torch7's package manager:
-- torch-pkg install optim
-- (on newer installs, use luarocks instead: luarocks install optim)

require 'torch'
require 'optim'
require 'nn'


----------------------------------------------------------------------
-- 1. Create the training data

-- In all regression problems, some training data needs to be
-- provided. In a realistic scenario, data comes from a database
-- or file system, and needs to be loaded from disk. In this
-- tutorial, we simply define the data source inline.

-- In general, the data can be stored in arbitrary forms, and using
-- Lua's flexible table data structure is usually a good idea. 
-- Here we store the data as a Torch Tensor (2D Array), where each
-- row represents a training sample, and each column a variable. The
-- first column is the target variable, and the others are the
-- input variables.

-- The data are from an example in Schaum's Outline:
-- Dominick Salvatore and Derrick Reagle
-- Schaum's Outline of Theory and Problems of Statistics and Econometrics
-- 2nd edition
-- McGraw-Hill
-- 2002

-- The data relate the amount of corn produced, given certain amounts
-- of fertilizer and insecticide. See p 157 of the text.

-- In this example, we want to be able to predict the amount of
-- corn produced, given the amount of fertilizer and insecticide used.
-- In other words: fertilizer & insecticide are our two input variables,
-- and corn is our target value.

--  {corn, fertilizer, insecticide}
data = torch.Tensor{
   {40,  6,  4},
   {44, 10,  4},
   {46, 12,  5},
   {48, 14,  7},
   {52, 16,  9},
   {58, 18, 12},
   {60, 22, 14},
   {68, 24, 20},
   {74, 26, 21},
   {80, 32, 24}
}
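
-- As a quick sanity check (an illustrative addition, not part of the
-- original script), note how Tensor slicing splits a row into target
-- and inputs; the same syntax is used later in the training closure:
print(data[1])             -- first sample: corn=40, fertilizer=6, insecticide=4
print(data[1][{ {1} }])    -- its target (corn)
print(data[1][{ {2,3} }])  -- its inputs (fertilizer, insecticide)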


----------------------------------------------------------------------
-- 2. Define the model (predictor)

-- The model will have one layer (called a module), which takes the 
-- 2 inputs (fertilizer and insecticide) and produces the 1 output 
-- (corn).

-- Note that the Linear model specified below has 3 parameters:
--   1 for the weight assigned to fertilizer
--   1 for the weight assigned to insecticide
--   1 for the weight assigned to the bias term

-- In some other model specification schemes, one needs to augment the
-- training data with a constant column of 1s for the intercept, but
-- this isn't necessary here: nn.Linear includes the bias term itself.

-- The linear model must be held in a container. A sequential container
-- is appropriate since the outputs of each module become the inputs of 
-- the subsequent module in the model. In this case, there is only one
-- module. In more complex cases, multiple modules can be stacked using
-- the sequential container.

-- The modules are all defined in the neural network package, which is
-- named 'nn'.

model = nn.Sequential()                 -- define the container
ninputs = 2; noutputs = 1
model:add(nn.Linear(ninputs, noutputs)) -- define the only module
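
-- An (optional) untrained forward pass confirms the shapes line up:
-- 2 inputs in, a 1-element Tensor out. The prediction itself is
-- meaningless at this point, since the weights are still randomly
-- initialized. This check is an addition for illustration:
print(model:forward(torch.Tensor{6, 4}))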


----------------------------------------------------------------------
-- 3. Define a loss function, to be minimized.

-- In this example, we minimize the Mean Square Error (MSE) between
-- the predictions of our linear model and the ground truth available
-- in the dataset.

-- Torch provides many common criterions (loss functions) to train
-- neural networks.

criterion = nn.MSECriterion()
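
-- To make explicit what the criterion computes, we can (optionally)
-- compare it against MSE written out by hand; for 1-element tensors,
-- this is just the squared difference. An illustrative addition:
local pred = torch.Tensor{41}
local tgt  = torch.Tensor{40}
print(criterion:forward(pred, tgt))     -- 1.0
print(torch.pow(pred - tgt, 2):mean())  -- the same value, by hand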


----------------------------------------------------------------------
-- 4. Train the model

-- To minimize the loss defined above, using the linear model defined
-- in 'model', we follow a stochastic gradient descent procedure (SGD).

-- SGD is a good optimization algorithm when the amount of training data
-- is large, and estimating the gradient of the loss function over the 
-- entire training set is too costly.

-- Given an arbitrarily complex model, we can retrieve its trainable
-- parameters, and the gradients of our loss function wrt these 
-- parameters by doing so:

x, dl_dx = model:getParameters()
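
-- For this 2-input, 1-output linear model, x holds exactly 3 numbers
-- (2 weights + 1 bias), and dl_dx has the same shape. A quick
-- (optional, illustrative) check:
print(x:size())     -- 3
print(dl_dx:size()) -- 3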

-- In the following code, we define a closure, feval, which computes
-- the value of the loss function at a given point x, and the gradient of
-- that function with respect to x. x is the vector of trainable weights,
-- which, in this example, are all the weights of the linear matrix of
-- our model, plus one bias.

feval = function(x_new)
   -- set x to x_new, if different
   -- (in this simple example, x_new will typically always point to x,
   -- so the copy is really useless)
   if x ~= x_new then
      x:copy(x_new)
   end

   -- select a new training sample
   _nidx_ = (_nidx_ or 0) + 1
   if _nidx_ > (#data)[1] then _nidx_ = 1 end

   local sample = data[_nidx_]
   local target = sample[{ {1} }]      -- this funny looking syntax allows
   local inputs = sample[{ {2,3} }]    -- slicing of arrays.

   -- reset gradients (gradients are always accumulated, to accommodate
   -- batch methods)
   dl_dx:zero()

   -- evaluate the loss function and its derivative wrt x, for that sample
   local loss_x = criterion:forward(model:forward(inputs), target)
   model:backward(inputs, criterion:backward(model.output, target))

   -- return loss(x) and dloss/dx
   return loss_x, dl_dx
end
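
-- One (optional) direct call shows what the closure returns: the loss
-- on the first sample, before any training. We reset the sample pointer
-- afterwards so training still starts at sample 1. This check is an
-- illustrative addition:
local loss0 = feval(x)
print('initial loss on sample 1: ' .. loss0)
_nidx_ = nil  -- reset the counter used inside feval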

-- Given the function above, we can now easily train the model using SGD.
-- For that, we need to define four key parameters:
--   + a learning rate: the size of the step taken at each stochastic 
--     estimate of the gradient
--   + a weight decay, to regularize the solution (L2 regularization)
--   + a momentum term, to average steps over time
--   + a learning rate decay, to let the algorithm converge more precisely

sgd_params = {
   learningRate = 1e-3,
   learningRateDecay = 1e-4,
   weightDecay = 0,
   momentum = 0
}
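
-- (A note on optim.sgd's schedule, based on its implementation: the
-- effective step at evaluation t is learningRate / (1 + t * learningRateDecay),
-- so the 1e-4 decay above slowly anneals the 1e-3 base rate.)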

-- We're now good to go... all we have left to do is run over the dataset
-- for a certain number of iterations, and perform a stochastic update 
-- at each iteration. The number of iterations is found empirically here,
-- but should typically be determined using cross-validation.

-- we cycle 1e4 times over our training data
for epoch = 1,1e4 do

   -- this variable is used to estimate the average loss
   current_loss = 0

   -- an epoch is a full loop over our training data
   for i = 1,(#data)[1] do

      -- optim contains several optimization algorithms. 
      -- All of these algorithms assume the same parameters:
      --   + a closure that computes the loss, and its gradient wrt x,
      --     given a point x
      --   + a point x
      --   + some parameters, which are algorithm-specific

      _, fs = optim.sgd(feval, x, sgd_params)

      -- Functions in optim all return two things:
      --   + the new x, found by the optimization method (here SGD)
      --   + the value of the loss function at all points that were used
      --     by the algorithm. SGD only evaluates the function once, so
      --     that list contains a single value.

      current_loss = current_loss + fs[1]
   end

   -- report average error on epoch
   current_loss = current_loss / (#data)[1]
   print('current loss = ' .. current_loss)

end
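
-- If we wanted to reuse the trained model later, Torch7's serialization
-- can save and reload it. An optional addition (the filename is
-- arbitrary, chosen for this example):
torch.save('example-linear-regression.t7', model)
-- model = torch.load('example-linear-regression.t7')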


----------------------------------------------------------------------
-- 5. Test the trained model.

-- Now that the model is trained, one can test it by evaluating it
-- on new samples.

-- The text solves the model exactly using matrix techniques and determines
-- that 
--   corn = 31.98 + 0.65 * fertilizer + 1.11 * insecticide
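
-- We can reproduce that exact least-squares fit with Torch's own linear
-- algebra. A sketch (an illustrative addition): build the design matrix
-- [fertilizer, insecticide, 1] and solve the normal equations
-- (X^T X) beta = X^T y:
local n = (#data)[1]
local X = torch.ones(n, 3)
X[{ {}, {1,2} }] = data[{ {}, {2,3} }]
local y = data[{ {}, {1} }]
local beta = torch.inverse(X:t() * X) * (X:t() * y)
print(beta)  -- approximately {0.65, 1.11, 31.98}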

-- We compare our approximate results with the text's results.

text = {40.32, 42.92, 45.33, 48.85, 52.37, 57, 61.82, 69.78, 72.19, 79.42}

print('id  approx   text')
for i = 1,(#data)[1] do
   local myPrediction = model:forward(data[i][{{2,3}}])
   print(string.format("%2d  %6.2f %6.2f", i, myPrediction[1], text[i]))
end
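
-- Finally, the learned parameters can be read off the Linear module
-- (model:get(1) is the nn.Linear inside the Sequential container) and
-- compared with the text's exact coefficients. An illustrative addition:
print('learned weights:'); print(model:get(1).weight)  -- ~ 0.65, 1.11
print('learned bias:');    print(model:get(1).bias)    -- ~ 31.98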