Visualization in C# with ScottPlot

Preface

The previous article showed how to implement simple linear regression with NumSharp, but without any visualization, so it was hard to get an intuitive feel for the fitting process. In this article I'll show how to visualize it in C# with ScottPlot; I'll also add the corresponding visualization to the Python code.

Visualizing with Python

The Python code uses matplotlib for visualization, which I won't cover in detail here.

The updated Python code is as follows:

# The optimal values of m and b can actually be calculated in closed form,
# with far less effort; this is just to demonstrate gradient descent.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation


# y = mx + b
# m is slope, b is y-intercept
def compute_error_for_line_given_points(b, m, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        totalError += (y - (m * x + b)) ** 2
    return totalError / float(len(points))

def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        b_gradient += -(2/N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    args_data = []
    for i in range(num_iterations):
        b, m = step_gradient(b, m, np.array(points), learning_rate)
        args_data.append((b,m))
    return args_data

if __name__ == '__main__':
     points = np.genfromtxt("data.csv", delimiter=",")
     learning_rate = 0.0001
     initial_b = 0 # initial y-intercept guess
     initial_m = 0 # initial slope guess
     num_iterations = 10
     print ("Starting gradient descent at b = {0}, m = {1}, error = {2}".format(initial_b, initial_m, compute_error_for_line_given_points(initial_b, initial_m, points)))
     print ("Running...")
     args_data = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
     
     b = args_data[-1][0]
     m = args_data[-1][1]

     print ("After {0} iterations b = {1}, m = {2}, error = {3}".format(num_iterations, b, m, compute_error_for_line_given_points(b, m, points)))
    
     data = np.array(points).reshape(100,2)
     x1 = data[:,0]
     y1 = data[:,1]
     
     x2 = np.linspace(20, 80, 100)
     y2 = initial_m * x2 + initial_b

     data2 = np.array(args_data)
     b_every = data2[:,0]
     m_every = data2[:,1]

     # Create the figure and axes
     fig, ax = plt.subplots()
     line1, = ax.plot(x1, y1, 'ro')
     line2, = ax.plot(x2,y2)

     # Add labels and a title
     plt.xlabel('x')
     plt.ylabel('y')
     plt.title('Graph of y = mx + b')

     # Add a grid
     plt.grid(True)

     # Define the update function for each frame of the animation
     def update(frame):
         line2.set_ydata(m_every[frame] * x2 + b_every[frame])
         ax.set_title(f'{frame} Graph of y = {m_every[frame]:.2f}x + {b_every[frame]:.2f}')

     # Create the animation
     animation = FuncAnimation(fig, update, frames=len(data2), interval=500)

     # Show the animation
     plt.show()
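The comment at the top of the script is worth making concrete: for a straight-line fit, the optimal m and b have a closed-form least-squares solution, so no iteration is actually needed. Here is a minimal pure-Python sketch of that formula, using a tiny made-up dataset rather than data.csv:

```python
# Closed-form least squares for y = m*x + b:
#   m = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)^2)
#   b = y_mean - m * x_mean
def least_squares_fit(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))
    b = mean_y - m * mean_x
    return b, m

# Tiny synthetic dataset lying exactly on y = 2x + 1
points = [(1, 3), (2, 5), (3, 7), (4, 9)]
b, m = least_squares_fit(points)
print(f"b = {b}, m = {m}")  # recovers b = 1.0, m = 2.0
```

Running gradient descent long enough on the same data converges toward these values, which makes the closed form a handy sanity check for the iterative version.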

The result is shown below:

Visualization produced by the Python code

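Before moving on to C#, the update rule itself can be sanity-checked on a tiny synthetic dataset in plain Python (no NumPy, no plotting): with a suitably small learning rate, the mean squared error should fall on every step. This is only an illustrative sketch; the dataset and learning rate are made up and not taken from data.csv.

```python
def step(b, m, points, lr):
    # One gradient-descent step on the MSE, mirroring step_gradient above
    n = float(len(points))
    b_grad = sum(-(2 / n) * (y - (m * x + b)) for x, y in points)
    m_grad = sum(-(2 / n) * x * (y - (m * x + b)) for x, y in points)
    return b - lr * b_grad, m - lr * m_grad

def mse(b, m, points):
    return sum((y - (m * x + b)) ** 2 for x, y in points) / len(points)

points = [(1, 3), (2, 5), (3, 7), (4, 9)]  # lies exactly on y = 2x + 1
b = m = 0.0
errors = [mse(b, m, points)]
for _ in range(100):
    b, m = step(b, m, points, lr=0.05)
    errors.append(mse(b, m, points))

# For this convex problem and a small enough lr, the error never increases
assert all(e1 >= e2 for e1, e2 in zip(errors, errors[1:]))
```

If the learning rate is too large for the data (here, roughly above 0.1), the iteration diverges instead, which is why the original script uses a tiny 0.0001 for the larger values in data.csv.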

Visualizing with C#

This is the main focus of this article: the C# code uses ScottPlot for visualization.

A Brief Introduction to ScottPlot

ScottPlot is a free, open-source plotting library for .NET that makes it easy to interactively display large datasets.

Visualization in a Console App

First, I'll show how to do the visualization in a console application.

Start by adding the ScottPlot NuGet package:


Modify the C# code from the previous article as follows:

using NumSharp;

namespace LinearRegressionDemo
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Create lists of doubles
            List<double> Array = new List<double>();
            List<double> ArgsList = new List<double>();

            // Path to the CSV file
            string filePath = "path to your data.csv";

            // Read the CSV data with the ReadCsv method
            Array = ReadCsv(filePath);

            var array = np.array(Array).reshape(100,2);

            double learning_rate = 0.0001;
            double initial_b = 0;
            double initial_m = 0;
            double num_iterations = 10;

            Console.WriteLine($"Starting gradient descent at b = {initial_b}, m = {initial_m}, error = {compute_error_for_line_given_points(initial_b, initial_m, array)}");
            Console.WriteLine("Running...");
            ArgsList = gradient_descent_runner(array, initial_b, initial_m, learning_rate, num_iterations);
            double b = ArgsList[ArgsList.Count - 2];
            double m = ArgsList[ArgsList.Count - 1];
            Console.WriteLine($"After {num_iterations} iterations b = {b}, m = {m}, error = {compute_error_for_line_given_points(b, m, array)}");
            Console.ReadLine();

            var x1 = array[":", 0];
            