ML Java Version 01

import java.util.Random;

public class Perceptron {

    private double[] weights;
    private double learnc;  // learning rate
    private double bias;

    public Perceptron(int inputSize, double learningRate) {
        this.learnc = learningRate;
        this.bias = 1.0;
        this.weights = new double[inputSize + 1]; // +1 for bias weight

        Random rand = new Random();
        for (int i = 0; i < weights.length; i++) {
            weights[i] = rand.nextDouble() * 2 - 1; // Random between -1 and 1
        }
    }

    // Expects the extended input vector (raw inputs followed by the bias
    // input), so inputs.length matches weights.length.
    public int activate(double[] inputs) {
        double sum = 0.0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        // Simple step function
        return sum >= 0 ? 1 : 0;
    }

    public void train(double[] inputs, int desired) {
        double[] extendedInputs = new double[inputs.length + 1];
        System.arraycopy(inputs, 0, extendedInputs, 0, inputs.length);
        extendedInputs[inputs.length] = this.bias;  // Add bias input

        int guess = activate(extendedInputs);
        int error = desired - guess;

        if (error != 0) {
            for (int i = 0; i < extendedInputs.length; i++) {
                weights[i] += this.learnc * error * extendedInputs[i];
            }
        }
    }

    public void printWeights() {
        for (int i = 0; i < weights.length; i++) {
            System.out.println("Weight[" + i + "] = " + weights[i]);
        }
    }
}
// Main.java — a Java source file may contain only one public class,
// so Perceptron and Main must live in separate files.
public class Main {
    public static void main(String[] args) {
        Perceptron p = new Perceptron(2, 0.01);

        // Simple training data for AND logic
        double[][] inputs = {
            {0, 0},
            {0, 1},
            {1, 0},
            {1, 1}
        };
        int[] labels = {0, 0, 0, 1}; // AND gate output

        // Train the perceptron
        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                p.train(inputs[i], labels[i]);
            }
        }

        // Test
        for (int i = 0; i < inputs.length; i++) {
            double[] input = {inputs[i][0], inputs[i][1]};
            double[] extended = {input[0], input[1], 1.0}; // add bias
            int output = p.activate(extended);
            System.out.println("Input: " + input[0] + ", " + input[1] + " => Output: " + output);
        }

        p.printWeights();
    }
}

The Perceptron class above is a simple implementation of the perceptron algorithm, one of the foundational models in machine learning, used for binary classification (sorting inputs into one of two classes, such as 0 or 1).


📦 Class Overview

public class Perceptron {

This is a single-layer Perceptron with:

  • A vector of weights (one for each input, plus one for the bias),

  • A learning rate (learnc),

  • A constant bias term.


🧱 Fields (Variables)

private double[] weights;
private double learnc;  // learning rate
private double bias;
  • weights: Array of weights, one for each input + 1 for the bias.

  • learnc: The learning rate (controls how big weight updates are).

  • bias: A constant (typically 1) that allows the decision boundary to shift.


🏗 Constructor

public Perceptron(int inputSize, double learningRate)
  • Initializes the Perceptron with random weights between -1 and 1.

  • inputSize + 1 weights to account for the bias input.

  • learningRate is stored in learnc.
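
The initialization step can be sketched in isolation. The fixed seed below is an assumption added here so the run is repeatable; the original constructor uses an unseeded Random:

```java
import java.util.Random;

public class InitDemo {
    public static void main(String[] args) {
        Random rand = new Random(42);                 // fixed seed for repeatability
        int inputSize = 2;
        double[] weights = new double[inputSize + 1]; // +1 slot for the bias weight
        for (int i = 0; i < weights.length; i++) {
            // nextDouble() is in [0, 1); scaling by 2 and shifting maps it to [-1, 1)
            weights[i] = rand.nextDouble() * 2 - 1;
        }
        for (double w : weights) {
            System.out.println(w >= -1 && w < 1);     // every weight lands in range
        }
    }
}
```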


⚙️ activate Method

public int activate(double[] inputs)
  • Computes the weighted sum of the inputs:

    sum = Σ_i x_i · w_i
  • Applies a step function:

    • If sum >= 0, output is 1

    • Else, output is 0

  • This determines the predicted class (0 or 1).
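
A minimal sketch of the weighted sum plus step function, with hypothetical weights chosen so the arithmetic is easy to check by hand:

```java
public class ActivateDemo {
    // Weighted sum followed by a step function, mirroring activate()
    static int activate(double[] inputs, double[] weights) {
        double sum = 0.0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sum >= 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        // Hypothetical weights: 0.5, -0.3, plus a bias weight of 0.1;
        // the last input slot is the constant bias input 1.0.
        double[] weights = {0.5, -0.3, 0.1};
        System.out.println(activate(new double[]{1, 1, 1.0}, weights)); // 0.5 - 0.3 + 0.1 = 0.3 >= 0 → 1
        System.out.println(activate(new double[]{0, 1, 1.0}, weights)); // -0.3 + 0.1 = -0.2 < 0 → 0
    }
}
```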


🧠 train Method

public void train(double[] inputs, int desired)

This is where learning happens:

  1. Add the bias input (1.0) to the input vector.

  2. Call activate() to get the prediction (guess).

  3. Compute error: desired - guess.

  4. If there’s an error, update each weight:

    weights[i] += learnc * error * extendedInputs[i];
    

This is the Perceptron Learning Rule:

  • Move the weights slightly in a direction that reduces error.
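
One update worked through by hand (the starting weights and learning rate here are hypothetical, picked to keep the arithmetic simple):

```java
import java.util.Arrays;

public class UpdateDemo {
    public static void main(String[] args) {
        // One perceptron update: inputs (1, 0) with the bias input 1.0 appended
        double[] x = {1.0, 0.0, 1.0};
        double[] w = {0.2, 0.4, 0.2};   // hypothetical starting weights
        double learnc = 0.1;
        int desired = 0;

        double sum = 0;
        for (int i = 0; i < x.length; i++) sum += x[i] * w[i];
        int guess = sum >= 0 ? 1 : 0;   // sum = 0.4, so guess = 1
        int error = desired - guess;    // 0 - 1 = -1

        // The learning rule: weights[i] += learnc * error * extendedInputs[i]
        for (int i = 0; i < x.length; i++) {
            w[i] += learnc * error * x[i];
        }
        // Weights attached to active inputs shrink by 0.1; w[1] is untouched
        // because its input was 0.
        System.out.println(Arrays.toString(w));
    }
}
```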


🧾 printWeights Method

public void printWeights()
  • Prints the current weights (for inspection/debugging/training analysis).


✅ Summary of Workflow:

  1. Initialize with random weights.

  2. Train on labeled data using train().

  3. Use activate() to predict new inputs.

  4. Call printWeights() to see how the model evolved.


🧪 Example Use Case

Train a Perceptron to learn the AND gate:

double[][] data = {
    {0, 0}, {0, 1}, {1, 0}, {1, 1}
};
int[] labels = {0, 0, 0, 1}; // AND logic

Perceptron p = new Perceptron(2, 0.1);
for (int epoch = 0; epoch < 100; epoch++) {
    for (int i = 0; i < data.length; i++) {
        p.train(data[i], labels[i]);
    }
}
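
Training and testing can also be put together in one self-contained class (the fixed seed is an assumption added for repeatability; because AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on correct weights):

```java
import java.util.Random;

public class AndGateDemo {
    public static void main(String[] args) {
        double[][] data = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] labels = {0, 0, 0, 1};   // AND gate outputs
        double learnc = 0.1;
        Random rand = new Random(1);   // seeded so every run is identical
        double[] w = new double[3];    // two input weights + one bias weight
        for (int i = 0; i < w.length; i++) w[i] = rand.nextDouble() * 2 - 1;

        // Train: 100 passes over the four labeled examples
        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < data.length; i++) {
                double[] x = {data[i][0], data[i][1], 1.0};   // append bias input
                double sum = 0;
                for (int j = 0; j < 3; j++) sum += x[j] * w[j];
                int guess = sum >= 0 ? 1 : 0;
                int error = labels[i] - guess;
                for (int j = 0; j < 3; j++) w[j] += learnc * error * x[j];
            }
        }

        // Test: the trained weights reproduce the AND truth table
        for (int i = 0; i < data.length; i++) {
            double[] x = {data[i][0], data[i][1], 1.0};
            double sum = 0;
            for (int j = 0; j < 3; j++) sum += x[j] * w[j];
            System.out.println(sum >= 0 ? 1 : 0);
        }
    }
}
```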
