Multi-label learning with a BP neural network

This post presents a Python implementation of a multi-label back-propagation neural network (MLBP). The model is configured through parameters such as the number of hidden neurons and the type of activation function, then trained and tested. It is designed for data in which each sample may belong to several classes at once; the network is trained with backpropagation, and the final step computes and reports multi-label evaluation metrics such as average precision, ranking loss, Hamming loss, and coverage.
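To make the metrics listed above concrete, here is an illustrative plain-Python sketch of two of them, Hamming loss and ranking loss. The post's own `Evaluation_metrics` module is not shown, so the function names and signatures here are assumptions, not the blog's actual code (this version counts tied scores as ranking errors; other definitions score ties as 0.5).

```python
# Illustrative multi-label metrics; NOT the blog's Evaluation_metrics module.
# y_true / y_pred are lists of 0/1 label vectors, scores are real-valued outputs.

def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, over all samples."""
    n_samples = len(y_true)
    n_labels = len(y_true[0])
    wrong = sum(
        1 for yt, yp in zip(y_true, y_pred)
        for t, p in zip(yt, yp) if t != p
    )
    return wrong / (n_samples * n_labels)

def ranking_loss(y_true, scores):
    """Average fraction of (relevant, irrelevant) label pairs ranked wrongly."""
    total = 0.0
    for yt, s in zip(y_true, scores):
        pos = [i for i, t in enumerate(yt) if t == 1]
        neg = [i for i, t in enumerate(yt) if t == 0]
        if not pos or not neg:
            continue  # undefined for all-positive or all-negative samples
        bad = sum(1 for i in pos for j in neg if s[i] <= s[j])
        total += bad / (len(pos) * len(neg))
    return total / len(y_true)

y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0]]
scores = [[0.9, 0.2, 0.4], [0.1, 0.8, 0.3]]
print(hamming_loss(y_true, y_pred))  # 1 wrong slot out of 6 -> 0.1666...
print(ranking_loss(y_true, scores))  # every relevant label outscores the rest -> 0.0
```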


# -*- coding: utf-8 -*-
"""
Created on 2017/4/5 9:52 2017

@author: Randolph.Lee
"""
from __future__ import division
from pybrain.structure import *
from Evaluation_metrics import *
from Threshold_function import get_threshold
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
import scipy.io as scio
import numpy as np


class MLBP:
    def __init__(self, hidden_neuron, alpha=0.05, epochs=100, in_type=2, out_type=2, cost=0.1, min_max=None):
        """
        :param hidden_neuron: Number of hidden neurons used in the network
        :param alpha: Learning rate for updating weights and biases, default=0.05
        :param epochs: Maximum number of training epochs, default=100
        :param in_type: The type of activation function used for the hidden neurons, 1 for 'logsig', 2 for 'tansig'
        :param out_type: The type of activation function used for the output neurons, 1 for 'logsig', 2 for 'tansig'
        :param cost: Cost parameter used for regularization, default=0.1
        :param min_max: min_max for data standardization
        """
        self.hidden_neuron = hidden_neuron
        self.alpha = alpha
        self.epochs = epochs
        self.in_type = in_type
        self.out_type = out_type
        self.cost = cost
        self.min_max = min_max
        self.output = None
        self.hamming_loss = 0.0
        self.ranking_loss = 0.0
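The `in_type`/`out_type` parameters in the constructor above select between `'logsig'` and `'tansig'` activations. These are MATLAB-style names for the logistic sigmoid and the hyperbolic tangent; in pybrain the corresponding layer types are `SigmoidLayer` and `TanhLayer`. A minimal sketch of the two functions themselves (not the pybrain internals):

```python
import math

# 'logsig' and 'tansig' are the MATLAB names for these two activations.

def logsig(x):
    """Logistic sigmoid: squashes x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Hyperbolic tangent: squashes x into (-1, 1)."""
    return math.tanh(x)

print(logsig(0.0))   # 0.5
print(tansig(0.0))   # 0.0
```

The choice matters for the output layer: `logsig` outputs lie in (0, 1) and pair naturally with 0/1 label targets, while `tansig` outputs lie in (-1, 1) and suit -1/+1 label encodings.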
   