Shapley Explanation Networks

The paper "Shapley Explanation Networks" explores how Shapley values can be used to tackle the interpretability problem of neural networks. The Shapley value comes from game theory and measures feature importance, but it is expensive to compute. The paper observes that existing methods are purely post-hoc analyses that cannot directly improve the model. The goal of this work is to cut the computational cost through approximation and to use Shapley values to guide regularization and pruning of the network, i.e. to optimize the model directly.

Author: cyl (class of 2018)

Date: 2021-08-15

Paper: "Shapley Explanation Networks"

Venue: ICLR

I. Preliminaries: the Shapley Value

The Shapley value originates in cooperative game theory: if n players cooperate and together create value v(N), how should each player's individual contribution be assessed? By computing each player's Shapley value.
Formula (N is the set of all n players, and v(S) is the value created by coalition S):

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$
Properties

  1. It can assess the importance of a group of features, not only a single one.
  2. Exact computation is exponential in the number of players, since it sums over all coalitions.
  3. It is compatible with linear transformations: once the Shapley values of a function are known, applying a linear transformation to that function yields Shapley values that are the same linear transformation of the original ones (see the sketch after this list).
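
To make the definition and these properties concrete, here is a minimal brute-force sketch (not code from the paper). It evaluates the formula above by enumerating every coalition, which makes the exponential cost of point 2 explicit, and it checks the linearity property of point 3; the 3-player toy game and its payoffs are made up purely for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    """Exact Shapley values for an n-player game.

    v maps a frozenset of player indices (a coalition) to the value it
    creates. Each player requires a sum over all 2^(n-1) coalitions of
    the remaining players, hence the exponential complexity.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # weight = |S|! * (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Toy 3-player game: each player contributes individually, and players
# 0 and 1 earn an extra bonus when they cooperate.
contrib = [1.0, 2.0, 3.0]
def v(S):
    bonus = 4.0 if {0, 1} <= S else 0.0
    return sum(contrib[p] for p in S) + bonus

phi = shapley_values(v, 3)
print(phi)        # ~[3.0, 4.0, 3.0]; the shares sum to v(N) = 10 (efficiency)

# Linearity: scaling the game scales the Shapley values the same way.
print(shapley_values(lambda S: 2.0 * v(S), 3))   # ~[6.0, 8.0, 6.0]
```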

II. Background

Interpretability of neural networks is an important open problem, and understanding the importance of each input feature is one direction for improving it.
The Shapley value from game theory is one way to assess the importance of the components of a system, and it has already been applied in machine learning, but two problems remain:
(1) because of how the Shapley value is defined, computing it exactly takes exponential time;
(2) current methods are all post-hoc: the model is already built, and Shapley values are then computed for it, which costs additional time and means the explanations cannot be used to improve the model itself.
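
As a deliberately simplified illustration of what post-hoc explanation looks like, the sketch below attributes a single prediction of an already-trained model by defining a coalition game over its input features, where features outside the coalition are replaced by a baseline. The linear stand-in model, the zero baseline, and the masking scheme are illustrative assumptions, not the paper's method; it reuses the `shapley_values` helper from the earlier sketch and therefore inherits the exponential cost of problem (1).

```python
import numpy as np

# Stand-in for a trained model: any callable from a feature vector to a score.
# A fixed linear model is used here purely for illustration.
w = np.array([0.5, -1.2, 2.0, 0.3])
def model(x):
    return float(w @ x)

def posthoc_shapley(model, x, baseline):
    """Post-hoc Shapley attribution of model(x) for a single input x.

    Coalition game: features in S keep their value from x, the rest are
    set to the baseline. Reuses shapley_values() from the sketch above,
    so the enumeration is exponential in the number of features.
    """
    n = len(x)
    def v(S):
        masked = baseline.copy()
        for i in S:
            masked[i] = x[i]
        return model(masked) - model(baseline)
    return shapley_values(v, n)

x = np.array([1.0, 2.0, -1.0, 0.5])
baseline = np.zeros_like(x)
print(posthoc_shapley(model, x, baseline))
# For this linear model, feature i is attributed w[i] * (x[i] - baseline[i]).
```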
