Understanding "Deep Forest: Towards An Alternative to Deep Neural Network"

Advantages of Deep Forest

1. Performance highly competitive with deep neural networks
2. gcForest has far fewer hyperparameters, is easier to train, and the number of cascade levels adapts automatically to the data
3. Faster training
4. gcForest performs well even on small-scale datasets
5. The tree-based structure is easier to analyze and understand theoretically

Drawbacks of Deep Learning

1. Requires large amounts of training data to achieve good results
2. DNN architectures are complex and computationally expensive (to exploit large training data, the model needs greater capacity, which makes it more complex)
3. DNNs have a huge number of parameters, and their performance depends heavily on careful hyperparameter tuning

Structure of Deep Forest


Note: all figures in this post are taken from the original paper, "Deep Forest: Towards An Alternative to Deep Neural Network".
The Deep Forest architecture consists of two main parts.
The first part is Multi-Grained Scanning. It plays a role similar to convolution in a convolutional neural network: groups of adjacent features are processed together, so the relationships among neighboring features are taken into account. In the paper's example, this expands a 400-dimensional feature vector to 3,618 dimensions. A minimal sketch of the transformation follows.
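To make the idea concrete, here is a minimal sketch of multi-grained scanning in Python, assuming scikit-learn. The window size, forest sizes, and class count are illustrative assumptions, and approximating the paper's completely-random forest with ExtraTreesClassifier(max_features=1) is a simplification; the paper also generates class vectors via k-fold cross-validation to reduce overfitting, which this sketch skips for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

def multi_grained_scan(X, y, window=100, n_classes=3):
    """Slide a length-`window` window over each feature vector, fit one
    random forest and one (approximately) completely-random forest on the
    windowed instances, and concatenate their class-probability vectors."""
    n_samples, n_features = X.shape
    n_windows = n_features - window + 1  # stride 1

    # Every window position of every sample becomes one instance,
    # labeled with the label of the sample it was cut from.
    instances = np.stack([X[:, i:i + window] for i in range(n_windows)], axis=1)
    flat = instances.reshape(-1, window)     # (n_samples * n_windows, window)
    labels = np.repeat(y, n_windows)         # assumes integer labels 0..n_classes-1

    rf = RandomForestClassifier(n_estimators=30).fit(flat, labels)
    # max_features=1 makes each split pick a random feature, approximating
    # the paper's completely-random forest.
    crf = ExtraTreesClassifier(n_estimators=30, max_features=1).fit(flat, labels)

    # Each forest maps every windowed instance to an n_classes-dim class
    # vector; concatenating all of them gives the expanded representation.
    probs = np.hstack([rf.predict_proba(flat), crf.predict_proba(flat)])
    return probs.reshape(n_samples, n_windows * 2 * n_classes)
```

For a 400-dimensional input with a window of 100 and 3 classes, this single window size yields 301 × 2 × 3 = 1,806 transformed dimensions per sample; the paper scans with several window sizes and concatenates the results, which is how the representation grows further.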
The second part, Cascade Forest, is the core of the algorithm. Forests, themselves ensembles of weak classifiers (decision trees), are ensembled again into a cascade of levels, with each level consisting of four forests. The final prediction is obtained by averaging the class vectors produced by the four forests in the top level of the cascade and taking the class with the highest averaged score.

The overall Cascade Forest algorithm is as follows:
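Since the paper's figure is not reproduced here, below is a minimal sketch of the cascade growth loop under the same scikit-learn assumptions. The hold-out validation split, forest sizes, and stopping rule are illustrative; the paper estimates each level's performance by cross-validation rather than a fixed split.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split

def train_cascade(X, y, max_levels=10):
    """Grow cascade levels of four forests each; stop when the held-out
    accuracy of the averaged class vectors no longer improves."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)
    aug_tr, aug_val = X_tr, X_val  # level input: original features (+ class vectors later)
    levels, best_acc = [], 0.0

    for _ in range(max_levels):
        # Two random forests and two (approximately) completely-random forests.
        forests = [RandomForestClassifier(n_estimators=100),
                   RandomForestClassifier(n_estimators=100),
                   ExtraTreesClassifier(n_estimators=100, max_features=1),
                   ExtraTreesClassifier(n_estimators=100, max_features=1)]
        for f in forests:
            f.fit(aug_tr, y_tr)

        # Each forest outputs a class-probability vector per sample; the four
        # vectors are concatenated with the original features to form the
        # input of the next level.
        tr_vecs = [f.predict_proba(aug_tr) for f in forests]
        val_vecs = [f.predict_proba(aug_val) for f in forests]
        aug_tr = np.hstack([X_tr] + tr_vecs)
        aug_val = np.hstack([X_val] + val_vecs)

        # Adaptive depth: the final prediction averages the four class
        # vectors and takes the argmax; stop growing when validation
        # accuracy stops improving (assumes integer labels 0..C-1).
        acc = np.mean(np.mean(val_vecs, axis=0).argmax(axis=1) == y_val)
        levels.append(forests)
        if acc <= best_acc:
            break  # a full implementation would also drop this last level
        best_acc = acc
    return levels
```

The adaptive termination in the loop above is what makes the number of cascade levels a property learned from the data rather than a fixed architectural choice, which is the sense in which gcForest has fewer hyperparameters than a DNN of fixed depth.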

