Knowledge Distillation of Large Language Models

MINILLM is a new knowledge distillation method for extracting knowledge from large language models (LLMs) to train smaller models. It optimizes a reverse Kullback-Leibler divergence (KLD) objective, which keeps the student model from overestimating the low-probability regions of the teacher distribution. MINILLM further improves training with policy gradient optimization, teacher-mixed sampling, and length normalization, and it applies to models of different sizes. Experiments show that MINILLM outperforms standard KD methods on instruction-following tasks, with lower exposure bias, better calibration, and stronger long-text generation performance.


This article, part of the large-model series, is a translation of "Knowledge Distillation of Large Language Models".

Abstract

Knowledge distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are mainly applied to white-box classification models or to training small models to imitate black-box model APIs such as ChatGPT. How to effectively distill knowledge from white-box generative LLMs remains under-explored, and it becomes increasingly important as LLMs flourish. In this work, we propose MINILLM, which distills smaller language models from generative larger language models. We first replace the forward Kullback-Leibler divergence (KLD) objective in standard KD approaches with reverse KLD, which is more suitable for KD on generative language models, to prevent the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. Extensive experiments in the instruction-following setting show that MINILLM models generate more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance. Our method is also applicable to different model families with 120M to 13B parameters. We will release our code and model checkpoints at https://aka.ms/MiniLLM.
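To make the contrast between the two objectives concrete, below is a minimal numerical sketch, not the paper's implementation, of forward KLD, KL(p‖q), as used in standard KD, versus reverse KLD, KL(q‖p), as used by MINILLM. Here `p` is a toy teacher distribution and `q` a student distribution over a small vocabulary; all names and numbers are illustrative only.

```python
import numpy as np

def forward_kl(p, q, eps=1e-12):
    # KL(p || q): the standard KD direction; large when the student q misses teacher mass
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def reverse_kl(p, q, eps=1e-12):
    # KL(q || p): the reverse direction; large when q puts mass where the teacher p is tiny
    return float(np.sum(q * (np.log(q + eps) - np.log(p + eps))))

# Toy teacher distribution over five tokens, with a low-probability tail
p = np.array([0.55, 0.40, 0.04, 0.005, 0.005])
# Student A spreads probability onto the teacher's near-zero tokens
q_spread = np.array([0.35, 0.30, 0.15, 0.10, 0.10])
# Student B concentrates on the teacher's high-probability tokens
q_peaked = np.array([0.60, 0.36, 0.03, 0.005, 0.005])

print("forward:", forward_kl(p, q_spread), forward_kl(p, q_peaked))
print("reverse:", reverse_kl(p, q_spread), reverse_kl(p, q_peaked))
```

In this toy example, the spread-out student is penalized noticeably more under the reverse direction than under the forward one, because the q·log(q/p) terms grow wherever q places mass on tokens to which the teacher assigns almost no probability. This is the behavior the reverse KLD objective relies on to keep the student from overestimating the teacher's low-probability regions.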

1 Introduction

With large language models (LLMs;

### Distilled BiLSTM Model Architecture and Implementation

In natural language processing (NLP), a Distilled BiLSTM is a Bidirectional Long Short-Term Memory network that has been distilled, i.e. compressed, from a larger model into a more efficient structure while retaining most of its performance[^1]. The goal is to apply knowledge distillation, in which a smaller student model learns from the predictions of a large teacher model.

#### Key Components of Distilled BiLSTM

The architecture typically consists of:

- **Bidirectional LSTM layers**: These layers process the input sequence both forward and backward, capturing dependencies in either direction.
- **Knowledge distillation mechanism**: The compact student is trained on soft targets produced by a pre-trained teacher network rather than on hard labels alone. Soft targets carry information about class similarities, which improves generalization (a minimal loss sketch follows the model code below).

#### Implementation Details

To implement such an architecture for NLP tasks such as semantic parsing or other compositionally challenging benchmarks, one can use any framework that supports recurrent neural networks together with a custom loss function for the distillation step. A possible structure using the TensorFlow/Keras API:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

def build_distilled_bilstm(vocab_size, embedding_dim, hidden_units, num_classes):
    """Build a compact BiLSTM classifier to serve as the student model."""
    model = Sequential([
        Embedding(input_dim=vocab_size, output_dim=embedding_dim),
        Bidirectional(LSTM(units=hidden_units)),
        Dense(hidden_units, activation='relu'),
        Dense(num_classes)  # output logits; num_classes is passed in rather than read from a global
    ])
    return model
```

This snippet builds a basic BiLSTM classifier for text inputs; in a distillation setup it plays the role of the student network.
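As a complement to the model definition above, here is a minimal sketch of a distillation loss in TensorFlow, assuming the teacher's logits are available for each batch. The function name `distillation_loss` and the `temperature` and `alpha` parameters are illustrative choices, not part of any specific library.

```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, labels, temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term with standard hard-label cross-entropy."""
    # Soften both distributions with the temperature
    soft_teacher = tf.nn.softmax(teacher_logits / temperature)
    soft_student = tf.nn.log_softmax(student_logits / temperature)
    # KL(teacher || student) on the softened distributions, scaled by T^2 as in Hinton et al.
    kd_term = tf.reduce_mean(
        tf.reduce_sum(soft_teacher * (tf.math.log(soft_teacher + 1e-12) - soft_student), axis=-1)
    ) * temperature ** 2
    # Cross-entropy against the ground-truth labels
    ce_term = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(labels, student_logits, from_logits=True)
    )
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

The weighted sum lets the student balance matching the teacher's softened distribution against fitting the hard labels; `temperature` controls how much of the teacher's class-similarity information is exposed to the student.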