[Hands-On transformer] Source Code Reading: sparse_embedding

`sparse_embedding` is a method for computing sparse embeddings: it takes the model's hidden states and the input token IDs, produces a per-token weight, and builds a vocabulary-sized sparse vector from those weights. It also zeroes out unused (special) tokens so that they cannot affect the model's subsequent computation.
The following is the PyTorch implementation of this method:

    def sparse_embedding(self, hidden_state, input_ids, return_embedding: bool = True):
        # project each token's hidden state to a non-negative scalar weight
        token_weights = torch.relu(self.sparse_linear(hidden_state))
        if not return_embedding: return token_weights

        # scatter each token's weight into a [batch, seq_len, vocab_size] tensor
        sparse_embedding = torch.zeros(input_ids.size(0), input_ids.size(1), self.vocab_size,
                                       dtype=token_weights.dtype,
                                       device=token_weights.device)
        sparse_embedding = torch.scatter(sparse_embedding, dim=-1, index=input_ids.unsqueeze(-1), src=token_weights)
        # max-pool over the sequence, then zero out special tokens so they never contribute
        unused_tokens = [self.tokenizer.cls_token_id, self.tokenizer.eos_token_id, self.tokenizer.pad_token_id, self.tokenizer.unk_token_id]
        sparse_embedding = torch.max(sparse_embedding, dim=1).values
        sparse_embedding[:, unused_tokens] *= 0.
        return sparse_embedding
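
To make the tensor shapes concrete, here is a minimal, self-contained sketch of the same scatter-and-max mechanism on toy data. `SparseHead`, the toy `hidden_size`/`vocab_size`, and the hard-coded `unused_token_ids` are illustrative assumptions for this sketch only, not part of the original model:

    import torch
    import torch.nn as nn

    class SparseHead(nn.Module):
        """Toy sketch of the sparse-embedding head (assumed sizes, not the real model)."""
        def __init__(self, hidden_size=8, vocab_size=32, unused_token_ids=(0, 1, 2)):
            super().__init__()
            self.sparse_linear = nn.Linear(hidden_size, 1)    # one weight per token
            self.vocab_size = vocab_size
            self.unused_token_ids = list(unused_token_ids)    # stand-in for cls/eos/pad/unk ids

        def forward(self, hidden_state, input_ids):
            token_weights = torch.relu(self.sparse_linear(hidden_state))            # [B, L, 1]
            emb = torch.zeros(input_ids.size(0), input_ids.size(1), self.vocab_size,
                              dtype=token_weights.dtype, device=token_weights.device)
            emb = torch.scatter(emb, dim=-1, index=input_ids.unsqueeze(-1), src=token_weights)
            emb = torch.max(emb, dim=1).values                                      # max over sequence
            emb[:, self.unused_token_ids] *= 0.                                     # mask special tokens
            return emb                                                              # [B, vocab_size]

    if __name__ == "__main__":
        head = SparseHead()
        hidden = torch.randn(2, 5, 8)              # toy hidden states: batch=2, seq_len=5, hidden=8
        ids = torch.randint(0, 32, (2, 5))         # toy token ids
        print(head(hidden, ids).shape)             # torch.Size([2, 32])

Each document thus ends up as a single vocabulary-length vector in which only the IDs that actually appear in the input carry a (max-pooled) non-zero weight, which is what makes the representation sparse.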