BERT series, part 1: embeddings with BERT

BERT is best thought of as an effective feature extractor: it is pre-trained on the Transformer encoder and is particularly well suited to producing sentence embeddings. By prepending the special [CLS] token, BERT can capture the meaning of the whole sentence, and after 12 encoder layers it outputs a 768-dimensional embedding for every token. These embeddings can be used for classification, building sentence vectors, computing sentence similarity, and more. The key is understanding the output vectors of the different layers and combining them effectively.

Understanding BERT

In one sentence: BERT is an extractor. Feed it a sentence (a sequence of tokens) and it outputs a sequence of extracted embeddings.
Put even more simply, it is just an encoder.
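As a quick taste of this "sentence in, embedding sequence out" behaviour, here is a minimal sketch using the transformers feature-extraction pipeline; the pipeline and the bert-base-uncased checkpoint are just one convenient choice, and the full walkthrough appears in the notebook later in this post.

from transformers import pipeline

# 'feature-extraction' returns the raw hidden states instead of a task-specific prediction
extractor = pipeline('feature-extraction', model='bert-base-uncased')
features = extractor('hello world bert!')
# features is a nested list of shape (1, num_tokens, 768): one vector per token
print(len(features[0]), len(features[0][0]))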

How it works, briefly

We can pre-train a language model with the Transformer. Since the Transformer is an encoder-decoder architecture and a language model only needs the encoder part, BERT uses the Transformer encoder for pre-training.

So what is the Transformer? It is a newer architecture: in terms of how sequence models have evolved, the progression is roughly CNN, RNN, and then the Transformer, which is built on the attention mechanism.
The Transformer consists of an encoder and a decoder, both based on attention, as shown below.

[Figure: Transformer encoder-decoder architecture]

What is the attention mechanism? The figure below gives the basic intuition; we will implement it by hand in a separate post.
[Figure: the attention mechanism]
The core idea of attention is that the meaning of the current word can only be understood properly by taking its context into account.
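To make that concrete, here is a minimal sketch of scaled dot-product attention, the building block the Transformer uses; the function name, shapes and toy inputs are illustrative only.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # how similar each token is to every other token
    weights = F.softmax(scores, dim=-1)            # how much each token attends to its context
    return weights @ v                             # context-aware representation of each token

q = k = v = torch.randn(1, 5, 64)  # toy example: 5 tokens, 64-dimensional vectors
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 5, 64])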

Encoder input and output

[Figure: BERT encoder input and output]

  • The input is prepended with the special [CLS] token, which represents the meaning of the whole sentence and can be used for classification.

  • The input tokens (help, prince, mayuko, ...) occupy up to 512 positions, which is the maximum sequence length after truncation.

  • The sequence then passes through 12 encoder layers.

  • The output is a sequence of embeddings, one per token, with each token mapped to a 768-dimensional vector. This should be easy to picture; a minimal sketch follows this list.
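Here is a small sketch of those points, assuming bert-base-uncased; the exact tokenizer keyword arguments can differ slightly between transformers versions.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# encode() prepends [CLS], appends [SEP], and truncates to at most 512 tokens
input_ids = tokenizer.encode('help prince mayuko', max_length=512, truncation=True, return_tensors='pt')
print(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()))  # starts with [CLS], ends with [SEP]

with torch.no_grad():
    out = model(input_ids)
print(out[0].shape)  # (1, seq_len, 768): one 768-dim vector per token after the 12 encoder layers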

The output

The model call out = bert(xx) returns a tuple; the transformers docstring describes its contents as follows:

 Return:
        :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
        **last_hidden_state** (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        **pooler_output** (:obj:`torch.FloatTensor`: of shape :obj:`(batch_size, hidden_size)`):
            Last layer hidden-state of the first token of the sequence (classification token)
            further processed by a Linear layer and a Tanh activation function. The Linear
            layer weights are trained from the next sentence prediction (classification)
            objective during pre-training.

            This output is usually *not* a good summary
            of the semantic content of the input, you're often better with averaging or pooling
            the sequence of hidden-states for the whole input sequence.
        **hidden_states** (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``config.output_hidden_states=True``):
            Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
            of shape :obj:`(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        **attentions** (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``config.output_attentions=True``):
            Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
            :obj:`(batch_size, num_heads, sequence_length, sequence_length)`.

            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.

Applications

Once we have an embedding vector for every token in the sequence, we can do token classification, build sentence vectors, classify sentences, compare sentence similarity, and so on.
[Figure: downstream tasks built on BERT embeddings]
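For example, sentence similarity can be built directly on top of these embeddings. The sketch below mean-pools the last hidden state of each sentence and compares the two vectors with cosine similarity; bert-base-uncased and the example sentences are only illustrative.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def sentence_embedding(text):
    ids = torch.tensor([tokenizer.encode(text)])   # (1, seq_len) including [CLS]/[SEP]
    with torch.no_grad():
        last_hidden = model(ids)[0]                # (1, seq_len, 768)
    return last_hidden.mean(dim=1).squeeze(0)      # mean-pool over tokens -> (768,)

a = sentence_embedding('the cat sat on the mat')
b = sentence_embedding('a kitten is sitting on the rug')
print(torch.cosine_similarity(a, b, dim=0).item())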

Code (notebook)

#%% md

# bert

#%%

!pip install transformers

#%%

import torch
from transformers import BertModel, BertTokenizer

#%%

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

#%%

input_ids = tokenizer.encode('hello world bert!')  # encode() adds the special [CLS] and [SEP] tokens by default
input_ids

#%%

type(input_ids)

#%%

ids = torch.LongTensor(input_ids)
ids

#%%

text = tokenizer.convert_ids_to_tokens(input_ids)
text

#%%

model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
# Set the device to GPU (cuda) if available, otherwise stick with CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = model.to(device)
ids = ids.to(device)

model.eval()
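# eval() disables dropout; for pure feature extraction you could also wrap the
# forward pass below in torch.no_grad() to avoid tracking gradients.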

#%%

print(ids.size())
# unsqueeze IDs to get batch size of 1 as added dimension
granola_ids = ids.unsqueeze(0)
print(granola_ids.size())


#%% md

In the cell above, an additional argument was passed when the model was initialised: output_hidden_states=True asks for extra output. By default, a BertModel returns a tuple, but the contents of that tuple differ depending on the configuration of the model. With output_hidden_states=True, the tuple contains (in order; shape in brackets):

1. the last hidden state (batch_size, sequence_length, hidden_size)
2. the pooler_output of the classification token (batch_size, hidden_size)
3. the hidden_states of the model at each layer plus the initial embedding outputs (batch_size, sequence_length, hidden_size)

#%%

out = model(input_ids=granola_ids) # tuple

hidden_states = out[2]
print("last hidden state:",out[0].shape) #torch.Size([1, 6, 768])
print("pooler_output of classification token:",out[1].shape)#[1,768] cls
print("all hidden_states:", len(out[2]))

#%%

for i, each_layer in enumerate(hidden_states):
    print('layer=',i, each_layer)

#%%

sentence_embedding = torch.mean(hidden_states[-1], dim=1).squeeze()  # mean-pool the last layer over the token dimension
print(sentence_embedding)
print(sentence_embedding.size())

#%%


# get last four layers
last_four_layers = [hidden_states[i] for i in (-1, -2, -3, -4)]
# cast layers to a tuple and concatenate over the last dimension
cat_hidden_states = torch.cat(tuple(last_four_layers), dim=-1)
print(cat_hidden_states.size())
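# 4 layers x 768 dims each, so the concatenated tensor is (1, seq_len, 3072)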

# take the mean of the concatenated vector over the token dimension
cat_sentence_embedding = torch.mean(cat_hidden_states, dim=1).squeeze()
print(cat_sentence_embedding)
print(cat_sentence_embedding.size())



Different ways of combining the layer embeddings lead to different results; see the comparison below.
[Figure: results of different layer-combination strategies]
Concatenating the last four layers gave the best result. A sketch of one alternative combination follows.
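Besides concatenating the last four layers (as in the notebook above), another common combination is summing them. A short sketch, reusing the hidden_states variable from the notebook cells above:

# sum the last four layers element-wise instead of concatenating them
summed_last_four = torch.stack(hidden_states[-4:], dim=0).sum(dim=0)  # (1, seq_len, 768)
sum_sentence_embedding = summed_last_four.mean(dim=1).squeeze()       # (768,)
print(sum_sentence_embedding.size())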

Summary

  1. Different layers encode different kinds of features; the layer-combination experiments above illustrate this.
  2. BERT is, at its core, a feature extractor.
  3. How you use the vectors from the different hidden layers is the key.
  4. Study the two key figures and the sample code in this post carefully; the rest comes with practice.

References

  1. https://github.com/huggingface/transformers/issues/2986
  2. https://github.com/BramVanroy/bert-for-inference/blob/master/introduction-to-bert.ipynb
  3. https://www.cnblogs.com/gczr/p/11785930.html
  4. https://blog.youkuaiyun.com/longxinchen_ml/article/details/86533005
