Personal Notes on the Paper: Causal Attention for Vision-Language Tasks


In conventional vision-language tasks, when the dataset has a long-tail distribution, the attention mechanism focuses mostly on the head of the distribution, so questions about long-tail content tend to get wrong answers.

Notation:
- C: commonsense (the confounder)
- X: input features
- Z: the information that attention learns from X
- M: the set of entities extracted from X by object detection
- Y: the output label

Causal graph (figure omitted): nodes C, X, M, Z, Y, with edges C→X, C→M, X→M, X→Z, M→Y, Z→Y, explained below.

C→M and C→X: the feature extraction of X is guided by commonsense, and M is extracted from X under that commonsense prior.
X→M and X→Z: M is the entity set extracted from X (under the commonsense prior), and Z is the information that attention learns from X.
M→Y and Z→Y: in a vision-language task, Y is obtained by classifying according to Z and then selecting among the entities in M.

Core idea: apply the front-door adjustment formula
$$P(Y|do(X))=\sum_{z \in Z}P(Z=z|X)\sum_{x \in X}P(X=x)P(Y|Z=z,X=x)$$
This simultaneously applies the law of total probability:
$$P(Y|X)=\sum_{z \in Z}P(Z=z|X)P(Y|Z=z)$$
and the back-door adjustment formula:
$$P(Y|do(Z))=\sum_{x \in X}P(X=x)P(Y|Z,X=x)$$
Here $\sum_{z}P(Z=z|X)$ can be approximated with In-Sample Sampling, i.e., gathering information from the current sample, while $\sum_{x}P(X=x)$ can be approximated with Cross-Sample Sampling; the $x$ there is not the same $x$ as in In-Sample Sampling, so the information is collected across samples.
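As a sanity check on the front-door formula itself (my own toy example, not from the paper), the sketch below evaluates $P(Y|do(X))$ by brute force on hypothetical discrete tables for $P(Z|X)$, $P(X)$ and $P(Y|Z,X)$:

```python
import numpy as np

# Hypothetical toy tables over binary X, Z, Y (illustration only, not from the paper).
P_X = np.array([0.7, 0.3])                  # P(X = x)
P_Z_given_X = np.array([[0.9, 0.1],         # P(Z = z | X = x), row index is x
                        [0.2, 0.8]])
P_Y_given_ZX = np.array([[[0.8, 0.2],       # P(Y = y | Z = z, X = x), index order [z][x][y]
                          [0.6, 0.4]],
                         [[0.3, 0.7],
                          [0.1, 0.9]]])

def p_y_do_x(x):
    """Front-door adjustment:
    P(Y | do(X=x)) = sum_z P(z | x) * sum_x' P(x') * P(Y | z, x')."""
    result = np.zeros(2)
    for z in range(2):
        inner = sum(P_X[xp] * P_Y_given_ZX[z, xp] for xp in range(2))  # back-door part
        result += P_Z_given_X[x, z] * inner                            # total-probability part
    return result

print(p_y_do_x(0), p_y_do_x(1))   # each is a valid distribution over Y
```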
Once the information about Z and X has been sampled, a neural network $g$ is used to compute
$$P(Y|Z,X=x)=\mathrm{Softmax}(g(X,Z))$$
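As a minimal sketch (my own illustration; the module name, dimensions, and fusion choice are assumptions, not the paper's architecture), $g$ can be read as any network that fuses the two representations and outputs class logits:

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Hypothetical g(X, Z): concatenate the two feature vectors,
    map them to logits, and apply Softmax to get P(Y | Z, X)."""
    def __init__(self, dim: int = 512, num_classes: int = 1000):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, num_classes),
        )

    def forward(self, x_feat: torch.Tensor, z_feat: torch.Tensor) -> torch.Tensor:
        logits = self.mlp(torch.cat([x_feat, z_feat], dim=-1))  # g(X, Z)
        return logits.softmax(dim=-1)                           # P(Y | Z, X = x)
```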
NWGM (Normalized Weighted Geometric Mean) is then applied to fold the two sampling steps back in, turning the sampling process into an adjustment of the embeddings.
Because $\mathbb{E}_x[y(x)]=\sum_x y(x)P(x)$ while $WGM(y(x))=\prod_x y(x)^{P(x)}$, the former being a weighted arithmetic mean and the latter a weighted geometric mean, and the two are quite close when there are many samples of X, we can write
$$\mathbb{E}_x[y(x)] \approx WGM(y(x))$$
Under the premise that $y(x)=e^{g(x)}$, we have:
$$\begin{aligned}
WGM(y(x))&=\prod_x y(x)^{P(x)}\\
&=\prod_x \left(e^{g(x)}\right)^{P(x)}\\
&=\prod_x e^{g(x)P(x)}\\
&=e^{\sum_x g(x)P(x)}\\
&=e^{\mathbb{E}_x[g(x)]}
\end{aligned}$$
Therefore $\mathbb{E}_x[y(x)] \approx WGM(y(x))=e^{\mathbb{E}_x[g(x)]}$.
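A quick numeric illustration of this identity (my own sketch, not from the paper): with $y(x)=e^{g(x)}$, the weighted geometric mean equals $e^{\mathbb{E}_x[g(x)]}$ exactly, and the arithmetic mean stays close when the $g(x)$ values are concentrated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(0.0, 0.1, size=n)        # hypothetical scores g(x), tightly concentrated
p = np.full(n, 1.0 / n)                 # P(x), here uniform
y = np.exp(g)                           # y(x) = e^{g(x)}

arithmetic_mean = np.sum(p * y)         # E_x[y(x)]
wgm = np.prod(y ** p)                   # WGM(y(x)) = prod_x y(x)^{P(x)}
exp_of_mean = np.exp(np.sum(p * g))     # e^{E_x[g(x)]}

print(arithmetic_mean, wgm, exp_of_mean)  # wgm == exp_of_mean (up to float error);
                                          # the arithmetic mean is close but not identical
```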
Substituting this into $P(Y|do(X))$:
$$\begin{aligned}
P(Y|do(X))&=\sum_{z \in Z}P(Z=z|X)\sum_{x \in X}P(X=x)P(Y|Z=z,X=x)\\
&=\mathbb{E}_{[Z|X]}\mathbb{E}_{[X]}[P(Y|Z,X)]\\
&\approx WGM(P(Y|Z,X))\\
&\approx e^{g(\mathbb{E}_{[Z|X]}[Z],\,\mathbb{E}_{[X]}[X])}\\
&\approx \mathrm{Softmax}\big[g(\hat{X},\hat{Z})\big]
\end{aligned}$$
Earlier, the neural network gives $P(Y|Z,X=x)=\mathrm{Softmax}(g(X,Z)) \approx e^{g(X,Z)}$; this does not strictly satisfy the premise $P(Y|Z,X=x)=e^{g(X,Z)}$, so a further approximation is still needed after applying the WGM.
The final Softmax is applied so that the output probabilities sum to 1.

where:
$$\hat{Z}=\sum_{z \in Z}P(Z=z|h(X))\,z \approx V_I\,\mathrm{Softmax}(Q_I^{\top} K_I)$$
$$\hat{X}=\sum_{x \in X}P(X=x|f(X))\,x \approx V_C\,\mathrm{Softmax}(Q_C^{\top} K_C)$$
CATT can be placed in front of the deep stacks of BERT or other Transformer-based models, so it is easy to plug into existing architectures.
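Below is a minimal PyTorch sketch of how $\hat{Z}$ (In-Sample attention) and $\hat{X}$ (Cross-Sample attention) could be estimated. It is my own simplification, not the authors' released code: the learnable global dictionary is a rough stand-in for how the paper gathers cross-sample information, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class CausalAttentionSketch(nn.Module):
    """Simplified CATT-style block: IS-ATT estimates Z_hat from the current
    sample's features, CS-ATT estimates X_hat from a learnable global
    dictionary that plays the role of "other samples"."""
    def __init__(self, dim: int = 512, dict_size: int = 100):
        super().__init__()
        self.q_in = nn.Linear(dim, dim)   # query/key projections for In-Sample attention
        self.k_in = nn.Linear(dim, dim)
        self.q_cs = nn.Linear(dim, dim)   # query/key projections for Cross-Sample attention
        self.k_cs = nn.Linear(dim, dim)
        self.global_dict = nn.Parameter(torch.randn(dict_size, dim))

    def forward(self, x, query):
        # x: (B, N, dim) current-sample features; query: (B, 1, dim)
        scale = x.size(-1) ** 0.5
        # IS-ATT: Z_hat = sum_z P(z | h(X)) * z over the current sample's features.
        attn_in = torch.softmax(self.q_in(query) @ self.k_in(x).transpose(-2, -1) / scale, dim=-1)
        z_hat = attn_in @ x
        # CS-ATT: X_hat = sum_x P(x | f(X)) * x over the global dictionary.
        d = self.global_dict.unsqueeze(0).expand(x.size(0), -1, -1)
        attn_cs = torch.softmax(self.q_cs(query) @ self.k_cs(d).transpose(-2, -1) / scale, dim=-1)
        x_hat = attn_cs @ d
        return z_hat, x_hat
```

The two estimates can then be fed into the classifier $g$ (e.g. the earlier sketch) to produce $\mathrm{Softmax}(g(\hat{X},\hat{Z}))$.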
