[IEEE TMI 2024] A Multi-Graph Cross-Attention-Based Region-Aware Feature Fusion Network Using Multi-Template for Brain Disorder Diagnosis

Paper: A Multi-Graph Cross-Attention-Based Region-Aware Feature Fusion Network Using Multi-Template for Brain Disorder Diagnosis | IEEE Journals & Magazine | IEEE Xplore

Code: https://github.com/mylbuaa/MGCA-RAFFNet

The English here is all hand-typed! It summarizes and paraphrases the original paper. Unavoidable spelling and grammar mistakes may appear; if you spot any, feel free to point them out in the comments. This post leans toward personal notes, so read with caution.

Contents

1. Thoughts

2. Section-by-Section Close Reading

2.1. Abstract

2.2. Introduction

2.3. Methods

2.3.1. Multi-Template Parcellation

2.3.2. Multi-Graph Cross-Attention Network

2.3.3. Region-Aware Feature Fusion Network

2.4. Experiments and Results

2.4.1. Materials and Preprocessing

2.4.2. Experimental Settings and Evaluation

2.4.3. Overall Performance

2.5. Discussion

2.5.1. Ablation Study

2.5.2. Computational Complexity

2.5.3. Comparison With the State-of-the-Art Methods

2.5.4. Impact of the Hyperparameters of MGCAN

2.5.5. Impact of Hyperparameters Setting

2.5.6. Most Discriminative Brain Regions

2.5.7. Advantages and Limitations

2.6. Conclusion

1. Thoughts

(1) How has the campus network managed to stay down for days in a row?

(2) This author's mathematical notation can only be described as demonic; it has a chaotic-evil feel to it

(3) All multi-template fMRI papers look so alike...

2. Section-by-Section Close Reading

2.1. Abstract

        ①Limitations in current works: single template

2.2. Introduction

        ① Ha ha, brain disorders "with devastating effects not only on the individual but also the society" — how cute

        ② Existing template-fusion works fail to capture the deep relationships between templates

2.3. Methods

        ①Overall framework:

2.3.1. Multi-Template Parcellation

        ① Atlas choice: AAL (anatomical), BN (anatomical and connectivity) and Power (hierarchical)

2.3.2. Multi-Graph Cross-Attention Network

(1)MGCAN

        ○The schematic of MGCAN:

        ①\boldsymbol{X}^{\boldsymbol{z}}\in\mathbb{R}^{N_z\times T}\left(z=1,2,\cdots,Z\right) denotes the time-series matrix of template z, where N_z is the number of ROIs and T is the number of time points

        ②x_n^{z,m}=[x_n^{z,m}(1),x_n^{z,m}(2),\cdots,x_n^{z,m}(T)]^{\prime} denotes the time series of the n-th ROI of the m-th subject, and ^{\prime} denotes the transpose operation (the notation here is quite messy...)

        ③x_n^{z,m} is modeled as a linear combination of the current and time-delayed series of the other ROIs:

x_n^{z,m}(t)=\sum_{q=0}^QX_n^{z,m}(t-q)\theta_n^{z,m}(q)+e_n^{z,m}(t)

where Q is the order of the time delay; it is set to 1 for a fair comparison

with

X_{n}^{z,m}(t-q)=[\mathbf{x}_{1}^{z,m}(t-q),\cdots,\mathbf{x}_{n-1}^{z,m}(t-q),\mathbf{x}_{n+1}^{z,m}(t-q),\cdots,\mathbf{x}_{N_{z}}^{z,m}(t-q)]\in\mathbb{R}^{T\times(N_{z}-1)}

\theta_{n}^{z,m}(q)=[\theta_{1,n}^{z,m}(q),\cdots,\theta_{n-1,n}^{z,m}(q),\theta_{n+1,n}^{z,m}(q),\cdots,\theta_{N_{z},n}^{z,m}(q)]^{\prime}\in\mathbb{R}^{(N_{z}-1)\times1}

and e_n^{z,m}(t) is a residual term

        ④The equation can then be written compactly as:

x_n^{z,m}(t)=X_n^{z,m}\theta_n^{z,m}+e_n^{z,m}(t)

where \boldsymbol{X}_{n}^{z,m}=[\boldsymbol{X}_{n}^{z,m}(t),\boldsymbol{X}_{n}^{z,m}(t-1),\cdots,\boldsymbol{X}_{n}^{z,m}(t-Q)]\in\mathbb{R}^{T\times((N_{z}-1)\times(Q+1))} and \theta_{n}^{z,m}=[\theta_{n}^{z,m}(0)^{\prime},\theta_{n}^{z,m}(1)^{\prime},\cdots,\theta_{n}^{z,m}(Q)^{\prime}]^{\prime}\in\mathbb{R}^{((N_{z}-1)\times(Q+1))\times1} (with a formula this long, it feels like a fully connected graph neural network, except without self-loops: each node simply pulls in the features of all other nodes from the previous time step, or from the previous two when Q=2)
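As a minimal NumPy sketch (toy sizes and the helper name are my assumptions, not the paper's code), the lagged design matrix \boldsymbol{X}_n^{z,m} can be built from the other ROIs' series and \theta_n^{z,m} fitted for one subject by least squares:

```python
import numpy as np

def build_lagged_design(X, n, Q=1):
    """Stack the other ROIs' series delayed by q = 0, 1, ..., Q into the
    (T, (N-1)*(Q+1)) design matrix of the equation above.
    Lagged columns are zero-padded at the start."""
    N, T = X.shape
    others = np.delete(X, n, axis=0)      # drop the target ROI: (N-1, T)
    blocks = []
    for q in range(Q + 1):
        shifted = np.zeros_like(others)
        shifted[:, q:] = others[:, :T - q]
        blocks.append(shifted.T)          # each block: (T, N-1)
    return np.hstack(blocks)

# toy single-subject fit: 5 ROIs, 100 time points, Q = 1 as in the paper
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))
D = build_lagged_design(X, n=2, Q=1)      # (100, 8)
theta, *_ = np.linalg.lstsq(D, X[2], rcond=None)
```

Without the group-sparsity penalty of the next step, this is just an ordinary per-subject least-squares fit.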

        ⑤Removing spurious connections:

\boldsymbol{\Theta}_n^z=argmin\sum_{m=1}^M||\boldsymbol{x}_n^{z,m}(t)-\boldsymbol{X}_n^{z,m}\boldsymbol{\theta}_n^{z,m}||_2^2+\lambda||\boldsymbol{\Theta}_n^z||_{2,1}

where \boldsymbol{\Theta}_n^z=[\boldsymbol{\theta}_n^{z,1},\boldsymbol{\theta}_n^{z,2},\cdots,\boldsymbol{\theta}_n^{z,M}]\in\mathbb{R}^{((N_z-1)\times(Q+1))\times M} stacks the coefficient vectors of all M subjects; the \ell_{2,1} norm imposes row-wise group sparsity, so spurious connections are pruned consistently across subjects
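The \ell_{2,1}-regularized problem above is a group lasso over the rows of \boldsymbol{\Theta}_n^z. One standard way to solve it (my choice; the paper does not specify the solver) is proximal gradient descent, where the proximal operator of the \ell_{2,1} norm is row-wise soft thresholding:

```python
import numpy as np

def prox_l21(Theta, tau):
    """Row-wise soft thresholding: the proximal operator of tau * ||.||_{2,1}."""
    norms = np.linalg.norm(Theta, axis=1, keepdims=True)
    return Theta * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def group_lasso(Ds, ys, lam, n_iter=200):
    """Proximal gradient for
    min_Theta  sum_m ||y_m - D_m theta_m||_2^2 + lam * ||Theta||_{2,1},
    where column m of Theta is subject m's coefficient vector."""
    P, M = Ds[0].shape[1], len(Ds)
    Theta = np.zeros((P, M))
    # step size 1/L from the Lipschitz constant of the smooth part
    L = max(2.0 * np.linalg.norm(D, 2) ** 2 for D in Ds)
    for _ in range(n_iter):
        grad = np.stack([2.0 * D.T @ (D @ Theta[:, m] - y)
                         for m, (D, y) in enumerate(zip(Ds, ys))], axis=1)
        Theta = prox_l21(Theta - grad / L, lam / L)
    return Theta

# toy problem: M = 3 subjects, P = 6 candidate connections each
rng = np.random.default_rng(0)
Ds = [rng.standard_normal((30, 6)) for _ in range(3)]
ys = [rng.standard_normal(30) for _ in range(3)]
Theta = group_lasso(Ds, ys, lam=5.0)      # (6, 3); whole rows may be zeroed
```

Larger lam zeroes out more rows, i.e. removes more candidate connections for every subject at once.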

        ⑥For the time series matrix X^z\in\mathbb{R}^{N_z\times T} and its corresponding static adjacency matrix A_{sf}^z\in\mathbb{R}^{N_z\times N_z}, the GCN layer is:

H_{sf}^{z}=\sigma((D_{sf}^{z})^{-1}A_{sf}^{z}\sigma(X^{z}W_{s1}^{z})W_{s2}^{z})

where D_{sf}^{z} denotes the degree matrix of A_{sf}^{z}, \boldsymbol{W}_{s1}^z\in\mathbb{R}^{T\times P} and \boldsymbol{W}_{s2}^z\in\mathbb{R}^{P\times B} are weight matrices, and \sigma is the ELU activation (ah, so a modified GCN?)
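A direct NumPy transcription of this two-weight layer (toy shapes are assumptions):

```python
import numpy as np

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def gcn_static(X, A, W1, W2):
    """The paper's (modified) GCN layer: H = ELU(D^{-1} A ELU(X W1) W2),
    with D the degree matrix of A, i.e. row-normalised propagation."""
    D_inv = np.diag(1.0 / np.maximum(A.sum(axis=1), 1e-12))
    return elu(D_inv @ A @ elu(X @ W1) @ W2)

# toy shapes: N_z = 6 ROIs, T = 50, P = 16, B = 8 (all assumed)
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 50))
A = np.abs(rng.standard_normal((6, 6)))   # toy non-negative adjacency A_sf^z
H_sf = gcn_static(X, A,
                  rng.standard_normal((50, 16)),
                  rng.standard_normal((16, 8)))
```

Note the two stacked weight matrices inside a single propagation step, which is what makes this differ from a vanilla Kipf-Welling GCN layer.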

        ⑦Sliding windows are used to obtain h dynamic segments of length \frac{T}{h}. A temporal convolution is then applied to each window, and so on; see the figure, which makes that part quite clear. The final output is:

H_{df}^z=concat(H_{df}^{z,1},H_{df}^{z,2},\cdots,H_{df}^{z,h}),\boldsymbol{H}_{df}^z\in\mathbb{R}^{N_z\times B}
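A sketch of the windowing and the final concatenation; the mean over time below is only a stand-in assumption for the per-window temporal-convolution + GCN block shown in the figure:

```python
import numpy as np

def dynamic_windows(X, h):
    """Split an (N, T) series into h non-overlapping windows of
    length T // h; returns an (h, N, T // h) stack."""
    N, T = X.shape
    w = T // h
    return X[:, :h * w].reshape(N, h, w).transpose(1, 0, 2)

X = np.arange(4 * 12, dtype=float).reshape(4, 12)  # toy: 4 ROIs, T = 12
wins = dynamic_windows(X, h=3)                     # (3, 4, 4)
# per-window feature (toy: mean over time), then concat along features
H_df = np.concatenate([w.mean(axis=1, keepdims=True) for w in wins], axis=1)
```

Each window contributes one feature block per ROI, and concatenating the h blocks gives the (N_z, B)-shaped H_{df}^z of the equation above.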

(2)DVCA

        ①Schematic of dual-view cross-attention (DVCA):

        ②Positional encoding:

\tilde{\boldsymbol{H}}_{sf}^{z}=LN(\boldsymbol{H}_{sf}^z+\boldsymbol{E}_{sf-pos}^z)

\tilde{\boldsymbol{H}}_{df}^{z}=LN(\boldsymbol{H}_{df}^z+\boldsymbol{E}_{df-pos}^z)

where E_{sf-pos}^z,E_{df-pos}^z\in\mathbb{R}^{N_z\times B} denote the absolute positional embeddings of the static and dynamic sequences
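A minimal sketch of this add-position-then-normalize step (the LayerNorm here omits the learnable affine parameters for brevity, an assumption on my part):

```python
import numpy as np

def layer_norm(H, eps=1e-5):
    """Row-wise LayerNorm without the learnable affine parameters."""
    mu = H.mean(axis=-1, keepdims=True)
    var = H.var(axis=-1, keepdims=True)
    return (H - mu) / np.sqrt(var + eps)

# toy: N_z = 6 ROI tokens with B = 8 features (sizes assumed)
rng = np.random.default_rng(0)
H_sf = rng.standard_normal((6, 8))
E_sf_pos = rng.standard_normal((6, 8))     # learnable absolute positions
H_sf_tilde = layer_norm(H_sf + E_sf_pos)   # \tilde{H}_{sf}^z
```

The dynamic branch \tilde{H}_{df}^z is computed the same way with its own embedding E_{df-pos}^z.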

        ③For what follows, see the figure; it is already quite clear

2.3.3. Region-Aware Feature Fusion Network

        ①Schematic of RAFFNet:

For the features extracted with the different brain templates, the ROIs belonging to each of the eight brain regions are split out and concatenated together; the resulting shapes may differ

        ②The schematic of region-aware controller:

        ③The region-aware attention is calculated by:

\boldsymbol{F}_g^{head}=\left(softmax\left(\frac{QK^{\prime}}{\sqrt{d_k}}\right)+A_R\right)\boldsymbol{V}

2.4. Experiments and Results

2.4.1. Materials and Preprocessing

        ①Dataset: ADNI-2 and ABIDE

2.4.2. Experimental Settings and Evaluation

        ①Batch size: 16

        ②Learning rate: 0.001 with Adam optimizer

        ③Cross-validation: 10-fold

2.4.3. Overall Performance

        ①Comparison performance on ADNI-2:

        ②Comparison performance on ABIDE I dataset:

2.5. Discussion

2.5.1. Ablation Study

        ①Module ablation on ADNI 2 and ABIDE I:

        ②t-SNE visualization on (a) AAL only, (b) AAL+BN, (c) AAL+BN+Power:

        ③Region aware weight map ablation:

2.5.2. Computational Complexity

        ①Computational cost:

2.5.3. Comparison With the State-of-the-Art Methods

        ①Comparison table:

2.5.4. Impact of the Hyperparameters of MGCAN

        ①Influence of hyperparameters:

2.5.5. Impact of Hyperparameters Setting

        ①Impact of (a) learning rate, (b) batch size, (c) epoch:

2.5.6. Most Discriminative Brain Regions

        ①Significant brain regions:

        ②Significant brain connectivities:

        ③Table:

2.5.7. Advantages and Limitations

        ①Limitations: robustness

2.6. Conclusion

        ~
