[MIA 2025]Brain networks and intelligence: A graph neural network based approach to resting state fMRI data

Paper link: Brain networks and intelligence: A graph neural network based approach to resting state fMRI data - ScienceDirect

Code: bishalth01/BrainRGIN: Extending from the existing graph convolution networks, our approach incorporates a clustering-based embedding and graph isomorphism network method in the graph convolutional layer to reflect nature of the brain sub-network organization and efficient expression, in combination with TopK pooling and attention-based readout functions.

The English here is entirely hand-typed! It summarizes and paraphrases the original paper, so some unavoidable spelling and grammar mistakes may appear; corrections in the comments are welcome. This post is written as study notes, so read it with that in mind.

Contents

1. Takeaways

2. 论文逐段精读

2.1. Abstract

2.2. Introduction

2.3. Theory and related work

2.3.1. Graph neural networks

2.3.2. Comparison with existing deep learning based architectures for brain network analysis

2.4. Proposed architecture

2.4.1. Functional connectivity graph creation

2.5. Experiments

2.5.1. Experimental setup

2.6. Results

2.6.1. Hyperparameter study

2.6.2. Interpretation of brain regions and networks

2.6.3. Experiment with HCP dataset

2.7. Discussion and conclusion

1. Takeaways

(1) Hard to rate; those who know, know

(2) Same as the 茶屿 tea I drank today. I am roasting both of them together

2. 论文逐段精读

2.1. Abstract

        ①They aim to predict intelligence

2.2. Introduction

        ①Types of intelligence:

fluid intelligence (FI): the ability to reason and solve problems in novel situations
crystallized intelligence (CI): the ability to use knowledge and experience to solve problems
total intelligence (TI): a composite measure of overall cognitive ability

        ②They proposed Brain ROI-aware Graph Isomorphism Networks (BrainRGIN) for cluster-specific learning

2.3. Theory and related work

2.3.1. Graph neural networks

        ①For a graph G=\left ( V,E \right ) with node set V and edge set E, the goal is to learn a global graph representation h_G and predict y_G=g\left(h_G\right)

        ②Introduced traditional GNNs (not for brains)

2.3.2. Comparison with existing deep learning based architectures for brain network analysis

        ①Lists relevant GNN research on fMRI 

2.4. Proposed architecture

        ①Overall framework of BrainRGIN:

(The figure is drawn too simply; TopK was not proposed by them, and neither were SERO or GARO)

2.4.1. Functional connectivity graph creation

        ①"regions selected based on an atlas lead to an FC matrix, while regions generated by independent component analysis (ICA) lead to an FNC matrix"

        ②The adjacency matrix A_{k}\in R^{N\times N} is constructed by Pearson correlation:

A_{i,j}^k=\left\{ \begin{array} {ll}0, & i=j \\ e_{i,j}, & \mathrm{otherwise} \end{array}\right.
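As a concrete illustration, a minimal numpy sketch of this FC graph construction, assuming ROI time series of shape (N regions × T timepoints); the function and variable names are my own, not from the paper's codebase:

```python
import numpy as np

def build_fc_adjacency(timeseries: np.ndarray) -> np.ndarray:
    """Build the FC adjacency matrix A (N x N) from ROI time series (N x T).

    A[i, j] is the Pearson correlation e_ij between regions i and j,
    with the diagonal zeroed out, matching A_ij = 0 if i == j,
    e_ij otherwise.
    """
    A = np.corrcoef(timeseries)   # pairwise Pearson correlations
    np.fill_diagonal(A, 0.0)      # remove self-connections
    return A

# Example: 53 ICA components (as in the ABCD setup) with 100 timepoints
rng = np.random.default_rng(0)
ts = rng.standard_normal((53, 100))
A = build_fc_adjacency(ts)
print(A.shape)  # (53, 53)
```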

(1)RGIN convolution

        ①Message passing in RGIN (combining RGCN and GIN: the aggregation function of RGCN replaces that of GIN):

h_i^{(l)}=\mathrm{MLP}^{(l)}\left(\left(1+\epsilon^{(l)}\right)\cdot W_i^{(l)}\cdot h_i^{(l-1)}+\sum_{j\in N_{(i)}^{(l)}}W_j^{(l)}\cdot e_{i,j}^{(l-1)}\cdot h_j^{(l-1)}\right)

where W and \epsilon are learnable parameters, and W_i contains the positional information from r_i:

\mathbf{W}_i^{(l)}=\theta_2^{(l)}\cdot\mathrm{relu}\left(\theta_1^{(l)}\mathbf{r}_i\right)+\mathbf{b}^{(l)}

where r stores the one-hot positional encoding information

However, the authors note that storing one-hot vectors in r produces a sparse matrix that actually takes a lot of space, so they optimize it:

\alpha_i^{(l)}=\theta_1^{(l)}r_i

where \alpha _i^{(l)}\in R^{K\left(l\right)}; this replaces a matrix multiplied by a learnable parameter with a single directly learnable vector. At the same time, the two-dimensional \theta_2 is split into K(l) cluster basis matrices \beta_u^{(l)}\in R^{d(l+1)\times d(l)}, one per cluster. The weight can then be rewritten as:

W_i^{(l)}=\sum_{u=1}^{K(l)}\alpha_{iu}^{(l)}\beta_u^{(l)}+b^{(l)}

        ②Forward propagation function of RGIN Convolution:

h_{i}^{(l)}=\mathrm{MLP}^{(l)}\left(\left(1+\epsilon^{(l)}\right)\cdot\left(\sum_{u=1}^{K(l)}\alpha_{iu}^{(l)}\beta_{u}^{(l)}+b^{(l)}\right)\cdot h_{i}^{(l-1)}+\sum_{j\in N_{(i)}^{(l)}}\left(\sum_{u=1}^{K(l)}\alpha_{ju}^{(l)}\beta_{u}^{(l)}+b^{(l)}\right)\cdot e_{i,j}^{(l-1)}\cdot h_{j}^{(l-1)}\right)
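A minimal numpy sketch of this forward pass, with the MLP replaced by a plain ReLU and all learnable parameters drawn at random; shapes and names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rgin_layer(H, E, alpha, beta, b, eps, mlp):
    """One RGIN convolution step (sketch of the equation above).

    H:     (N, d)    node features h_i^{(l-1)}
    E:     (N, N)    edge weights e_ij (0 on the diagonal)
    alpha: (N, K)    cluster scores alpha_iu
    beta:  (K, d, d) per-cluster basis matrices beta_u
    b:     (d, d)    shared bias matrix
    eps:   float     the epsilon of GIN
    mlp:   callable  the per-layer MLP
    """
    # W_i = sum_u alpha_iu * beta_u + b: one d x d matrix per node
    W = np.einsum('nk,kij->nij', alpha, beta) + b
    Wh = np.einsum('nij,nj->ni', W, H)   # W_i h_i for every node
    self_term = (1 + eps) * Wh           # (1+eps) * W_i h_i
    neigh_term = E @ Wh                  # sum_j e_ij * (W_j h_j)
    return mlp(self_term + neigh_term)

# Toy usage: 5 nodes, feature dim 4, 2 clusters, ReLU as a stand-in MLP
rng = np.random.default_rng(1)
H = rng.standard_normal((5, 4))
E = rng.uniform(size=(5, 5)); np.fill_diagonal(E, 0.0)
alpha = rng.standard_normal((5, 2))
beta = rng.standard_normal((2, 4, 4))
b = rng.standard_normal((4, 4))
out = rgin_layer(H, E, alpha, beta, b, eps=0.1,
                 mlp=lambda x: np.maximum(x, 0.0))
print(out.shape)  # (5, 4)
```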

(2)Pooling layers

        ①Top k selection:

s^{(l)}=\frac{H^{(l)}w^{(l)}}{\left\|w^{(l)}\right\|_2}

\tilde{s}^{(l)}=\frac{\left(s^{(l)}-\mu\left(s^{(l)}\right)\right)}{\sigma\left(s^{(l)}\right)}

i=\mathrm{topk}\left( \tilde{s}^{(l)},k \right)

H^{\left(l+1\right)}=\left(H^{\left(l\right)}\mathrm{\odot ~sigmoid}{\left(\tilde{s}^{\left(l\right)}\right)}\right)_{i:}

E^{(l+1)}=E_{i,i}^{(l)}
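The four pooling equations above can be sketched in numpy as follows; the names are my own, and the real layer would learn the projection vector w jointly with the network:

```python
import numpy as np

def topk_pool(H, E, w, k):
    """TopK pooling following the equations above (sketch).

    H: (N, d) node features, E: (N, N) edge weights,
    w: (d,) learnable projection vector, k: number of nodes to keep.
    Returns pooled features, the induced edge submatrix, and kept indices.
    """
    s = H @ w / np.linalg.norm(w)          # projection scores s^{(l)}
    s_tilde = (s - s.mean()) / s.std()     # standardized scores
    idx = np.argsort(s_tilde)[::-1][:k]    # indices of the top-k scores
    gate = 1.0 / (1.0 + np.exp(-s_tilde))  # sigmoid gating
    H_new = (H * gate[:, None])[idx]       # gated, then selected rows
    E_new = E[np.ix_(idx, idx)]            # E^{(l+1)} = E^{(l)}_{i,i}
    return H_new, E_new, idx

rng = np.random.default_rng(2)
H = rng.standard_normal((53, 16))
E = rng.uniform(size=(53, 53))
w = rng.standard_normal(16)
H_new, E_new, idx = topk_pool(H, E, w, k=20)  # pooling rate ~0.38
print(H_new.shape, E_new.shape)  # (20, 16) (20, 20)
```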

(3)Readout

        ①Neither GARO nor SERO is their own work, so why re-introduce them in such detail? Readers unfamiliar with them should go to the original STAGIN paper instead

(4)Loss functions

        ①SmoothL1Loss: compared with MSE, SmoothL1Loss is more robust and less sensitive to outliers:

L_{\mathrm{smoothL}1}\left(x,y\right)=\left\{ \begin{array} {ll}0.5\cdot\left(x-y\right)^2, & \mathrm{if}\left|x-y\right|<1 \\ \left|x-y\right|-0.5, & \mathrm{otherwise} \end{array}\right.

        ②Unit loss, constraining the learnable projection vector w to unit norm:

L_{\mathrm{unit}}\left(w\right)=\left\|w\right\|_{2}-1

        ③Top k loss:

L_{\mathrm{TPK}}^{(l)}=-\frac{1}{M}\sum_{m=1}^{M}\frac{1}{N^{(l)}}\left(\sum_{i=1}^{k}\log\left(\hat{s}_{m,i}\right)+\sum_{i=1}^{N^{(l)}-k}\log\left(1-\hat{s}_{m,i+k}\right)\right)

        ④Total loss:

L_{\mathrm{total}}=L_{\mathrm{smoothL}1}+\sum_{l=1}^LL_{\mathrm{unit}}^{(l)}+\lambda_1\sum_{l=1}^LL_{\mathrm{TPK}}^{(l)}
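A numpy sketch of the three loss terms under the same notation; the value of \lambda_1 and the assumption that the scores are sorted descending per subject are mine, not the paper's:

```python
import numpy as np

def smooth_l1(x, y):
    """SmoothL1: quadratic for small residuals, linear for large ones."""
    d = np.abs(x - y)
    return np.where(d < 1, 0.5 * d ** 2, d - 0.5).mean()

def unit_loss(w):
    """Drives the learnable projection vector toward unit L2 norm."""
    return np.linalg.norm(w) - 1.0

def tpk_loss(s_hat, k):
    """TopK loss: pushes the k kept scores toward 1, the rest toward 0.

    s_hat: (M, N) sigmoid node scores, sorted descending per subject.
    """
    M, N = s_hat.shape
    kept = np.log(s_hat[:, :k]).sum(axis=1)
    dropped = np.log(1.0 - s_hat[:, k:]).sum(axis=1)
    return -((kept + dropped) / N).mean()

# Toy check: perfect prediction gives zero SmoothL1,
# and a unit vector gives zero unit loss
y_pred = np.array([1.0, 2.0, 3.0])
print(smooth_l1(y_pred, y_pred))        # 0.0
print(unit_loss(np.array([0.6, 0.8])))  # 0.0 (norm is exactly 1)
total = smooth_l1(y_pred, y_pred) + unit_loss(np.array([0.6, 0.8])) \
        + 0.1 * tpk_loss(np.full((2, 10), 0.5), k=3)  # lambda_1=0.1 assumed
```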

2.5. Experiments

        ①Dataset: ABCD

        ②Data split: 5964 for training, 1278 for validation and 1278 for testing

2.5.1. Experimental setup

        ①Nodes: 53, from NeuroMark ICA components

        ②Node feature: the vector of pairwise correlations with all other nodes (i.e., dimension 53)

        ③Pooling rate: 0.38 for FI, 0.46 for CI and 0.78 for TI

        ④Cluster number: 7 (because of Yeo 7)

        ⑤Model comparison tool: sergeyplis/polyssifier (run a multitude of classifiers on your data and get an AUC report)

2.6. Results

        ①Readout ablation of BrainRGIN:

        ②Comparison table:

2.6.1. Hyperparameter study

        ①How top k influence performance:

2.6.2. Interpretation of brain regions and networks

        ①Top 2 ROI for FI and CI:

        ②Top 21 ROI for TI:

2.6.3. Experiment with HCP dataset

(1)HCP dataset

        ①Dataset: HCP S1200

(2)Hyperparameter settings for HCP

        ①Different hyper-parameters are used when training on the HCP dataset

(3)Results from HCP dataset

        ①Performance:

(4)Study of BrainRGIN components

        ①Module ablation:

2.7. Discussion and conclusion

        ①They also ran t-tests; thorough

        ②Analyzed the contribution of brain regions to different types of intelligence: "The top two regions contributing to fluid intelligence are the middle frontal gyrus and the middle temporal gyrus. In the context of crystallized intelligence, besides the middle frontal gyrus, the caudate makes the most notable contribution. The distinct brain regions identified as relevant to the total composite score reflect the complexity and distributed nature of cognitive processes, as well as the general and broad nature of 'intelligence'. These regions span six relatively independent brain networks."
