Source: SPM How-tos
This web-page contains excerpts from the SPM user-group's burster. SPM users around the globe run into the same problems as you do; here are their questions, with answers from the experts! You can search the SPM burster archives (at the official SPM website) for a particular keyword, or peruse this web-page, which has the same items, only organized topically. You will find that this web-page adheres to the Socratic method, but you don't have to do all that walking to and fro. There are several general categories:
Menu Items: Helpful hints about how to use the various menus, hidden menu features, and various display options.
Model Items: Examples of many common models, considerations for selecting a particular model, etc. This section very likely contains an example similar to what you want to do.
Analysis Items: Discussion and examples of how to use and apply various aspects of analysis within SPM. Particularly helpful for beginners are the discussions about Contrasts and how they work.
Concepts: Discussions of any of the above areas that are more theoretical and of a more general nature than many of the answers to specific users' problems.
Data Acquisition Pointers: Hints about acquiring your data so it is useful in subsequent analysis steps. Discussion of drop-out artifact in fMRI.
Click on an item in the outline below to go to a particular topic, or just dig in and start reading! (Items without hyperlinks in the outline are still pending...) Note that most of the topics (but not the individual discussions) are repeated in the several major categories, so if you want to find out about e.g. Contrasts, you can look in the "Menu Items" for how to enter a contrast, in "Analysis Items" for what contrast may be appropriate to a particular study, and in "Concepts" for a more detailed look at how contrasts work under the hood.
Many of the answers/responses have been edited slightly for brevity's sake. If you feel there was too much editing, you can easily search the archive and obtain the full text, since the responder and date are included after most of the entries.
Outline:
- Interface items:
  Selecting Image Data: fMRI design with >16 sessions (picking scans)
  Plotting variables: fMRI time-courses; Plot 'contrast of parameter estimates'; Plot parametric responses; Values from plots
  Masking
  Segmentation
  Contrast Manager: View a large number of sessions
  Conjunction Analysis
  Pixel Coordinates for voxels in a cluster
  Global normalization: mean global estimates; threshold
  Images: Displaying t-statistic images; Saving
  Printing: print results to a text-file
  Img Calc hints
  Memory (RAM) management hints
- Model Items (Creating a design matrix)
  1 subj compared to controls
  1 group, 2 conditions
  1 group, 2 conditions, 1 covariate
  1 group, 2 conditions, 1 covariate, 2 nuisances
  1 group, 2 conditions, 3 levels/condition
  1 group, 4+ conditions
  1 group, multi-factor design
  2 groups, 2 conditions
  2 groups, 2 conditions, 1 covariate
  3 groups (2 patient, 1 control), 2 conditions
  Event-related
- Analysis Items
  Realignment (a.k.a. Motion Correction)
  Spatial Normalization: Talairach Coordinates
  Smoothing
  Covariates
  Contrasts: F-contrasts explained; Time x Condition; A > (B & C)
  Conjunctions: General; With Small Volume Correction (SVC); Differences between SPM96, SPM99
  Orthogonal Contrasts
  Random Effects: General; 2 groups, 2 conditions; 3 conditions (A, B, Rest); 2 conditions, 1 covariate, 1 nuisance; 4 conditions, 4 matched rest; Variable event-related fMRI
  HRF (Hemodynamic Response Function): Extract fMRI time-course
  Slice Timing: Interslice gap
  Small Volume Correction
  Hemisphere (L/R) effects
  Image Scaling
  Output Files
- Concepts
  Spatial Normalization
  Smoothing
  Smoothness estimates
  Gaussian Field Theory
  Fixed vs. Random Effects
  Contrasts
  Confounds
  Eigenimages
  Power Analysis
  fMRI Time Modeling
- Data Acquisition Pointers
  fMRI Susceptibility Artifacts
- Interface Items
Selecting Image Data
fMRI design with >16 sessions: picking scans
We are attempting an fMRI analysis, using a design matrix with 16 sessions. When we run the estimation model, instead of asking us which scans are required for each individual session, it asks which scans are required for all sessions. Choosing all scans at the same time does not work. When we use 12 sessions or fewer, we are asked which scans are required for each session (1 through 12), and the analysis runs correctly. Does anyone know why we can do this with 12 sessions and not 16?
This is my fault. The idea was that some designs could be viewed as a series of short sessions (e.g. burst-mode or sparse sampling). In this context it would be easier to select all sessions at once. The limit is 16 sessions. To change this, say to 32, change line 236 in spm_fmri_spm_ui.m from
if nsess < 16

to

if nsess < 32

The relevant section of spm_fmri_spm_ui.m reads:

% get filenames
%---------------------------------------------------------------
nsess = length(xX.iB);
nscan = zeros(1,nsess);
for i = 1:nsess
	nscan(i) = length(find(xX.X(:,xX.iB(i))));
end
P = [];
if nsess < 16     % <--- the line to change
	for i = 1:nsess
		str = sprintf('select scans for session %0.0f',i);
		if isempty(BCH)
			q = spm_get(Inf,'.img',str);
		else
			q = sf_bch_get_q(i);
		end
		P = strvcat(P,q);
	end
else
	...

[Who/when ???]
Plotting variables
Plot fMRI time-course
Y is the fitted response. y is the adjusted data (adjusted for confounds and bandpass filtering). Raw data has to be read from the Y.mad file.
For the plot option "fitted and adjusted responses", Y and y refer to the whole timeseries. For the plot option "event- or epoch-related responses", Y and y refer to peristimulus time (effectively "averaging across trials"). The PSTH option is just a way of binning the y values into separate peristimulus timebins, allowing calculation of mean and standard errors within each timebin. There is no easy way of collapsing y across trial-types (e.g. plotting a contrast of event-related data), other than collapsing across trial-types in the original model (i.e. having one column only).
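As an illustration of the binning idea, here is a self-contained sketch (the data are simulated; the variable names y and pst and the 2 s bin width are assumptions, not spm_graph's own code):

y = randn(100,1);                  % adjusted data (simulated here)
pst = rand(100,1) * 20;            % peristimulus times in seconds (simulated)
binwidth = 2;                      % arbitrary bin width in seconds
bins = floor(pst / binwidth);
ub = unique(bins(:))';             % the occupied timebins
for k = 1:length(ub)
	sel = find(bins == ub(k));
	psth(k) = mean(y(sel));                        % mean per timebin
	pstherr(k) = std(y(sel)) / sqrt(length(sel));  % standard error per timebin
end
errorbar(ub * binwidth + binwidth/2, psth, pstherr)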
Ran into a problem when I tried to plot fitted and adjusted responses against time a couple of times. Despite high significance, SPM99 indicated that no raw data had been saved at that voxel and gave me the option of moving to the closest point which had data; when I chose this option, it went to another cluster entirely. In one case, this was for the local maximum of the most significant cluster (better than 0.000 uncorrected for the voxel and for the cluster). The entire area was grayed in on the glass brain. The default for the statistics was set at 0.001, so it seemed like there should not have been a problem. Am I doing something wrong, or is this a bug? For which voxels is SPM99 supposed to be saving raw data?

This apparent paradox is due to the fact that the p value for a particular T-contrast may be more significant than the default F-contrast used to decide whether to save data in Y.mad. All I can suggest is that you reduce your default threshold further. Note that you can still plot fitted responses and standard error for every voxel (but not the actual residuals themselves unless the data are saved in Y.mad). To do this simply decline the option to 'jump'. [Karl Friston 3 Jul 2000]

Plot 'contrast of parameter estimates'
I have some PET data I'm trying to interpret, but I'm not sure about what is plotted in the 'contrast of parameter estimates' when they are plotted for each condition for an effect of interest in SPM99.
This bar-plot shows the mean-corrected parameter estimates of all effects of interest. The red lines are the standard errors of the parameter estimates.
This plot seems to be representing the 'size' of the effect on interest at a given maxima - my question is how a negative value in this plot should be interpreted. Is it a 'deactivation'?
Because of the mean correction, the bar-plot shows the deviations of the parameter of interest estimates from their mean. Therefore a negative value does not necessarily mean that the parameter estimate is negative; it is just lower than the mean of the parameter of interest estimates. Note that the (non mean-corrected) parameter estimates of a given voxel are stored in the workspace variable 'beta' when you plot them. By typing beta in your matlab window, you can display them.
Or, asked another way, what does the 0 effect size mean in these plots?
It means that this parameter estimate is equal to the mean of all parameter of interest estimates. As a special case of only one parameter of interest, it would mean that this parameter is zero.
I guess that the typical use of this plot is to easily assess the relative sizes of the parameter estimates for a given voxel. You could also use this plot to extract the vector of parameter estimates (and other variables like the standard errors of the parameter estimates, the fitted and the adjusted data stored in 'SE', 'Y' and 'y') from SPM99. [Stefan Kiebel, 20 Jun 2000]
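As a concrete illustration of that last point, after producing the plot the quantities named above sit in the base workspace and can be saved (a sketch; the output filename is arbitrary):

% after plotting in the Results section, inspect and keep the plotted values
beta                                % parameter estimates at the current voxel
SE                                  % standard errors of the parameter estimates
save voxel_estimates beta SE Y y    % store them for later use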
Can anyone tell me what exactly is being plotted when I choose "contrast of parameter estimates" for my plot.
This plot shows one or more linear combinations of the parameter of interest estimates, where the linear combinations are contrasts. In the case where you specify 'effects of interest', there is one (mean corrected) contrast for each parameter, such that each grey bar shows the relative height of each estimated parameter of interest. In the case where you specify one of your own contrasts, the single bar shows the estimated parameters of interest weighted by the contrast. In both cases, the red line denotes the standard error SE of the weighted parameter estimates. The range of the red line is [-SE SE]. If you would like to read some informative matlab code, possibly the most exact description can be found in spm_graph.m, lines 231 - 251.
How does this relate to the fitted response?
Let the general linear model used in the analysis be Y = X * \beta + \epsilon, where Y are the functional observations, X is a design matrix, \beta the parameter vector and \epsilon the error of the model. Let b be the estimated parameters. The design matrix X can be subdivided into X = [X_1 | X_2], where X_1 denotes the covariates of interest and X_2 the covariates of no interest. Equally, b = [b_1 b_2]. Let c be the contrast(s) you choose for the plot, where c is a matrix with one contrast per column. Then c' * b is the height of the grey bar(s) plotted. Note that c is zero for all estimated parameters of no interest b_2. The fitted (and corrected for confounds) data is then given by X_1 * b_1. To make it complete, the adjusted data (also corrected for confounds) is given by X_1 * b_1 + R, where R are the residuals R = Y - X * b.
In other words, the relationship between the contrast of parameter estimates and the fitted response is given by the parameter estimates. In one case you weight the parameter of interests by a contrast vector and in the other case, you project the estimated parameters (of interest) back into the time domain. [Stefan Kiebel, 21 Jun 2000]
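A small numerical sketch of these relations (all data simulated; note the design is over-parameterised, so, as in SPM, pinv returns minimum-norm estimates):

X1 = kron(eye(2), ones(5,1));       % two conditions of interest, 5 scans each
X2 = ones(10,1);                    % constant term (no interest)
X  = [X1 X2];
Y  = X * [2; 1; 50] + randn(10,1);  % simulated observations
b  = pinv(X) * Y;                   % estimated parameters
c  = [1; -1; 0];                    % a contrast over the parameters of interest
bar_height = c' * b                 % height of the grey bar
fitted   = X1 * b(1:2);             % fitted data, corrected for confounds
R        = Y - X * b;               % residuals
adjusted = fitted + R;              % adjusted data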
Plot parametric responses
I want to plot parametric responses (time x condition effects) using the same scaling (e.g. from -1 to 2 with 0.5 steps) on the z-axes for different subjects. In the interactive window "attrib" (plot controls) I can change only the x-axis (XLim, peristimulus time) and the y-axis (YLim, time) but not the z-axis (responses at XYZ). Does someone have a modified matlab script (I think spm_results_ui.m and spm_graph.m) to do this?
One could of course modify spm_results_ui.m, but I think the shortcut for you is to change the ZLim (or any other property of the plot) directly from within matlab. To make the figure the current axes, click on your plot and then type: set(gca, 'ZLim', [-1 2]); and set(gca, 'ZTick', [-1:0.5:2]); [Stefan Kiebel 17 July 2000]
Values from plots
Is there a way to use the matlab window to obtain the values used by SPM to generate plots (contrast of parameter estimates)? I am interested in obtaining the plot values and the standard deviation.
Yes, during each plot in SPM several values are stored in workspace variables. When you plot the parameter estimates or a contrast of these, SPM writes the variables beta (vector of parameter estimates) and SE (standard error) to the workspace. If you look at spm_graph.m lines 241 - 251, you can e.g. see how SPM99 generates the bar plot based on beta and SE. [Stefan Kiebel 27 July 2000]
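For example, the bar plot can be rebuilt from those two workspace variables alone (a sketch, assuming beta and SE are vectors of equal length):

bar(beta); hold on
for k = 1:length(beta)
	line([k k], beta(k) + [-SE(k) SE(k)], 'Color', 'r');  % red error bars
end
hold off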
Segmentation
To customize the segmentation:
Open the file spm_sn3d.m and insert the following line at the beginning of the main routine (well below where the %-commented lines end, and after the definition of global values, e.g. at line 296):

sptl_CO = 0;

This will direct you, when you're running normalization, to choose among all the options currently available in SPM normalization. Read the descriptions in the comment lines. [Jae S. Lee 21 Jun 2000]
Contrast Manager
View more than 51 sessions
We have constructed a design matrix with 60 sessions. When we explore the design, we are able to view only 51 sessions. Is it possible to check the other 9 sessions? In which matlab routine is the maximum number of sessions specified?
Actually, it is only partially an SPM issue. Your 60 sessions are still there; the limitation is due to the inability of Matlab 5.3.1 to display menus with more than 51 entries on your screen. To see the other sessions as well, you could type the following in matlab after starting SPM and cd'ing to your analysis directory:
load SPM_fMRIDesMtx.mat
spm_fMRI_design_show(xX,Sess,60,1)
This would show you trial 1 of session 60. Change the last two arguments to see the other sessions and trials. [Stefan Kiebel 05 Jul 2000]
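To step through all the sessions beyond the 51-entry menu limit, a small loop will do (assuming the design file has been loaded as above):

load SPM_fMRIDesMtx.mat
for s = 52:60
	spm_fMRI_design_show(xX, Sess, s, 1)   % trial 1 of session s
	pause                                  % press a key for the next session
end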
I also have a programming question. When attempting to plot "contrasts of parameter estimates" I am not able to view or choose from all contrasts. I have a data set with about 50 contrasts and I am only able to choose from those that fit on the screen. If I type the contrast number, SPM only allows me to enter 1-9. Is there any way to plot the data for contrasts that do not fit in the window?

Yes, there is a way around... It involves some typing:
Change line 213 in spm_graph.m from

Ic = spm_input('Which contrast?','!+1','m',);

to

Ic = spm_input('Which contrast?','+1','m',);
Before plotting, type in the matlab window:

global CMDLINE
CMDLINE = 1
The first action makes sure that you can get into command line mode and the second actually activates the command line mode. [Stefan Kiebel, 14 July 2000]
Masking
Is it possible to instruct SPM99 to search all voxels within a given mask image, rather than all voxels above a fixed or a %-of-mean threshold?
Yes, with SPM99 it's possible to use several masking options.
To recap, there are 3 sorts of masks used in SPM99:
- an analysis threshold
- implicit masking
- explicit masking
1: The analysis threshold: one can set this threshold for each image to -Inf to switch it off.
2: Implicit masking: if the image format allows it, NaN at a voxel position masks this voxel from the statistics; otherwise the mask value is zero (and the user can choose whether implicit masking should be used at all).
3: Explicit masking: use mask image file(s), where NaN (when the image format allows this) or a non-positive value masks a voxel.
On top of this, SPM automatically removes any voxels with constant values over time.
So what you want is an analysis, where one only applies an explicit mask.
In SPM99 for PET, you can do this by going for the Full Monty and choosing -Inf for the implicit mask and no 0-thresholding. Specify one or more mask images. (You could also define a new model structure, controlling the way SPM for PET asks questions).
With fMRI data/models, SPM99 is fully capable of doing explicit masking, but the user interface for fMRI doesn't ask for it. One way to do this type of masking anyway is to specify your model, choose 'estimate later' and modify (in matlab) the resulting SPMcfg.mat file (see spm_spm.m lines 27 - 39 and 688 - 713). Load the SPMcfg.mat file, set the xM.TH values all to -Inf, and set xM.I to 0 (in case you have an image format not allowing NaN). Set xM.VM to a vector of structures, where each structure element is the output of spm_vol. For instance: xM.VM = spm_vol('Maskimage'); Finally, save by: save SPMcfg xM -append
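Putting those steps together, a minimal sketch (run in the analysis directory containing SPMcfg.mat; the mask filename is a placeholder):

load SPMcfg.mat                     % loads xM among other variables
xM.TH(:) = -Inf;                    % switch off the analysis threshold
xM.I = 0;                           % disable implicit masking (non-NaN formats)
xM.VM = spm_vol('Maskimage.img');   % explicit mask volume(s)
save SPMcfg xM -append              % write the modified masking structure back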
If so, does the program define a voxel to be used as one which has nonzero value in the named mask image?
Not nonzero, but any positive value that is not NaN. Note that you can specify more than one mask image, in which case the resulting mask is the intersection of all mask images. [Stefan Kiebel 27 Jun 2000]
Do I have to mask this contrast by another contrast (e.g. main effect), and how can I specify the masking contrast?

You do not have to, but if you wanted to: use a 2nd-level model with (Ae-Ce) in one column and (Be-Ce) in another (plus the constant term). Then mask [1 -1 0] with [1 1 1]. The latter is the main effect of Factor 1. [Karl Friston 18 July 2000]
[also see "Model Items: 3-factor design"]
For those of you wanting to specify explicit masking at the SPM (PET/SPECT) model setup stage, here's a recipe to do it without having to resort to the "Full Monty" design: Start SPM99 and paste the following into the MatLab command window:
%-Choose design class
D = spm_spm_ui(char(spm_input('Select design class...','+1','m',...
	{'Basic stats','Standard PET designs','SPM96 PET designs'},...
	{'DesDefs_Stats','DesDefs_PET','DesDefs_PET96'},2)));

%-Choose design from previously specified class
D = D(spm_input('Select design type...','+1','m',{D.DesName}'));

%-Turn on explicit masking option
D.M_.X = Inf;

%-Pass this design definition to SPM (PET/SPECT)
spm_spm_ui('cfg',D)
[Andrew Holmes 20 July 2000]
It appears masking is a binary operation -- does this mean the mask specified must be in a bitmapped {0,1} format, or just that it is treated that way?

The latter. The mask can have any numbers. If the mask image format (e.g. 'float') supports NaN, NaN is the masking value; otherwise it is 0. [Stefan Kiebel 21 July 2000]
With respect to estimating a model: I would like to apply an a priori mask to my collected brain. I could go in and just change all of my img files and explicitly mask each one (i.e. zero out the non-interesting portions); however, any hints on where in the estimation code I would insert a masking step to zero out the portions of the brain that I am not interested in estimating? That is, if we could, we would have acquired a smaller volume during the scanning, but I can effect this by just masking my data before estimation.
Absolutely. If you want to assess the number of voxels above a given threshold, you can count these in the t-images. With respect to your question about masking to effectively constrain the analysis to a ROI, you could look at http://www.mailbase.ac.uk/lists/spm/2000-06/0196.html and http://www.mailbase.ac.uk/lists/spm/2000-07/0205.html which might provide a solution for implementing your explicit masking easily (without changing each image, but just constraining the analysis to a subset of voxels). If you do an explicit masking, a script for counting voxels above threshold in a ROI wouldn't be necessary, because then you could use the cluster sizes as computed by SPM. You could also try to use a mask-image to apply the SVC.
Mask part of the brain
For the analysis of SPECT perfusion data, I would like to "crop" my images prior to statistical analysis
- that is, remove non-brain counts [scalp, sinuses, muscles].
From reading about "Mask object" in spm_sn3d, I gather SPM will not do this during this step. True?
If not, is there a function available to do so?
Yes, there is a function to do exactly what you want. During the statistical analysis set-up, you can specify an explicit masking. To get to this and related masking options, you have to choose Full Monty as your design option. Then you can specify a mask-image, which in your case could be e.g. a normalized cropped image, where NaN (or 0) would mean to exclude this voxel from the analysis. You can find more detailed documentation about this type of masking in the SPM help for PET models. [Stefan Kiebel 19 Jul 2000]
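If you first need to create such a cropped mask image, here is a minimal sketch using spm_vol, spm_read_vols and spm_write_vol (the filenames and the intensity cut-off are placeholders, not values from the discussion):

Vi = spm_vol('mean_normalized.img');   % hypothetical source image
dat = spm_read_vols(Vi);
dat = double(dat > 100);               % arbitrary cut-off: 1 inside brain, 0 outside
Vo = Vi;
Vo.fname = 'brainmask.img';            % the explicit mask to specify at set-up
spm_write_vol(Vo, dat);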
Conjunction Analysis
Conjunctions are specified by holding down the 'control' key during contrast selection.
Pixel Coordinates
Get pixel coordinates for all voxels within an activated cluster
One easy way would be to position the cursor on the cluster you're interested in (after displaying the results using the 'results' button), and paste the following lines from spm_list.m at the matlab prompt:
[xyzmm,i] = spm_XYZreg('NearestXYZ',...
	spm_results_ui('GetCoords'),SPM.XYZmm);
spm_results_ui('SetCoords',SPM.XYZmm(:,i));
A = spm_clusters(SPM.XYZ);
j = find(A == A(i));
XYZ = SPM.XYZ(:,j);
XYZmm = SPM.XYZmm(:,j);
The last two variables - XYZ and XYZmm - would contain the pixel and the mm coordinates of all voxels in the current cluster. (Check the cursor to see where it is after pasting the above, it may jump a bit, moving to nearest suprathreshold voxel.) [Kalina Christoff 25 Jun 2000]
You could also use spm_regions in 'results' (VOI)
help spm_regions
VOI time-series extraction of adjusted data (local eigenimage analysis)
FORMAT [Y xY] = spm_regions(SPM,VOL,xX,xCon,xSDM,hReg);

SPM  - structure containing SPM, distribution & filtering details
VOL  - structure containing details of volume analysed
xX   - Design Matrix structure
xSDM - structure containing contents of SPM.mat file
xCon - Contrast definitions structure (see spm_FcUtil.m for details)
hReg - Handle of results section XYZ registry (see spm_results_ui.m)

Y    - first eigenvariate of VOI
xY   - structure with:
	xY.name   - name of VOI
	xY.y      - voxel-wise data (filtered and adjusted)
	xY.u      - first eigenvariate
	xY.v      - first eigenimage
	xY.s      - eigenimages
	*** xY.XYZmm - co-ordinates of voxels used within VOI ***
	xY.xyz    - centre of VOI (mm)
	xY.radius - radius of VOI (mm)
	xY.dstr   - description of filtering & adjustment applied
Y and xY are also saved in VOI_*.mat in the SPM working directory. [Karl Friston 26 Jun 2000]
Global Normalization
See mean global estimates for individual raw scans.
load SPMcfg.mat
plot(xGX.rg)
Change the threshold for global normalization.
If you want to try different thresholds, then you need to modify line 57 of spm_global.c, and then recompile. The modification would involve something like changing from:

s1/=(8.0*m);

to:

s1/=(4.0*m);
Images
Displaying t-statistic images
| 1) This may be a really idiotic question, but how does one view the
| uncorrected t-statistic images? I'm assuming that viewing the t-statistic
| images for a given contrast using the default values: "corrected height
| threshold = no", "threshold = 0.001", and "extent threshold = 0"
| still applies a correction that is based on the smoothness estimates
| and consequently the number of resels.

This displays the raw uncorrected t statistics that are more significant than p<0.001. There is no correction for the number of resels when you don't specify a corrected height threshold.
Another way of displaying the statistic images is to use or . [John Ashburner 21 Jun 2000]
Saving Images
I'm performing a manual rotation and I don't know how to save the rotated image.

Use the Display button. Your image will come up in the graphics window. Use the gray boxes to the left of and below the image to alter the orientation; then, when you are happy with the result, press the 'reorient images' button in the same window. spm_get will launch. Select the images you want to be rotated (the image you have been working on +/- any others), and the changes to the orientation will be written out in a *.mat file. [Alex Leff 19 July 2000]
Printing
Print results to a text-file
A right click in the background of an SPM results table brings up a context menu including options to "Print Text Table" and "Extract Table Data Structure". The first prints the table as plain text in the Matlab command window, the second returns the table data structure to the base matlab workspace (as 'ans'). See the help for spm_list.m for further details (also available from the table context menu as "help").
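If you want the table in an actual file rather than the command window, one simple, non-SPM-specific route is Matlab's diary facility (the filename is arbitrary):

diary results_table.txt   % start capturing command-window output
% ...right-click the results table and choose "Print Text Table"...
diary off                 % stop capturing; the table text is now in the file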
Img Calc Hints
| I'd like to create, for each individual subject, a subtraction image that
| reflects %change in normalized rCBF. Thus, instead of t-values, the pixel
| values of this image would be numbers reflecting change above or below
| average whole brain. In my particular case, I have two baselines and two
| activations, so I'd like to create the percent change subtraction image of:
| (i1+i3)/2 - (i2+i4)/2.
|
| Is there a way to easily accomplish this in SPM? As far as I can tell,
| proportional scaling only comes as part of a process that produces a
| statistical parametric map image, and I don't see anything in the image
| calculator that would enable me to perform this step separately (i.e., take
| an image, normalize each pixel by whole brain average, and then do the
| subtractions).
In Matlab, you can obtain the "globals" for each image by:

V = spm_vol(spm_get(4,'*.img'));
gl1 = spm_global(V(1))
gl2 = spm_global(V(2))
gl3 = spm_global(V(3))
gl4 = spm_global(V(4))
Then these can be plugged into the ImCalc expression by: (i1/gl1+i3/gl3)/2 - (i2/gl2+i4/gl4)/2
I think you actually need to enter the values of the globals rather than the variable names. [John Ashburner 11 Aug 2000]
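The whole recipe can also be scripted; here is a sketch using SPM99's spm_imcalc_ui (the output filename is arbitrary, and the images must be selected in the order i1..i4):

P = spm_get(4,'*.img');                  % select the four images in order
V = spm_vol(P);
for k = 1:4
	gl(k) = spm_global(V(k));            % global mean of each image
end
f = sprintf('(i1/%g + i3/%g)/2 - (i2/%g + i4/%g)/2', gl(1), gl(3), gl(2), gl(4));
spm_imcalc_ui(P, 'pctchange.img', f);    % evaluate the expression voxel-wise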
Memory (RAM) management hints
If you have problems with SPM halting, and perhaps with your Matlab session also quitting, type the following before entering Matlab:

unlimit stacksize

N.B. this only works on a Unix machine.
- Model Items
1 subject compared to controls
| 1. How can SPM best be used to compare a single subject to a group of
| controls in order to establish the pattern of regional abnormalities? I
| have tried using the two sample t-test, two groups, one scan per subject
| model, with success, but was wondering if anyone had ideas about other
| approaches using the software.
This is probably the best approach, but it may be worth also modelling confounding effects such as age, or possibly nonlinear age effects (by also including age^2 and age^3).
Depending on how many controls you have, you may also wish to try a non-parametric analysis using SnPM. [John Ashburner 13 July 2000]
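For the age terms, a minimal sketch of how such covariates could be constructed (the age values are invented; the centering step is a common precaution against collinearity, not something the answer prescribes):

age = [23 31 45 52 60]';   % one entry per scan, in selection order (invented)
a = age - mean(age);       % centre the covariate
C = [a a.^2 a.^3];         % linear, quadratic and cubic age terms;
                           % enter these columns as covariates of no interest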
1 group, 2 conditions
number of conditions or trials : 1 (is this correct? Should I enter "2"?)
Yes. With one condition alternating with rest it is appropriate to model the rest implicitly by specifying just the active condition onsets. To use 2 conditions would not be wrong, but is redundant.
Results button -> I set default values for mask, threshold and so on. I set the t-contrast to "1 -1" or "-1 1"; is this correct? I want the z-score, which is (mean(rest)-mean(activation))/SE, but the different options give different z-scores.
This is what you are doing wrong I think. You specified one condition so have two columns in the resulting design matrix. One represents the boxcar (activation vs rest), the other is a constant term modeling the mean activity over all conditions. Your t-contrasts are comparing these two regressors, which will give weird results.
What you should do is use contrasts [1] or [-1] to see areas where activation>rest, or rest>activation respectively. If you had used two regressors to model activation and rest separately then the corresponding contrasts would be [1 -1] and [-1 1]. [Geraint Rees 25 July 2000]
1 group, 2 conditions, 1 covariate
PET/SPECT models: Multi-subject, conditions and covariates
| I'm trying to do simple correlations with SPM99... will someone please
| help me, this should be very simple.
|
| I have 2 PET scans per subject, one at baseline and one on drug. I
| have 2 clinical rating scores, one at baseline and one after drug.
| I want to look at increases in GMR after drug correlated with
| increases in the clinical rating. I also want to look at negative
| correlations. What model should I use and how do I define the
| contrasts??
PET/SPECT models: Multi-subject, conditions and covariates. For each subject, enter the two scans as baseline and then drug. One covariate, whose values are the clinical rating scores in the order you selected the scans, i.e. baseline score for subject 1, drug score for subject 1, baseline score for subject 2, drug score for subject 2, &c. No interactions for the covariate. No covariate centering. No nuisance variables. I'd use proportional scaling global normalisation, if any. (You could use "straight" AnCova (with grand mean scaling by subject), but SPM99 only offers you AnCova by subject, which here would leave you with more parameters than images, and a completely unestimable model.)
Your model (at the voxel level) is:
[1] Y_iq = A_q + C * s_iq + B_i + error
...where:
	Y_iq is the baseline (q=1) / drug (q=2) scan on subject i (i=1,...,n)
	A_q  is the baseline / drug effect
	s_iq is the clinical rating score
	C    is the slope parameter for the clinical rating score
	B_i  is the subject effect

...so the design matrix has:
	2 columns indicating baseline / drug
	1 column of the covariate
	n columns indicating the subject

You will have n-1 degrees of freedom. Taking model [1] and subtracting the q=1 case from the q=2 case, you get the equivalent model:
[2] (Y_i2 - Y_i1) = D + C(s_i2-s_i1) + error
...where D = (A_2 - A_1), the difference in the baseline & drug main effects. (Note that this only works when there are only two conditions and one scan per condition per subject!) I.e. a simple regression of the difference in voxel value baseline to drug on the difference in clinical scores, exactly what you want.
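To make the column layout of model [1] concrete, here is a small sketch that builds the design matrix for n = 3 subjects (the rating scores are invented; column order: baseline effect, drug effect, covariate, then subject effects):

n = 3;
s = [2.1 3.0; 1.4 2.2; 0.9 2.5];   % rows = subjects, columns = [baseline drug]
X = zeros(2*n, 3+n);
for i = 1:n
	X(2*i-1, 1) = 1; X(2*i-1, 3) = s(i,1); X(2*i-1, 3+i) = 1;  % baseline scan
	X(2*i,   2) = 1; X(2*i,   3) = s(i,2); X(2*i,   3+i) = 1;  % drug scan
end
% the covariate effect is then tested with the contrast [0 0 1 0 0 0]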
Entering [0 0 1] (or [0 0 -1]) as an F-contrast will test the null hypothesis that there is no covariate effect (after accounting for common effects across subjects), against the alternative that there is an effect (either positive or negative). I.e., the SPM will pick out areas where the difference baseline to drug is correlated with the difference in clinical scores.
[0 0 +1] and [0 0 -1] as t-contrasts will test against one-sided alternatives, being a positive & negative correlation (respectively) of baseline to drug scan differences with the difference in clinical scores. Since you're interested in both, you should interpret each at a halved significance level (double the p-values). This will give you the same inference as the SPM{F} (the F statistic being the square of the SPM{t} values here), but with the advantage of separating +ve & -ve correlations in the glass brain for you.
Incidentally, the variance term here incorporates both within and between subject variability, and inference extends to the (hypothetical) population from which you (randomly!) sampled your subjects. [Andrew Holmes, when ???]
Given 2 conditions, 1 scan/condition, 1 covariate obtained at each scan, mean-centered covariate with proportional global scaling. A condition & covariate design with a contrast 0 0 1 is equivalent to correlation between the change in covariate and the change in the scans.
Indeed, or more precisely, the partial correlation between the covariate and scan-by-scan changes, having accounted for the condition-specific activations. [Karl Friston 28 Jun 2000]
I have a SPECT study with 34 patients, 2 conditions per patient and 1 covariate. I want to find the regions where there is positive correlation between the rise in blood flow from the first scan to the second scan and the covariate. I centered the covariate around 0 and used a new covariate of +a/2,-a/2 as recommended by Andrew Holmes.
Could anyone please explain to me what would the difference be in this case, if I use the "Multi subject covariate only" design or a "Multi subject condition and covariate design" and use a [0 0 1] contrast.
If you use 'Multi subject condition and covariate design', the model expresses your assumption that each series of observations in a voxel (over subjects) can be explained by subject effects, condition effects (which are the same for all subjects) and by your covariate.
If you choose 'Multi subject covariate only', you express your belief that there is no need to model a condition effect, but that your covariate alone times the estimated slope is a good explanation for your observations.
So the difference between the two models is that in the first you model some additive condition effect commonly observed over all subjects. [Stefan Kiebel 25 July 2000]
1 group, 2 conditions, 1 covariate, 2 nuisances
I will first start with what we have. Within an fMRI study:
	One group
	Five subjects
	Two conditions:
		Auditory Monitoring versus its own baseline
		Working Memory versus its own baseline
	Two nuisance variables:
		anxiety score (one score per subject)
		depressive mood score (one score per subject)
	One covariate of interest:
		error score on the working memory task

This is what we did. Design description:
	Design: Full Monty
	Global calculation: mean voxel value (within per-image fullmean/8 mask)
	Grand Mean scaling: (implicit in PropSca global normalization)
	Global normalization: proportional scaling to 50
	Parameters: 2 condition + 1 covariate + 5 block + 2 nuisance = 10 total, having 7 degrees of freedom, leaving 3 degrees of freedom from 10 images
Is this a valid way of looking at this? We are concerned with the large degrees of freedom that we are using up. Also how would we accurately interpret such a model? Does the statistical map only represent activations that are associated with the covariate of interest after controlling for anxiety and depression scores?
Firstly, I assume this is a second level analysis where you have taken 'monitoring' and 'memory' contrasts from the first level. If this is the case you should analyse each contrast separately. Secondly, do not model the subject effect: at the second level this is a subject by contrast interaction and is the error variance used for inference. Thirdly, a significant effect due to any of the covariates represents a condition x covariate interaction (i.e. how that covariate affects the activation).
I would use a covariates only single subject design in PET models (for each of the two contrasts from the first level). A second-level contrast testing for the effect of the constant term will tell you about average activation effects. The remaining covariate-specific contrasts will indicate whether or not there is an interaction. [Karl Friston 17 July 2000]
1 group, 2 conditions, 3 levels/condition
We're attempting to conduct a parametric analysis. We have two conditions, A (the experimental condition) and B (the control condition); in the experimental condition the parameter assumes 3 different values.
- What is the difference between choosing a polynomial versus a linear relationship in the model?
A linear model is simply a polynomial model with 0th and 1st order terms. Any curvilinear relationship between evoked responses and the parameter of interest would require 2nd or higher order terms to be modeled.
- In the results section, how can we specify the contrast for the A-B difference? And for the parameter effect on the experimental condition?
Simply test for the [polynomial] coefficients one by one. The 0th order term (e.g. [1 0 0]) models the mean difference between A and B averaged over the three levels. The 1st order coefficient (e.g. [0 1 0]) reflects the linear dependency on the parameter, and the 2nd (e.g. [0 0 1]) or higher reflect the nonlinear components. The 0th order term is the conventional box-car or condition-specific effect modeled in simple, non-parametric analyses. Note that because you only have 3 levels in condition A, a 2nd order model is the highest you would consider (a parabola can join any three points). [Karl Friston 29 Jun 2000]
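As an illustration, the polynomial regressors over the three levels of condition A might be constructed as follows (a sketch; the parameter values are invented, and in a real design these columns would modulate the condition-A scans):

p = [1 2 3]';              % the parameter value at each of the 3 levels (invented)
p0 = ones(3,1);            % 0th order: condition-specific mean
p1 = p - mean(p);          % 1st order: linear effect (centered)
p2 = p1.^2 - mean(p1.^2);  % 2nd order: quadratic effect, orthogonal to the mean
C = [p0 p1 p2];            % one column per polynomial order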
1 group, 4+ conditions
I chose the multi-subject: conditions and covariates design, for PET scans. The four scans/conditions (A,B,C,D) were entered in time-order. I want to compare only two conditions (e.g. B and C) in the analysis. Do I have to set the other conditions to 0 while defining contrasts (i.e. 0,-1,1,0), or do I have to make another spm.mat file in which only the two conditions I want to compare are taken and define the contrast as 1,-1? Is there a difference between the two ways?
The first solution is the more appropriate one. One should generally try to specify one design matrix modelling all observations and then use the one estimated parameter set to compute all the contrasts.
The difference between the two solutions is that in the first case you make the assumption that it is valid to use all scans for estimating the variance under the null hypothesis, even if a contrast vector element is zero for the associated basis function/condition. This is a valid assumption for a fixed effects PET analysis. As a result, you have more degrees of freedom in your statistical test at each voxel, such that the analysis is more sensitive to the underlying signal. [Stefan Kiebel, when ???]
Further to my question to you earlier this week which was:
- My paradigm is a block design with 4 different active blocks, each followed by its respective null block, i.e. I have 4 different null blocks. How do I go about specifying the design matrix for a second level analysis taking these different null blocks into account? e.g. if my 4 active blocks are: A1 A2 A3 A4 and my 4 null blocks are: N1 N2 N3 N4
If I specify trials 1-8 in the following order: A1 N1 A2 N2 A3 N3 A4 N4
how do I contrast [A1-N1] - [A2-N2]? or vice versa?
Your answer was: "You would simply specify 8 conditions (A1 - N4) and use the appropriate contrasts." Unfortunately, 'use the appropriate contrasts' is the bit we don't know how to do now that we have so many different nulls.
I have specified the conditions 1-8: A1 N1 A2 N2 A3 N3 A4 N4. For the simple contrast A1-N1 I've used: 1 -1 0 0 0 0 0 0, and for the contrast A2-N2 I've used: 0 0 1 -1 0 0 0 0. How do I specify a 2nd level contrast looking at the activity in A1 minus its null N1 versus the activity in A2 minus its null N2, i.e. [A1-N1] - [A2-N2]?
If I use (over A1 N1 A2 N2 A3 N3 A4 N4): 1 -1 1 -1 0 0 0 0, then surely this is just adding the activity in A1 and A2 and taking away the activity in N1 and N2, which is not what we want to do.
In fact [A1-N1] - [A2-N2] corresponds to the contrast [1 -1 -1 1 0 0 0 0] (note the signs: [1 -1 1 -1 0 0 0 0] would add the two simple effects rather than subtract them). I think the confusion may be about the role of the 2nd-level analysis. To perform a second level analysis, simply take this contrast and create a con???.img for each subject. You then enter these images into a one sample T test under 'Basic Designs' to get the second-level SPM. To do this you have to model all your subjects at the first level and specify your contrasts so that the effect is tested in a subject-specific fashion:
i.e.
subject 1: [1 -1 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 ...]
subject 2: [0 0 0 0 0 0 0 0 1 -1 -1 1 0 0 0 0 ...]
...
[Karl Friston 19 July 2000]
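A sketch of generating those subject-specific contrast rows programmatically (the number of subjects is arbitrary; each subject is assumed to contribute 8 consecutive condition columns):

nsubj = 5;                   % number of subjects (arbitrary)
c0 = [1 -1 -1 1 0 0 0 0];    % the within-subject interaction contrast
C = kron(eye(nsubj), c0);    % one row per subject, zeros elsewhere
% evaluating each row as a t-contrast yields one con*.img per subject,
% which then go into the second-level one sample t test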
How do I contrast [A1-N1] - [A2-N2]? or vice versa? i.e. perform a 2nd order contrast.

The first issue is exactly what question you are asking. [A1-N1] vs [A2-N2] looks like an interaction, and I think that this is what you are after. You can think of it as comparing the 'simple main effect' Ax-Nx in two contexts, x=1 and x=2. Put another way, the interaction is the 'A-specific activity' in context 1 compared with the 'A-specific activity' in context 2, each being compared with its own baseline. Let me know if this is not what you need.
Would the appropriate contrast be A1-N1-A2+A1?
No, it would be A1-N1-A2+N2 (I suspect that this is what you meant and that the A1 on the end is just a typo). Thus with your covariates ordered as specified above, your contrast will be [1 -1 -1 1 0 0 0 0]. As Karl pointed out (but for a different contrast), you now need to perform this contrast on each of your subjects, within the 'fixed effects' design matrix:
Subject 1: [1 -1 -1 1 0 0 0 0 0 0 0 0 0 0 0 0 ...]
Subject 2: [0 0 0 0 0 0 0 0 1 -1 -1 1 0 0 0 0 ...]
etc.
Each contrast image generated (i.e. one for each subject) gets entered into a one-sample t test in the 'second level' analysis. The question which you now ask of every voxel is whether its value departs significantly from zero (which is its expected value under the null hypothesis).
Incidentally it may be worth just mentioning an alternative (less good) approach, which I suspect that you might have been considering. (You can ignore this bit if you like.) You could specify the simple main effect contrasts A1-N1 and A2-N2, and test for the difference between them. Thus your contrasts would be
Subject 1 (A1-N1): [1 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
Subject 1 (A2-N2): [0 0 1 -1 0 0 0 0 0 0 0 0 0 0 0 0 ...]
Subject 2 (A1-N1): [0 0 0 0 0 0 0 0 1 -1 0 0 0 0 0 0 ...]
Subject 2 (A2-N2): [0 0 0 0 0 0 0 0 0 0 1 -1 0 0 0 0 ...]
In this case, the second level analysis would test whether the A1-N1 contrasts, as a population, are significantly greater than the A2-N2 contrasts. The reason why this is less good than the first approach outlined above is that it is equivalent to an unpaired t test (in which you just compare an 'A1-N1' population with an 'A2-N2' population) whereas your data are obviously paired (i.e. each A1-N1 estimate goes with the A2-N2 estimate for the same subject).
However, if you can do a paired t test, then as I understand it the result should be exactly the same as the first analysis - I've never tried this so I don't know if it is possible within SPM99. [Richard Perry 20 July 2000]
We have conducted a Working Memory study in H2O-PET with the following design:
8 subjects
4 conditions:
	A: WM 1 (high load)
	B: WM 1 (low load)
	C: WM 2 (high load)
	D: WM 2 (low load)
3 scans per condition per subject
Thus we have a total of 96 scans. To look for the main effect of Working Memory and for the domain-specific effect, we have chosen the Multi-subject: cond x Subj interaction & covariates design. We find nice WM main effects and also interesting domain-specific effects.
Experimental question: We are interested if there is a correlation between performance (as measured by RT) and WM-specific activation.
What kind of design do we have to choose? As we work mainly with fMRI studies, we first thought of a second level analysis...