Reporting methods and results
When using LIMO tools, we recommend using the following text:
Statistical analyses were performed using a hierarchical linear model approach, as implemented in the LIMO EEG toolbox (Pernet et al., 2011). At the first level (subjects), a general linear model was set up with N regressors, processing all subjects' single trials automatically (Bellec et al., 2012). The model corresponded to [describe the regressors/conditions]. Parameter estimates were obtained using trial-based Weighted Least Squares (Pernet et al., 2021) / using Ordinary Least Squares. At the second level (group), a robust [name the test used here] test was used to test [describe the experimental effect(s) tested]. Results are reported corrected for multiple testing using [maximum statistics/spatial-temporal clustering/TFCE] (Pernet et al., 2015).
If you use the EEGLAB STUDY integration (with or without BIDS), a slight modification is possible, further acknowledging this line of work:
Statistical analyses were performed using a hierarchical linear model approach, as implemented in the LIMO EEG toolbox (Pernet et al., 2011). At the first level (subjects), a general linear model was set up with N regressors: using variables imported in EEGLAB STUDY from BIDS-structured data (Pernet et al., 2019), extracting and processing all subjects' single trials automatically (Pernet et al., 2021; Bellec et al., 2012). The model corresponded to [describe the regressors/conditions]. Parameter estimates were obtained using trial-based Weighted Least Squares (Pernet et al., 2021) / using Ordinary Least Squares. At the second level (group), a robust [name the test used here] test was used to test [describe the experimental effect(s) tested]. Results are reported corrected for multiple testing using [maximum statistics/spatial-temporal clustering/TFCE] (Pernet et al., 2015).
Independently of the statistical method and the significance observed, it is essential to report the observed effects. By nature, we can distinguish hypothesized from discovered effects. You performed an experiment, and most of the time you have some expectations on where, when, and at which frequency effects should appear. Consequently, you should always report the effect size for such hypotheses. This conceptually differs from effect sizes for 'regions' (space/time/frequency) that show a statistical effect but were not hypothesized.
The easiest way to report this is to get the statistics from plotting differences. I recommend reporting the effect for both the raw data (in µV) and the beta values, with the 95% CI.
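To make the CI reporting concrete, here is a minimal sketch of a percentile bootstrap 95% confidence interval for a mean difference. This is an illustration in Python, not LIMO's actual (MATLAB) implementation; the function name and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(diff, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap (1 - alpha) CI for the mean of a difference.

    diff: 1-D array of per-subject (or per-trial) differences,
    e.g. condition A minus condition B amplitudes in µV (or betas).
    """
    n = diff.size
    means = np.empty(n_boot)
    for b in range(n_boot):
        # resample the observed differences with replacement
        sample = rng.choice(diff, size=n, replace=True)
        means[b] = sample.mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return diff.mean(), (lo, hi)

# toy example: 20 subjects, true mean difference around 1.5 µV
diff = rng.normal(1.5, 1.0, size=20)
mean, (lo, hi) = bootstrap_ci(diff)
```

The same recipe applies whether `diff` holds raw amplitudes or first-level beta values, so both can be reported with their own CI as recommended above.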
Sometimes you have expectations on where, when, and at which frequency effects should appear, but it turns out nothing is significant. As argued above, you should still report the effect. Importantly, because the LIMO MEEG toolbox computes a Bayesian bootstrap, showing that the difference includes 0 and/or that all conditions overlap can be taken as evidence that there is no effect (rather than merely failing to reject the hypothesis of no effect).
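For intuition, a Bayesian bootstrap (Rubin, 1981) credible interval can be sketched as below: each draw reweights the observed differences with flat Dirichlet weights, and an interval that comfortably contains 0 supports the no-effect reading described above. This is a conceptual Python illustration under assumed toy data, not LIMO's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def bayesian_bootstrap_ci(diff, n_draws=2000, alpha=0.05):
    """Bayesian bootstrap credible interval for a mean difference.

    diff: 1-D array of observed differences. Each posterior draw is a
    weighted mean with Dirichlet(1, ..., 1) weights over the data.
    """
    n = diff.size
    weights = rng.dirichlet(np.ones(n), size=n_draws)  # (n_draws, n)
    means = weights @ diff                             # weighted means
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# toy example: a clear positive effect -> the interval excludes 0;
# with null data, the interval would instead straddle 0
lo, hi = bayesian_bootstrap_ci(rng.normal(1.5, 0.5, size=30))
```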
p-values are a conditional probability under the null hypothesis and depend on both effect and sample sizes; they therefore do not reflect the strength of evidence. For that reason, while available, they are not useful to report.
When choosing between correction methods, first remember that all three procedures (maximum statistics, spatio-temporal clustering, TFCE) control the type 1 family-wise error rate well, i.e. the probability of making at least one error under the null. Second, since we are dealing here with data containing one or more effects (i.e. non-null data), results differ between procedures because they have different power (the probability of finding an effect when there is one).
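The family-wise error control shared by all three procedures can be illustrated with the simplest of them, maximum statistics: the significance threshold is taken from the permutation distribution of the maximum statistic over the whole channel x time map. A minimal Python sketch for a paired design, assuming sign-flip permutations and toy data (not LIMO's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def max_stat_threshold(data_a, data_b, n_perm=500, alpha=0.05):
    """Permutation maximum-statistic threshold for FWER control.

    data_a, data_b: arrays of shape (subjects, channels, times).
    Returns the observed t-map and the alpha-level threshold taken
    from the null distribution of max |t| over the whole map.
    """
    diff = data_a - data_b          # paired differences
    n = diff.shape[0]

    def tmap(x):                    # one-sample t over subjects
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    t_obs = tmap(diff)
    max_t = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1, 1], size=n)[:, None, None]  # random sign flips
        max_t[p] = np.abs(tmap(diff * signs)).max()         # max over all cells
    return t_obs, np.quantile(max_t, 1 - alpha)

# toy example: 10 subjects, 4 channels, 50 time points, no true effect
a = rng.normal(size=(10, 4, 50))
b = rng.normal(size=(10, 4, 50))
t_obs, thresh = max_stat_threshold(a, b)
```

Because the threshold is set by the map-wide maximum, any individual cell exceeding it is significant in its own right, which is why cell-wise reporting (below) is legitimate for this procedure but not for cluster-based ones.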
Let's say you have an effect over several electrodes across several time frames, imagine some N170 effect. How would you report this?
- If you use maximum statistics, then since the thresholding applies to each cell in the data matrix, you can say something like 'a significant difference of X microvolts on average was observed over channels A, B, C, D starting at Xms and ending at Yms (maximum X microvolts 95%CI [x1 x2] on electrode B at Zms, mean beta values Y1 vs Y2, p=0.0X corrected for multiple comparisons using maximum statistics)'.
- If you use clustering, you cannot really report which cells are significant because the statistics apply to clusters. Your inference, and your reporting of it, should therefore be along the lines of 'a significant difference of X microvolts on average [minimum 95%CI maximum 95%CI] was observed for a cluster encompassing channels A, B, C, D starting at Xms and ending at Yms (mean beta values x1 vs x2, p=0.03 corrected for multiple comparisons using spatiotemporal clustering with a cluster-forming threshold of p=0.05)'.
- If you use TFCE, you cannot really report which cells are significant either since, again, the thresholding applies to transformed data for which clusters are used. The right interpretation (as much as 'we' can agree on this; see Threshold Free Cluster Enhancement Explained) is that for an observed effect there exists at least one cluster-forming threshold at which it is significant. Reporting could therefore be along the lines of 'a significant difference of X microvolts on average [minimum 95%CI maximum 95%CI] was observed (mean beta values x1 vs x2, p=0.0x corrected for multiple comparisons using TFCE) over channels A, B, C, D and starting at Xms and ending at Yms'. Because TFCE builds on clusters, you can often see regional effects; again, we cannot really talk about clusters. An option is to state that you observed N regional effects: a first set over channels A, B, C, D from time X to Y (effect size, etc.), a second set over channels D, F, J from time Z to W, etc.
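The "at least one cluster-forming threshold" interpretation comes from how the TFCE score is built: each cell's statistic is replaced by an integral, over all thresholds, of cluster extent and height. A minimal 1-D sketch along the time axis (Python illustration with the common E=0.5, H=2 exponents; not LIMO's implementation):

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """1-D TFCE score: sum extent**E * height**H * dh over thresholds h.

    stat: 1-D array of non-negative statistic values over time points.
    A cell scores high if, at some threshold, it sits in a cluster that
    is tall and/or wide - no single cluster-forming cutoff is chosen.
    """
    out = np.zeros_like(stat, dtype=float)
    for h in np.arange(dh, stat.max() + dh, dh):
        above = stat >= h
        # find contiguous runs above threshold h and credit their cells
        i, n = 0, len(stat)
        while i < n:
            if above[i]:
                j = i
                while j < n and above[j]:
                    j += 1
                extent = j - i
                out[i:j] += (extent ** E) * (h ** H) * dh
                i = j
            else:
                i += 1
    return out

# toy example: a single peak - the peak cell accumulates the most support
stat = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
scores = tfce_1d(stat)
```

The scored map is then thresholded via a permutation maximum-statistic scheme, which is why individual significant cells, rather than clusters, are what TFCE inference is about.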