Neal W Morton edited this page Apr 8, 2019 · 4 revisions

TODO: make data for this example publicly available, test everything, embed sample plots.

Calculating event-related potentials

In this example, we have a set of patterns, one per subject, named 'vss_emc', containing voltage at a number of study epochs. We want to get an average for events that are subsequently recalled and subsequently not recalled. In the events for each pattern, we have a field called 'recalled' that is either 0 (not recalled) or 1 (recalled). First, for each subject, we calculate the average for each value of the recalled field:

exp.subj = apply_to_pat(exp.subj, 'vss_emc', @bin_pattern, ...
                        {'eventbins', 'recalled', 'save_as', 'temp', ...
                         'overwrite', true});

The bin_pattern function is used for calculations that bin information across events (or other dimensions, such as averaging across channels or time). By default, it calculates averages, but it can also be used for other summary statistics such as standard deviation. Here, we specify 'eventbins' defined by the 'recalled' event field; this will calculate an average for each different value of the recalled field. We'll then have an average of all events with recalled==0 and all events with recalled==1.

Note that we could have saved this with any name we want using the 'save_as' input; here, we call it 'temp' to indicate that this is a temporary pattern that we're okay with overwriting in the future. We also use the 'overwrite' parameter to indicate that any existing patterns with that name should be overwritten. In general, Aperture will not overwrite existing patterns unless we specifically say otherwise.

Creating a group pattern

We've now created average patterns for each individual subject. Often we want to next test whether an effect is reliable across subjects. Here, we'll use a paired t-test to test, for each channel and time point, whether there is a voltage difference during study based on whether an item will later be recalled or not recalled.

First, we must make a new pattern that includes the averages for each participant. This pattern will include all channels and time points, but only two "events" per participant: their recalled-item average and their forgotten-item average. As with bin_pattern, we call a general-purpose function that can concatenate patterns along any dimension. Usually it is used to concatenate events (here, the average values for recalled and forgotten items), but it can also concatenate along other dimensions, such as channels.

pat = cat_all_subj_patterns(exp.subj, 'temp', 'ev', 'save_mats', false);

We first indicate the subjects to include, followed by the name of the pattern to concatenate. We then indicate the dimension to concatenate along (here, 'ev' to indicate the events dimension). Setting 'save_mats' to false indicates that we don't want to save this new pattern to disk, but instead to just keep it in memory for now.

Group-level statistics

Now that we have a pattern with data from all subjects, we can use pattern_statmap to run a statistical test on each channel and time point. Type help pattern_statmap to see all of the options for this function. It's a little complicated to call, but very powerful as it can run many different types of statistical tests. It's also designed to be extensible; by writing your own function in the correct template (see f_stat in the pattern_statmap documentation), you can design virtually any statistical test you want and have Aperture manage running the test on each channel and time point in your pattern, saving in a standard format, and plotting the results using topo plots, ERP plots, etc.
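To make the extensibility point concrete, a custom statistic function might look like the sketch below. This is an illustration only: the exact f_stat contract (inputs, outputs, and how grouping variables are passed) is documented under help pattern_statmap, and the signature assumed here may differ from the real one.

```matlab
% SKETCH ONLY: the f_stat interface assumed here (data vector plus a
% cell array of grouping variables, returning a p-value) is an
% illustration; see help pattern_statmap for the actual contract.
function p = my_recall_stat(x, group)
  % x     - data for one channel/time point, one value per event
  % group - grouping variables, e.g. {subject, recalled}
  recalled = group{2};
  % any statistic can go here; this example runs an unpaired t-test
  % comparing recalled and not-recalled events
  [~, p] = ttest2(x(recalled == 1), x(recalled == 0));
end
```

If the real f_stat contract takes additional inputs or returns additional outputs, adapt the sketch accordingly.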

In this example, we'll use the existing stat_paired_ttest function, which takes two inputs: the subject code and a grouping variable. Here, we use the 'subject' field; in our patterns, the events have a field indicating the subject code for each participant. As before, we also indicate the 'recalled' field. These event fields indicate that we should calculate the difference between the recalled and forgotten items, separately for each subject.

Next, we specify the function to run. We can pass any function we want by prepending @ to it, so here we use @stat_paired_ttest. This function will be run on every combination of channels and time points in the pattern. The next input allows us to add optional additional inputs to the stats function; here, we don't need to add anything else, so we just use {} to indicate no extra inputs.

Finally, we specify a stat_name. This is similar to the names that we have to indicate patterns; it can be anything you want, and specifies a label to access statistical maps later. It should be unique within a given pattern to avoid conflicts. Here, we'll use 'sme', short for 'subsequent memory effect'.

pat = pattern_statmap(pat, {'subject' 'recalled'}, @stat_paired_ttest, {}, 'sme');

Topographical significance plot

If our pattern has electrode location information, we can make a topo plot of the results. We'll use the 'event_bins' input to indicate that we want to display average voltage at each electrode. We also specify a 'fig_name' to store figure information under. Finally, we specify that we want to plot statistics calculated in the last step and saved under 'sme'. We use the multiple-comparisons correction method of 'fdr' when determining significance. This will determine which electrodes are marked as significant in the plot.

pat = pat_plottopo(pat, 'erptopo_sme', 'event_bins', 'recalled', ...
                   'stat_name', 'sme', 'correctm', 'fdr');

If we have multiple time points, then a separate topo plot will be created for each time point. See also pat_erp, which creates a separate plot for each channel.
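As a rough illustration, a per-channel ERP plot might be created with a call like the one below. The parameter names here simply mirror the pat_plottopo call above and are assumptions, not the confirmed pat_erp interface; check help pat_erp for the actual options.

```matlab
% SKETCH: parameter names mirror pat_plottopo and are assumptions;
% see help pat_erp for the actual interface.
pat = pat_erp(pat, 'erp_sme', 'event_bins', 'recalled', ...
              'stat_name', 'sme', 'correctm', 'fdr');
```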