---
title: "EEG and ERP Processing and Analysis"
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
echo = TRUE,
error = TRUE,
comment = "")
```
# Extracting Behavioral Data from `E-Prime`
1. Run E-Merge
1. Navigate to the folder with the task data
1. Select `Merge`
1. Select `Recursive Merge`
1. Only merge: `All specified files regardless of merge status`
1. Save target file (`filename.emrg2`)
1. Double click the newly saved file (`filename.emrg2`) to open it in E-DataAid
1. Go to `File`, `Export`
1. Make sure `SPSS` is selected and that the `Unicode` option is **unselected**: https://mattkmiecik.com/posts/post-Stop-Using-Excel-to-Preprocess-E-Prime-Data/post-Stop-Using-Excel-to-Preprocess-E-Prime-Data.html (archived at https://perma.cc/N734-4W9X)
1. Save the file as: `filename.txt`
# ERP Processing Pipeline using HAPPE
1. Download MATLAB: https://its.uiowa.edu/matlab
- Click `How to Install MATLAB` and follow the relevant instructions
- Create a `MATLAB` folder in your local directory.
You will keep all of your MATLAB-related files in this folder.
1. Install the HAPPE pipeline: https://github.com/PINE-Lab/HAPPE
- Clone the HAPPE repository to your `GitHub` account
- Open the `HAPPE` folder
- Open the `HAPPE User Guide` document
- Read through the `HAPPE User Guide`
- Navigate to the `Setting up HAPPE` section in the user guide and follow the instructions for setting up the HAPPE pipeline, including installation of add-ons and eeglab
1. Install EP Toolkit: https://sourceforge.net/projects/erppcatoolkit/
- After downloading, copy the `EP_Toolkit` folder to your `MATLAB` folder (in your local directory)
- In the `EP_toolkit` folder:
- Open `EP_Toolkit`
- Open `Documentation`
- Open `tutorial`
- In the `tutorial` document, navigate to the `Set Up` section and follow the instructions for installing and setting up EP Toolkit and FieldTrip.
Do **NOT** follow instructions for setting up EEGLAB.
You have already set up your path to EEGLAB when you set up the HAPPE pipeline.
- You should have the following subfolders in your `MATLAB` folder:
- `EP_Toolkit`
- `Fieldtrip-[version number]`
1. Open the HAPPE pipeline V4 script in MATLAB
- Follow the User Guide instructions for the HAPPE-ER pipeline, using the user inputs described below.
- Alternatively, open the [MATLAB processing file](#matlabScripts) that runs the HAPPE-ER pipeline and deals with the output.
- e.g., `/Data Processing/6. MATLAB EEG Pipeline/eegProcessingOddball.m`
## Oddball Task
1. User Inputs
- Enter the path to the folder containing the dataset.
- Select `raw`
- Load pre-existing set of input parameters:
- `N` if this is your first time running data through the pipeline.
- `Y` if you have decided on a set of parameters.
Enter the path to the folder containing the input parameters.
- Low density data: `N`
- Data type: `task`
- Performing event-related potential (ERP) analysis: `Y`
- Enter the task onset tags
- Target: `tgt+`
- Frequent: `frq+`
- `done`
- Do multiple onset tags belong to a single condition? `N`
- File format: `5`
- Acquisition layout type: `2`
- Number of channels: `128`
- Do you have additional type fields besides "code"? `N`
- Select channels of interest: `all`
- Frequency of electrical noise in Hz: `60`
- Are there any additional frequencies (e.g., harmonics) to reduce? `N`
- Line Noise reduction method: `notch`
- Low cutoff: `59`
- High cutoff: `61`
- Resample: `N`
- Filter
- Low Pass Cutoff: `30`
- High Pass Cutoff: `.1`
- Choose a filter: `fir`
- Bad Channel Detection: `Y`
- `after` wavelet thresholding
- ECGone: `N`
- Wavelet Thresholding
- `default`
- Threshold rule: `hard`
- MuscIL: `N`
- Segmentation: `Y`
- Starting parameter for stimulus: `-200`
- Ending parameter for stimulus: `1000`
- Task offset: `2`
- Baseline Correction: `Y`
- Baseline Correction start: `-200`
- Baseline Correction end: `0`
- Interpolation: `Y`
- Segment Rejection: `Y`
- Segment Rejection Method: `amplitude`
- minimum segment rejection threshold: `-150`
- maximum segment rejection threshold: `150`
- segment rejection based on all channels or ROI: `all`
- Re-referencing: `Y`
- Does your data contain a flatline or all zero reference channel? `N`
- re-referencing method: `average`
- Save format: `1`
- Visualizations: `N`
- Parameter file save name: `default`
## Fish/Shark
1. User Inputs
- Enter the path to the folder containing the dataset.
- Select `raw`
- Load pre-existing set of input parameters:
- `N` if this is your first time running data through the pipeline.
- `Y` if you have decided on a set of parameters.
Enter the path to the folder containing the input parameters.
- Low density data: `N`
- Data type: `task`
- Performing event-related potential (ERP) analysis: `Y`
- Enter the task onset tags
- Correct Go: `cGo++`
- Incorrect Go: `xGo++`
- Correct NoGo: `cNoGo++`
- Incorrect NoGo: `xNoGo++`
- `done`
- Do multiple onset tags belong to a single condition? `N`
- File format: `5`
- Acquisition layout type: `2`
- Number of channels: `128`
- Do you have additional type fields besides "code"? `N`
- Select channels of interest: `all`
- Frequency of electrical noise in Hz: `60`
- Are there any additional frequencies (e.g., harmonics) to reduce? `N`
- Line Noise reduction method: `notch`
- Low cutoff: `59`
- High cutoff: `61`
- Resample: `N`
- Filter
- Low Pass Cutoff: `30`
- High Pass Cutoff: `.1`
- Choose a filter: `fir`
- Bad Channel Detection: `Y`
- `after` wavelet thresholding
- ECGone: `N`
- Wavelet Thresholding
- `default`
- Threshold rule: `hard`
- MuscIL: `N`
- Segmentation: `Y`
- Starting parameter for stimulus: `-200`
- Ending parameter for stimulus: `1000`
- Task offset: `17`
- Baseline Correction: `Y`
- Baseline Correction start: `-200`
- Baseline Correction end: `0`
- Interpolation: `Y`
- Segment Rejection: `Y`
- Segment Rejection Method: `amplitude`
- minimum segment rejection threshold: `-150`
- maximum segment rejection threshold: `150`
- segment rejection based on all channels or ROI: `all`
- Re-referencing: `Y`
- Does your data contain a flatline or all zero reference channel? `N`
- re-referencing method: `average`
- Save format: `1`
- Visualizations: `N`
- Parameter file save name: `default`
## Stop-Signal
# MATLAB Scripts to Manage HAPPE Files {#matlabScripts}
We have scripts for each task that can prepare files for the HAPPE Pipeline and/or manage the files outputted from HAPPE.
These actions can be done manually as well, but the MATLAB scripts make the process more efficient.
The scripts will also generate a "log" of all of the files processed through HAPPE to facilitate tracking of EEG data processing.
The sections below detail the code used to perform these actions as well as the instructions for using the current scripts.
Note: Before using the scripts/code detailed below, ensure that all filepaths used are in your MATLAB path collection.
These may include:
- The location where the automatic script is stored (for our lab, this is under `/Data Processing/6. MATLAB EEG Pipeline`)
- The location where the HAPPE pre-processing script is stored
- The location of the raw data (to be processed)
- The location(s) of any intermediate files for processing (e.g., the updated .mff files that contain accuracy information in FishShark)
- The location(s) for any files outputted by HAPPE and/or places you wish to use the script to move them to
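As a minimal sketch, the locations above can be added to the MATLAB path with `addpath` at the start of a session (the paths below are placeholders for illustration, not our actual locations):
```
% Add the relevant locations to the MATLAB path
% (placeholder paths; substitute your own folder organization)
addpath('C:\MATLAB\Data Processing\6. MATLAB EEG Pipeline'); % automatic script
addpath('C:\MATLAB\HAPPE');                                  % HAPPE pre-processing script
addpath('V:\Processing-Repo\Folder Structure\1 - Raw Files'); % raw data to be processed
```
Alternatively, paths can be added permanently through MATLAB's `Set Path` dialog, which saves them across sessions.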
## Oddball (Pre-HAPPE)
1. Update all thresholds and filepaths in script file (must be done BEFORE running the script)
- In the second section of our script file, we set our "threshold" for the minimum number of trials that need to be retained after pre-processing for a subject's data to be eligible for PCA.
Additional thresholds can also be set for things like number of channels retained, but these are not currently in use.
```
% Set quality threshold parameters
trialCutoff = 10;
```
- We also set environment variables with all of the filepaths that are relevant for managing HAPPE output files and tracking processed data.
The following paths should be checked and updated as necessary to reflect the organization of processing on your computer.
- `passPath` is the location where files that meet or exceed the above-defined thresholds will be saved
- `allPath` is the location where ALL files outputted from HAPPE will be saved (regardless of whether the threshold is met)
- `failPath` is the location where files that do not meet the above-defined thresholds will be copied
- .mff files that do not meet threshold will be copied here as an indication that they should be processed manually to see if they meet threshold afterward
- `summPath` is the location where the file listing all files processed through HAPPE in the current batch will be saved
- We currently use this to save the "processing log" to a location that all team members/computers can access, making it easier to determine which files require processing when EEG data are not stored on a shared server
```
% Set paths for file sorting
passPath = 'V:\Processing-Repo\Folder Structure\3 - Files for PCA'; %location for .txt output files
allPath = 'V:\Processing-Repo\Folder Structure\2 - Processed Files'; %location for all processed files to end up
failPath = 'V:\Processing-Repo\Folder Structure\1b - Manual Processing'; %location to copy unsuccessful .mff to for manual process
% Set path for processing summary
summPath = 'Z:\Shared Server\Study Folder\Data Processing\6. MATLAB EEG Pipeline\Processed Data Logs';
```
1. Run the HAPPE Pipeline
- This first section is designed to rely on user input.
Press "Run" on the MATLAB editor window with the file open to begin the process.
- A message will appear in the console prompting you to enter the filepath to the location of the HAPPE pre-processing file you wish to run
- Once the path is entered, MATLAB will run the file
```
% Set path to HAPPE pre-processing script
happeRun = input('Enter the full path to the HAPPE pre-processing file:\n> ','s') ;
% Call and run HAPPE pre-processing script
run(happeRun);
```
1. Enter HAPPE inputs
- See the above section for HAPPE user inputs for Oddball
At this point, no more user interaction is required for the script to do its job.
The HAPPE pipeline will run, and the remaining MATLAB code in the script file will evaluate the files outputted by HAPPE and move them to the appropriate locations based on this evaluation.
See [General Post-Happe Steps](#generalPost) for examples of the MATLAB code and an explanation of what it does.
## FishSharks (Pre-HAPPE)
1. Update all thresholds and filepaths in script file (must be done BEFORE running the script)
- In the fifth section of our script file, we set our "threshold" for the minimum number of trials that need to be retained after pre-processing for a subject's data to be eligible for PCA.
Additional thresholds can also be set for things like number of channels retained, but these are not currently in use.
- Note that setting these parameters occurs later in the FishSharks script than in the Oddball script;
this is because FishSharks files require additional processing before they are ready for HAPPE (see step 2 for details)
```
% Set quality threshold parameters
trialCutoff = 10;
```
- We also set environment variables with all of the filepaths that are relevant for managing HAPPE output files and tracking processed data.
The following paths should be checked and updated as necessary to reflect the organization of processing on your computer.
- `passPath` is the location where files that meet or exceed the above-defined thresholds will be saved
- `allPath` is the location where ALL files outputted from HAPPE will be saved (regardless of whether the threshold is met)
- `failPath` is the location where files that do not meet the above-defined thresholds will be copied
- .mff files that do not meet threshold will be copied here as an indication that they should be processed manually to see if they meet threshold afterward
- `summPath` is the location where the file listing all files processed through HAPPE in the current batch will be saved
- We currently use this to save the "processing log" to a location that all team members/computers can access, making it easier to determine which files require processing when EEG data are not stored on a shared server
```
% Set paths for file sorting
passPath = 'V:\Processing-Repo\Folder Structure\3 - Files for PCA'; %location for .txt output files
allPath = 'V:\Processing-Repo\Folder Structure\2 - Processed Files'; %location for all processed files to end up
failPath = 'V:\Processing-Repo\Folder Structure\1b - Manual Processing'; %location to copy unsuccessful .mff to for manual process
% Set path for processing summary
summPath = 'Z:\Shared Server\Study Folder\Data Processing\6. MATLAB EEG Pipeline\Processed Data Logs';
```
1. Update raw (.mff) files' condition tags with accuracy information
- Unlike the Passive Oddball task, FishSharks trials can be either correct or incorrect.
Whether or not a trial was "responded to" correctly is relevant to the nature of the extracted ERP.
Because the event tags in our .mff files do not inherently contain information about whether each trial was responded to correctly, we need to add it ourselves.
This process can be done manually in NetStation, but it can also be automated using the MATLAB code detailed below.
- NOTE: This code requires eeglab.
Before running the code, open eeglab in your MATLAB session by typing `eeglab` into the console.
You can close it as soon as it opens, but this step ensures that eeglab is loaded into your current session and helps prevent the subsequent code from erroring out.
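For example, the following pair of commands loads eeglab's functions into the session and closes its GUI window right away:
```
eeglab;  % open eeglab so its functions are loaded into the session
close;   % the GUI window can be closed immediately; the functions remain available
```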
1. Set the filepaths for raw and updated .mff files
- This section is set to rely on user inputs.
Press "Run" in the MATLAB editor to start the processing.
- A message will appear in the console prompting you to enter two filepaths:
the first is the location of the raw .mff files, and the second is the location you would like the updated files to save in.
```
% User input for location of raw files
pathRaw = input('Enter the full path to the folder containing the raw files:\n> ','s');
% User input for destination of subsetted files
pathSub = input('Enter the full path to the folder in which to save the subsetted files:\n> ','s');
```
At this point, no user input/actions are necessary until all of the .mff files in the "pathRaw" directory have been updated and saved into the "pathSub" directory.
The code that asks the user for the path to HAPPE will run when that process has finished.
The following section describes the code used to automate the process of updating .mff event tags to include accuracy information at the trial level.
Move on to Step 3 when the process has completed.
1. Gather and manage information from the directory housing the raw (.mff) files
```
% Have MATLAB gather a list of raw files housed in specified location (pathRaw)
dirInfo = struct2table(dir(pathRaw));
% Remove blank rows
noName = strcmp(dirInfo.name, '.') | strcmp(dirInfo.name, '..');
dirInfo(noName, :) = [];
```
1. Generate variables necessary for managing raw and updated files
- This code will generate the full filepaths (including file name) necessary for reading in and saving each .mff file and its updated counterpart
- It will also generate an ID variable for joining purposes based on the expected location of the subject name in the name of each file
```
% Generate ID variable
dataFiles = dirInfo(:, "name");
% Add ID variable to file data
dataFiles.ID = extractBefore(dataFiles.name, 8);
% Generate path to read raw data
rawPaths = dataFiles;
rawPaths.path = strcat({pathRaw}, "/", dirInfo.name);
rawPaths = rawPaths(:, ["ID", "path"]);
% Generate path to save updated versions of the data (containing accuracy info at trial level)
subPaths = dataFiles;
subPaths.path = strcat({pathSub}, "/", subPaths.ID, "_sub_fishshark.mff");
subPaths = subPaths(:, ["ID", "path"]);
% Join filepath datatables
mergePaths = join(rawPaths, subPaths, 'Keys', {'ID'});
```
1. Use a loop to update the event tags in each .mff file to reflect accuracy of response
- For every file included in the "mergePaths" dataset, MATLAB will perform the following actions:
- Set environment variables representing the path to read in the original .mff file and save its updated counterpart (updates with each iteration of the loop)
- Read in the .mff file
- Extract the "event" information from the .mff file
- Evaluate whether there is usable data in the file
- This step prevents the code from erroring out if a subject did not make it past the practice trials
- If the evaluation determines that there is NOT usable data in the present file, the loop will jump to the next file
- Create a table containing response information (response vs no response and reaction time) and condition information (go vs no-go) at the trial level
- Evaluate each trial to determine whether the response was correct or incorrect
- For No-Go trials, responses are CORRECT if there was no response (incorrect if subject did respond)
- For Go trials, responses are CORRECT if there was a response AND subject's reaction time was at least 200 ms (incorrect if subject did not respond OR if subject responded too quickly for response to be considered "valid")
- Update the event tags in the .mff file to contain a "c" for correct trials and an "x" for incorrect trials
- The c/x indicator will be prepended to the existing event tag (e.g., `Go++` will become `cGo++`)
- Export an updated version of the .mff file with accuracy information to the specified location ("pathSub")
```
for row = 1:height(mergePaths)
    % Specify paths
    rawFolder = mergePaths{row, "path_rawPaths"};
    subFolder = mergePaths{row, "path_subPaths"};
    % Read in EEG data
    EEGraw = pop_mffimport(char(rawFolder), 'code');
    % Create table from "event" field of raw data
    EEGevent = struct2table(EEGraw.event);
    % Check for the existence of necessary variables
    checkVars = strcmp(EEGevent.Properties.VariableNames, 'mffkey_cel');
    % Skip files without necessary variables
    if max(checkVars) == 0
        continue
    end
    % Create table without practice/training trials
    keepRows = strcmp(EEGevent.mffkey_cel, '4');
    EEGsub = EEGevent(keepRows, :);
    % Check for the existence of usable rows
    checkRows = max(keepRows);
    % Skip files with no usable rows
    if checkRows == 0
        continue
    end
    % Get response info at trial level
    EEGresp = table(EEGsub.mffkey_obs, EEGsub.mffkey_eval, EEGsub.mffkey_rtim);
    EEGresp = rmmissing(EEGresp);
    EEGresp = renamevars(EEGresp, ["Var1", "Var2", "Var3"], ["Trial", "Eval", "RTime"]);
    % Get condition info at trial level
    EEGconds = table(EEGsub.mffkey_obs, EEGsub.type);
    EEGconds = renamevars(EEGconds, ["Var1", "Var2"], ["Trial", "Cond"]);
    keepConds = strcmp(EEGconds.Cond, 'Go++') | strcmp(EEGconds.Cond, 'NG++');
    EEGcond = EEGconds(keepConds, :);
    % Merge datasets
    EEGtrials = join(EEGcond, EEGresp);
    EEGtrials.RTime = cellfun(@str2num, EEGtrials.RTime);
    % Evaluate trials for correctness of response
    correct = (strcmp(EEGtrials.Cond, 'Go++') & strcmp(EEGtrials.Eval, '1') & EEGtrials.RTime > 200) | (strcmp(EEGtrials.Cond, 'NG++') & strcmp(EEGtrials.Eval, '0'));
    EEGtrials.Acc = correct;
    % Create new code tags including accuracy information
    EEGtrials.newCode(EEGtrials.Acc & strcmp(EEGtrials.Cond, 'Go++')) = {'cGo++'};
    EEGtrials.newCode(~EEGtrials.Acc & strcmp(EEGtrials.Cond, 'Go++')) = {'xGo++'};
    EEGtrials.newCode(EEGtrials.Acc & strcmp(EEGtrials.Cond, 'NG++')) = {'cNG++'};
    EEGtrials.newCode(~EEGtrials.Acc & strcmp(EEGtrials.Cond, 'NG++')) = {'xNG++'};
    % Subset information for merge
    EEGmerge = EEGtrials(:, {'Trial', 'Cond', 'newCode'});
    % Prep key in original data
    EEGevent.key = strcat(EEGevent.mffkey_obs, EEGevent.type);
    % Prep key in merge data
    EEGmerge.key = strcat(EEGmerge.Trial, EEGmerge.Cond);
    EEGmerge = EEGmerge(:, {'key', 'newCode'});
    % Merge new codes with event table
    EEGnew = outerjoin(EEGevent, EEGmerge);
    % Replace codes where a new code is needed
    EEGnew.code(~strcmp(EEGnew.newCode, '')) = EEGnew.newCode(~strcmp(EEGnew.newCode, ''));
    EEGnew.type(~strcmp(EEGnew.newCode, '')) = EEGnew.newCode(~strcmp(EEGnew.newCode, ''));
    % Drop the merge helper variables, keeping only the original event fields
    EEGnew = table2struct(EEGnew(:, 1:28));
    % Replace event table in original struct
    EEGraw.event = EEGnew;
    % Export updated file
    pop_mffexport(EEGraw, char(subFolder));
end
```
1. Run the HAPPE Pipeline
- This section is designed to rely on user input.
- When the prior steps have finished, a message will appear in the console prompting you to enter the filepath to the location of the HAPPE pre-processing file you wish to run
- Once the path is entered, MATLAB will run the file
```
% Set path to HAPPE pre-processing script
happeRun = input('Enter the full path to the HAPPE pre-processing file:\n> ','s') ;
% Call and run HAPPE pre-processing script
run(happeRun);
```
1. Enter HAPPE inputs
- See the above section for HAPPE user inputs for FishSharks
At this point, no more user interaction is required for the script to do its job.
The HAPPE pipeline will run, and the remaining MATLAB code in the script file will evaluate the files outputted by HAPPE and move them to the appropriate locations based on this evaluation.
See [General Post-Happe Steps](#generalPost) for examples of the MATLAB code and an explanation of what it does.
## Stop-Signal (Pre-HAPPE)
## General Post-HAPPE Steps {#generalPost}
1. Remove files that don't have any output data from the dataset that will be used to assess file quality
- This step is important because "empty" files don't play nicely with the code used to evaluate files that have some data in them (even if the data do not meet threshold)
- This code relies on HAPPE's quality data that remains in the MATLAB environment after the pipeline has finished.
```
% Create a list of files that received some kind of error message
noTags = any(strcmp(dataQC, 'NO_TAGS'), 2);
allRej = any(strcmp(dataQC, 'ALL_SEG_REJ'), 2);
errRows = any(strcmp(dataQC, 'ERROR'), 2); % named errRows to avoid shadowing MATLAB's built-in error function
loadFail = any(strcmp(dataQC, 'LOAD_FAIL'), 2);
% Combine filenames with quality data (for some reason, they are not automatically connected by HAPPE)
dataQCNew = [FileNames', dataQC];
% Remove all files in the above lists (those receiving errors) from the quality data
dataQCNew(noTags | allRej | errRows | loadFail, :) = [];
% Create list of variable names for quality data
dataQCnamesNew = ["File", dataQCnames];
% Save the data as a table for ease of use in subsequent steps
qcTable = cell2table(dataQCNew, 'VariableNames', dataQCnamesNew);
% Subset to ID and threshold information
% (run this line only after the "Test" and "idWave" variables are added in the next two steps)
testInfo = qcTable(:, ["idWave", "File", "Test"]);
```
1. Identify the files that meet (or don't meet) the threshold
- Note that the segment-count column names below use the Oddball onset tags (`tgt+`, `frq+`); adjust them to match the onset tags for other tasks
```
% Create a list of files (i.e., rows in the table) that meet threshold
thresholdTest = qcTable.("Number_tgt+_Segs_Post-Seg_Rej") >= trialCutoff & qcTable.("Number_frq+_Segs_Post-Seg_Rej") >= trialCutoff;
% Add a variable to the quality data table that indicates whether or not the file meets threshold
qcTable.Test = thresholdTest;
```
1. Add an identifying variable to be used for data joining down the line
- This variable is generated using its expected location in the file name (i.e., how many text characters "in" it is)
```
% Generate IDs based on File variable
idWaveQC = extractBefore(qcTable.File, 8);
% Append ID variable to quality data
qcTable.idWave = idWaveQC;
```
1. Generate a list of files outputted by HAPPE
```
% Generate path for HAPPE pre-processing output (using the HAPPE environment variable from user's input of location of raw data for processing)
inputPath = strcat(srcDir, "\5 - processed");
% Read in list of files outputted from HAPPE
preprocessingOutput = dir(inputPath);
% Remove "empty" rows
preprocessingOutput = preprocessingOutput(~ismember({preprocessingOutput.name}, {'.', '..'}));
% Save data as a table for ease of later use
preprocessingOutput = struct2table(preprocessingOutput);
% Subset to file info
fileInfo = preprocessingOutput(:, ["name", "folder"]);
```
1. Select only desired files to be moved/copied
- Currently, we don't do anything with the "Individual Trial" files outputted by HAPPE.
These files are quite large and take a long time to move, so it is more efficient to just remove them from the data and not worry about moving them anywhere.
```
% Subset to desired files (AveOverTrials)
fileSubset = fileInfo(contains(fileInfo.name, "AveOverTrials"), :);
```
1. Add condition, ID, and threshold-related variables to the file data
```
% Generate list of IDs based on file name variable
idWaveFS = extractBefore(fileSubset.name, 8);
% Add ID list to file data
fileSubset.idWave = idWaveFS;
% Generate list of files belonging to each condition based on file name variable
target = contains(fileSubset.name, "tgt+");
frequent = contains(fileSubset.name, "frq+");
% Create empty variable for condition
fileSubset.cond = cell(size(fileSubset, 1), 1);
% Fill in condition variable based on the lists generated above
fileSubset.cond(target) = {'Target'};
fileSubset.cond(frequent) = {'Frequent'};
fileSubset.cond(~target & ~frequent) = {'All'};
% Join threshold test information
fileTest = join(fileSubset, testInfo);
```
1. Prepare data table with information about files that met the threshold
- The data generated here are preparing to copy the .txt files outputted by HAPPE into a folder containing all files that are suitable for PCA
```
% Create a separate table for only files that meet threshold
movingInfo = fileTest(fileTest.Test, :);
% Create empty columns for filepath variables
movingInfo.destination = cell(size(movingInfo, 1), 1);
movingInfo.origin = cell(size(movingInfo, 1), 1);
movingInfo.processedTo = cell(size(movingInfo, 1), 1);
movingInfo.processedFrom = cell(size(movingInfo, 1), 1);
% Generate file paths based on condition
movingInfo.destination = strcat({passPath}, "\", movingInfo.cond, "\", movingInfo.name);
movingInfo.origin = strcat(movingInfo.folder, "\", movingInfo.name);
movingInfo.processedTo = strcat({allPath}, "\", movingInfo.name);
movingInfo.processedFrom = strcat(movingInfo.folder, "\", movingInfo.name);
```
1. Prepare data table with information about files that do NOT meet the threshold
- The data generated here are preparing to copy .mff files from the location of the raw files into a folder indicating the need for manual processing
```
% Create a separate table for only files that did not meet threshold
failFiles = fileTest(~fileTest.Test, ["File", "folder", "name"]);
% Create empty columns for filepath variables
failFiles.destination = cell(size(failFiles, 1), 1);
failFiles.origin = cell(size(failFiles, 1), 1);
failFiles.processedTo = cell(size(failFiles, 1), 1);
failFiles.processedFrom = cell(size(failFiles, 1), 1);
% Generate filepaths based on ID and task
failFiles.destination = strcat({failPath}, "\", failFiles.File);
failFiles.origin = strcat({srcDir}, "\", failFiles.File);
failFiles.processedFrom = strcat(failFiles.folder, "\", failFiles.name);
failFiles.processedTo = strcat({allPath}, "\", failFiles.name);
```
1. Generate environment variables that correspond to the column index of relevant variables for file sorting
- Note that the very last line of this code defines the variable(s) to exclude from the HAPPE-outputted files.
This variable must be stripped from the data before saving them, because the presence of the extra variable makes the file incompatible with EP Toolkit's PCA process.
```
% Define column locations for each filepath variable
% For files that meet threshold:
toCol = find(strcmp(movingInfo.Properties.VariableNames, "destination"));
fromCol = find(strcmp(movingInfo.Properties.VariableNames, "origin"));
procColto = find(strcmp(movingInfo.Properties.VariableNames, "processedTo"));
procColfrom = find(strcmp(movingInfo.Properties.VariableNames, "processedFrom"));
% For files that do not meet threshold
rawCol = find(strcmp(failFiles.Properties.VariableNames, "origin"));
manCol = find(strcmp(failFiles.Properties.VariableNames, "destination"));
failProcColto = find(strcmp(failFiles.Properties.VariableNames, "processedTo"));
failProcColFrom = find(strcmp(failFiles.Properties.VariableNames, "processedFrom"));
% Define variable to exclude
extraVar = 'Time';
```
1. Use a loop to process all files that met threshold
- For each row in the "movingInfo" dataset, the loop will:
- Identify the origin and destination paths
- Read in the HAPPE output file
- Remove the extra variable
- Save the "cleaned" data in the appropriate folder (without variable names, as required by EP Toolkit)
```
for row = 1:height(movingInfo)
% Specify path info
pathFrom = movingInfo{row, fromCol};
pathTo = movingInfo{row, toCol};
% Read in the data
rawTable = readtable(pathFrom);
% Remove extra column (Time)
cleanTable = rawTable{:, ~strcmp(rawTable.Properties.VariableNames, extraVar)};
% Save without headers
writematrix(cleanTable, pathTo, 'Delimiter', '\t')
end
```
1. Use a loop to copy raw (.mff) files into a location that stores files requiring manual processing
```
for row = 1:height(failFiles)
% Specify path info
pathFrom = failFiles{row, rawCol};
pathTo = failFiles{row, manCol};
% Copy file
copyfile(pathFrom, pathTo)
end
```
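Note that .mff files are directory packages on many systems, so `copyfile` can error if a source is missing or locked. A defensive variant of the loop above (a sketch, not part of the standard pipeline) warns and skips rather than halting:

```
% Defensive variant: warn and skip when a source .mff cannot be found
for row = 1:height(failFiles)
    % Specify path info
    pathFrom = failFiles{row, rawCol};
    pathTo = failFiles{row, manCol};
    % .mff "files" are folders on some systems, so check both
    if isfolder(pathFrom) || isfile(pathFrom)
        copyfile(pathFrom, pathTo)
    else
        warning('Missing source, skipping: %s', pathFrom)
    end
end
```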
1. Use a set of loops to copy all HAPPE output files into a folder intended to house all output (whether threshold is met or not)
```
for row = 1:height(movingInfo)
% Specify path info
pathFrom = movingInfo{row, procColfrom};
pathTo = movingInfo{row, procColto};
% Copy file
copyfile(pathFrom, pathTo);
end
for row = 1:height(failFiles)
% Specify path info
pathFrom = failFiles{row, failProcColFrom};
pathTo = failFiles{row, failProcColto};
% Copy file
copyfile(pathFrom, pathTo);
end
```
1. Generate a .txt file listing all processed .mff files
- This step generates a list of all raw files (e.g., `1111_22_oddball.mff`) and saves the list to the specified location (`summPath`)
- The file will have the current date and time appended to its name so that it is distinguishable from past logs
- The list of processed files is generated from a variable (`FileNames`) that the HAPPE pipeline creates, which lists all files input to the pipeline
```
% Create a table from HAPPE FileNames cell array
processedList = cell2table(FileNames(:));
% Rename file variable from default
processedList = renamevars(processedList, {'Var1'}, {'File'});
% Save current date as a string variable
today = string(date());
% Save time as a string variable, replacing ":" with "_" so that file can be written
time = strrep(datestr(now, 'HH:MM:SS:FFF'), ':', "_");
% Generate file name to include current date and time
listFile = strcat("\oddballProcessed_", today, "_", time);
% Generate full path including file name
summPathFull = strcat(summPath, listFile);
% Write table to specified location
writetable(processedList, summPathFull);
```
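As an aside, newer MATLAB releases discourage `datestr`/`now` in favor of `datetime`. A minimal equivalent sketch of the date/time steps above (assuming a release with string support, R2016b or later):

```
% Alternative timestamp built with datetime (datestr/now are discouraged in newer MATLAB)
% The format contains no ":" characters, so it is safe to use in a file name
stamp = string(datetime("now", "Format", "yyyy-MM-dd_HH_mm_ss_SSS"));
listFile = strcat("\oddballProcessed_", stamp);
summPathFull = strcat(summPath, listFile);
```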
# ERP PCA (EP) Toolkit
## Reading Text Files into EP Toolkit
1. Open MATLAB with "Run as Administrator"
1. Open ERP PCA Toolkit in MATLAB
- Type `ep` in command prompt
1. Click `Read` to import files
1. Use the following options
- Format = `text (.txt)`
- Type = `average`
- Mont = `Adult Hydrocel 128-channel 1.0`
1. Select `Single File Mode`.
- Single file mode uses the filename to assign the task condition and participant ID for each file.
Thus, **it is critical to use a standard naming convention to name the files**.
- Note: `R` can rename files in batches
- For example, an oddball file could be named `frq_1001_36` and `tgt_1001_36`
- A FishSharks file, meanwhile, could be named `cgo_1001_36` and `cng_1001_36`
1. In the `Single File Mode` menu, use the `Subject` field to denote which characters in the filename will determine the participant ID.
- For the above example `1:7` would correspond to `1001_36`
1. Next, in the `Single File Mode` menu, use the `Cell` field to denote which characters in the filename will determine the task condition.
- For the below example, `41:43` would correspond to `frq` or `tgt`.
- For FishSharks files, it might be `47:49` that corresponds to `cgo` or `cng`.
- ![image](images/readTextFile2024.png)
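To double-check which character positions to enter, you can preview the substring in the MATLAB command window before reading. Note that the indices in the screenshot above count characters in the full path EP Toolkit displays, not just the bare filename; the example below uses a bare, hypothetical filename to show the mechanics:

```
% Quick check of character positions in a (hypothetical) filename
fname = 'tgt_1001_36';
fname(1:3)   % condition characters, e.g., 'tgt'
fname(5:11)  % participant ID characters, e.g., '1001_36'
```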
1. Select `Read` and select the `2_9AverageNet128` when prompted
1. The new file will have the participant ID and will combine the conditions for each participant.
- Subject Names:
- ![image](images/subjectCombinedNames.png)
- Task Conditions
- ![image](images/conditionCombinedNames.png)
## Update File with Experiment Information
1. Go to `Main` and click `Edit`
1. Click on the file you imported
1. In `Overview`, add the following information:
- Experiment Name: `Oddball`, `FishShark`, or `StopSignal`
- Reference Type: change to `average reference`
- Prestimulus period: change to `200`
- Nominal sampling rate: change to `1000`
1. Click `Done`
1. Go to `Main` and click `Save`
- Save the combined file as an `.ept` file in the `4-EPT Averages` folder using the following naming convention: "task_condition_age".
For example, if you were working on the target condition of oddball for all age groups, you would save the file as `ob_tgt_all`
## Generating Grand Average Waveforms
1. If the EPT average file (e.g., `ob_tgt_all`) is not already in the working environment, read it in using the steps below
- Go to `Read`
- Format = `EP (.ept)`
- Click `Read`
- Navigate to the `4 - EPT Averages` folder and select desired file(s)
- Click `Open` in the browser window to read the file(s)
- Click `Main` to return to main menu
1. Select `Edit`
1. When the editor window opens, navigate to the `Subjects` pane
1. Select `All` from among the many options along the lefthand pane of the editor
- This will select all of the subjects included in the file and assign them a weight of 1
1. Confirm that all subjects have been selected (look for a checked box in the subject row) and that all weights have been set to 1
1. Click `Add`
1. A new "subject" should have now been added to the bottom of the subjects list
- This subject is called `gave` and represents the grand average across all subjects
1. Click `Done` to exit the editor window, then `Main` to return to the EP Toolkit home
## Temporal PCA
1. Go to `Main` and click `PCA`
1. Input the following:
- Mode: `temporal`
- Rotation: `promax`
- Factors: `0`
- Title: tPCA_experimentname (example: `tPCA_ob_tgt_all`)
1. Click the appropriate file (e.g., `ob_tgt_all`)
1. Determine how many factors to retain using the scree plot (keep the number of factors where the blue line is above the red line)
1. Determine the percent variance accounted for by the number of factors retained by changing the "minimum % age accounted for criterion".
Record the number of factors retained and % variance accounted for by that number of factors.
1. Re-run the temporal PCA using the above inputs, **but change the number of factors to the number of factors retained from the above step**
1. Return to `Main` and click `Save`. Save the tPCA file in the `5-PCA` folder
## Spatial PCA
1. Go to `Main` and click `PCA`
1. Change the PCA type, using the following inputs:
- Mode: `spatial`
- Rotation: `infomax`
- Factors: `0`
- Title: sPCA_experimentname (e.g., `sPCA_ob_tgt_all`)
1. Click the appropriate file (e.g., `ob_tgt_all`)
1. Determine how many factors to retain using the scree plot (keep the number of factors where the blue line is above the red line)
1. Determine the percent variance accounted for by the number of factors retained by changing the "minimum % age accounted for criterion".
Record the number of factors retained and % variance accounted for by that number of factors.
1. Re-run the spatial PCA using the above inputs, **but change the number of factors to the number of factors retained from the above step**
1. Return to `Main` and click `Save`.
Save the sPCA file in the `5-PCA` folder
## Temporospatial PCA
1. Go to `Main` and click `PCA`
1. Change the PCA type, using the following inputs:
- Mode: `spatial`
- Rotation: `infomax`
- Factors: `0`
- Title: tsPCA_experimentname (e.g., `tsPCA_ob_tgt_all`)
1. Click the `tPCA` file (created in the previous step)
1. Determine how many factors to retain using the scree plot (keep the number of factors where the blue line is above the red line)
1. Determine the percent variance accounted for by the number of factors retained by changing the "minimum % age accounted for criterion".
Record the number of factors retained and % variance accounted for by that number of factors.
1. Re-run the spatial PCA using the above inputs, **but change the number of factors to the number of factors retained from the above step**
1. Return to `Main` and click `Save`.
Save the tsPCA file in the `5-PCA` folder.
## PCA Component Selection
Here, the goal is to select the PCA component that corresponds to the ERP component of interest, and an extraction method that supports the intended interpretation of that component.
1. Go to `View` to begin the process of selecting the PCA component that corresponds to the ERP of interest.
- Iteratively select and view each temporospatial PCA component to identify the PCA component ("factor") that corresponds to the ERP of interest (e.g., N2 or P3).
Select the temporospatial PCA component that corresponds to the ERP of interest based on the timing, spatial location, morphology, and (as relevant) any condition- or age-related differences of the component based on prior work.
1. Generate tsPCA components.
Go to `Window` and input the following:
- select the tsPCA file
- select among `mean`, `maxPeak`, or other options.
According to Joe Dien, when using amplitudes from PCA components, it does not matter which option you select: all of the methods yield comparable *p*-values when dealing with PCA components (it would be good to verify this).
So, select a method that makes sense for the story you want to tell.
The methods will yield different results when dealing with the raw waveforms.
- select `AutoPCA` or `Window` to select channels.
If the peak amplitude is where you expect temporally and spatially, then use the autoPCA function, and if it is not, then window to where you expect it to be.
This will allow you to report results that are more interpretable.
As Joe Dien described, the way that PCA data are stored internally in the toolkit are as factor scores (i.e., component scores).
When you extract amplitudes from a PCA component, you are extracting the factor scores multiplied by a constant (some scaling factor, representing the electrode where you extract it from).
Thus, according to Joe Dien, the *p*-values should be the same regardless of whether you use AutoPCA, or extract from a single electrode or multiple electrodes (it would be good to verify this).
What is changing is merely the scaling factor (i.e., the constant that is multiplied by all factor scores).
When you select multiple electrodes, it is computing the PCA-estimated amplitude at each electrode and performing a simple average across those electrodes.
The AutoPCA extracts the PCA-estimated amplitude at the peak channel and the peak timepoint.
If the waveform is negative-going at the peak channel, and you are interested in the positive-going dipole, you would select the peak positive channel to identify the PCA-estimated amplitude of the positive-going waveform on that PCA component.
Nevertheless, even though you are selecting the PCA-estimated amplitude for a given channel at a given timepoint, these are now "virtual channels"; the estimates include the contributions of all channels and all timepoints <u>to the extent that</u> they load onto the PCA component of interest.
Thus, even if you select to window a PCA component from only 1 channel at 1 timepoint, it is using ALL channels and timepoints in the estimation—this is not the case if windowing the raw ERP waveforms.
- Save the files generated from the AutoPCA in the `6-PCA Components` folder using the following naming convention: "task_condition_age" (e.g.,`ob_tgt_all`).
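Joe Dien's scaling point can be illustrated with a toy example in R (hypothetical factor scores, not part of the pipeline): multiplying scores by a positive constant leaves the *t*-statistic, and therefore the *p*-value, unchanged.

```
# Toy demonstration: p-values are invariant to a positive scaling constant
set.seed(1)
scores <- rnorm(30, mean = 0.5)   # hypothetical factor scores
scaled <- scores * 2.7            # same scores under a different electrode scaling
# t-statistics are identical up to floating-point error, so p-values match
all.equal(t.test(scores)$p.value, t.test(scaled)$p.value)
```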
1. To view all of the tsPCA components, click `View` and input the following
- select the appropriate file (e.g., `ob_tgt_all`)
- select `gave`
- select `none`
- click `Waves`
1. It is good practice to check to make sure that components are comparable across different age ranges
- You can check this in one of two ways:
- Visually examine grand averages between age ranges
- Apply the PCA from one age group and apply it to another age group and examine whether the results hold up using cross-validation in EPToolkit
## Identifying Electrodes that Load Onto PCA Component
1. Go to `Window`
1. Select the PCA file of interest (e.g., `tsPCA-ob_tgt_all`)
1. Click the `Channels` button (about halfway down the `Window` window)
1. Click `Factor`
- From the dropdown, select the PCA file of interest (e.g., `tsPCA_ob_tgt_all`)
- Enter the threshold in the space below (e.g., 0.5)
- This sets the minimum factor loading value for an electrode to be "included" in the component-related cluster
- Depending on whether you are interested in positive or negative factor loadings, select the appropriate sign (`+`, `-`, or `+/-`)
- A popup window with PCA factors will appear.
Select the component(s) you wish to identify spatially (e.g., `TF01SF01`)
1. When prompted, give the electrode cluster a name
1. The channels that change color are those which load onto the selected component at or above the threshold value
## Exporting Grand Average Data
If you are only interested in the grand average data and not individual subjects, these instructions will allow you to export a .txt file containing only the grand average data.
1. From EP Toolkit home (`Main` screen), select `Edit`
1. Select the .ept averages file (e.g., `ob_tgt_all`) that contains a "subject" representing the grand average
- If the file does NOT contain a grand average subject, follow the steps in the above section to generate it
1. Rename the file
- For example, `ob_tgt_all` could be renamed to `ob_tgt_gav`
- Renaming the file will prompt EP Toolkit to ask whether you want to generate a new file with this new name, or overwrite the existing datafile once your changes are complete
1. Select `Subjects` from the options at the top of the editor window
1. Click `All` from among the options on the lefthand side of the `Subjects` window
- This will select all of the subjects
1. Scroll to the bottom of the list of subjects and **deselect** the subject labeled `grand average`
- Essentially, the goal here is to create a dataset that includes ONLY the grand average information, rather than each individual subject
1. Once everything EXCEPT for the grand average subject is selected, click `Delete` on the lefthand side of the editor window
- This will remove the individual subject data from the dataset and leave the grand average information
1. Click `Done`
1. If you renamed the datafile, EP Toolkit should generate a popup message asking whether you would like to rename your dataset OR generate a new dataset using the new name (leaving the original dataset untouched).
From the options presented, click `New` to generate a new file and preserve the original
1. The editor window should close, returning you to the EP Toolkit pane that asks you to select a dataset to edit.
From here, click `Main` to return to EP Toolkit "home"
1. Once in the main window, click `Save`
- Set the save format to `Text (.txt)`
- Click the grand average data (e.g., `ob_tgt_gav`) to save it
- A file explorer window should open, prompting you to select the appropriate save location and give your file a name
# Visualizations in R
## R Code for Grand Average Waveform Plot
1. Read in the grand average waveform data exported from EP Toolkit.
- We currently process the conditions within a given task separately, so each condition should have its own grand average file.
```
obTgt <- read.table("V:/SRS-ERP-Oddball/Hard/All/4 - EPT Averages/2024-11-05/gave/ob_tgt_gav.txt")
obFrq <- read.table("V:/SRS-ERP-Oddball/Hard/All/4 - EPT Averages/2024-11-05/gave/ob_frq_gav.txt")
```
1. Create a subset of data that only includes those electrodes that are part of the clusters identified in EP Toolkit.
- The grand average data do not have row or column labels, but the columns represent the EEG net channels in numerical order (1-129).
We can therefore use column index values to select the desired electrodes, so the list containing the channel numbers should include ONLY numbers.
The code that subsets these channels from the full dataset relies on numerical input.
```
# Set electrode clusters
obElectrodes <- c(58, 59, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 81, 82, 83, 84, 89, 80, 91, 95, 96, 101)
# Subset to desired electrodes
obTgt_sub <- obTgt[, obElectrodes]
obFrq_sub <- obFrq[, obElectrodes]
```
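Because the subsetting relies entirely on numeric column positions, a quick guard against typos in the electrode list can save debugging later (a suggested check, not part of the original workflow):

```
# Guard: every electrode index must be a valid column position (1-129)
stopifnot(all(obElectrodes >= 1), all(obElectrodes <= ncol(obTgt)))
```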
1. Compute averages and create labels for conditions
- Once the data have been subsetted down to include only the electrode channels of interest, all that remains is to compute the average amplitude across all of those channels
- Adding a condition label will allow us to combine the two condition-specific datasets into one that can be used for visualizations
- For ease of plotting, name the conditions the way that you would like them to appear on the figure (i.e., "Target" instead of "tgt")
```
# Compute averages
obTgt_sub$amplitude <- rowMeans(obTgt_sub)
obFrq_sub$amplitude <- rowMeans(obFrq_sub)
# Remove raw values and add condition labels
obTgt_amps <- obTgt_sub %>% select(amplitude) %>% mutate(condition = "Target")
obFrq_amps <- obFrq_sub %>% select(amplitude) %>% mutate(condition = "Frequent")
```
1. Add timing-related information to the data
- EP Toolkit exports ERP data without timestamps, but arranges it in order of timing
- We can create a template with the appropriate timestamps and append this column to the amplitude data
```
# Create template
erpTemplate <- data.frame(
time = -199:1000
)
# Merge template with amplitude data
obTgtTimes <- cbind(erpTemplate, obTgt_amps)
obFrqTimes <- cbind(erpTemplate, obFrq_amps)
```
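Before binding, it can be worth confirming that the template length matches the exported data; a mismatch would indicate a different epoch length or sampling rate than the 200 ms baseline and 1000 ms post-stimulus window at 1000 Hz assumed here:

```
# Sanity check: -199:1000 yields 1200 rows, which must match the exported data
stopifnot(nrow(obTgt_amps) == nrow(erpTemplate),
          nrow(obFrq_amps) == nrow(erpTemplate))
```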
1. Combine all conditions into a single data object to be used for plotting
```
oddball <- rbind(obTgtTimes, obFrqTimes) %>%
select(time, condition, amplitude) %>%
arrange(time)
```
1. Generate the waveform figures
```
ggplot(
data = oddball,
aes(
x = time,
y = amplitude,
group = condition,
color = condition
)
) +
geom_line(linewidth = 1.5) +
scale_x_continuous(
name = "Time Relative to Stimulus Onset (ms)",
limits = c(-200, 1000),
breaks = seq(from = -200, to = 1000, by = 200)) +
scale_y_continuous(
name = "Voltage (microvolts)",
limits = c(-4, 10),
breaks = seq(from = -10, to = 15, by = 2)) +
scale_color_viridis_d() +
theme_classic(base_size = 18) +
theme(
legend.position = c(.7, .9),
legend.title = element_blank())
```
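To write the figure to disk, `ggsave()` saves the most recently displayed plot; the filename, dimensions, and resolution below are placeholders to adjust as needed:

```
# Save the most recent plot; adjust path, size, and resolution as needed
ggsave("oddballWaveforms.png", width = 8, height = 6, dpi = 300)
```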
# Appendix
## Troubleshooting
1. Running out of space on EP Toolkit?
You can navigate to your folder (maybe under Documents/MATLAB/EPwork) and delete everything except for EPprefs to refresh your workspace.
NOTE: This will delete everything stored in EP Toolkit, so remember to back up files that you need to save.
1. If you get an error or a warning from Git when trying to commit and/or push ERP files up to the repo, you may need to initialize Git LFS.
The instructions for initializing Git LFS can be found [here](https://devpsylab.github.io/DataAnalysis/git.html#gitLfs).
- The filetypes that commonly cause such errors in our current ERP processing pipeline are:
- `.txt` files
- `.ept` files
- `.set` files
## To-do
- Better describe the missingness for files
- We need a systematic way to identify and process missingness
- Find the best way to describe and report patterns of missingness
- Go through the maxmem edits on the clean_rawData question.
We want a standardized value on the machines
- Look at the warning messages for the automatic script updates
- automatic cleaning of files problems
- Integrate ERPLAB with our existing EEGLab Functions including:
- Adding an event list:
- Currently, some code for this is updated in the script on the lab drive
- Documentation is [here](https://github.com/lucklab/erplab/wiki/Creating-an-EventList:-ERPLAB-Functions:-Tutorial)
- Figure out how to average epochs and export to the EP Toolkit
- Evaluate the semi-automated pipelines from:
- [Debnath et al. (2020)](https://onlinelibrary.wiley.com/doi/full/10.1111/psyp.13580)
- [Desjardins et al. (2021)](https://www.sciencedirect.com/science/article/pii/S0165027020303848)
- [Flo et al. (2022)](https://www.sciencedirect.com/science/article/pii/S1878929322000214)
- [Gabard-Durnam et al. (2018)](https://www.frontiersin.org/articles/10.3389/fnins.2018.00097/full)