Decoding Natural Images from EEG for Object Recognition [ICLR 2024]
Core idea: basic contrastive learning between images and EEG, with interesting analysis from a neuroscience perspective! 🤣
p.s. We trained the base framework (NICE), the variant with self-attention (NICE-SA), and the variant with graph attention (NICE-GA) five times each and report the averaged results in Tables 2 & 3.
- Propose a self-supervised framework for EEG-based object recognition with contrastive learning, achieving remarkable zero-shot performance on large and rich datasets.
- Demonstrate the feasibility of recovering image information from EEG signals by resolving brain activity from temporal, spatial, spectral, and semantic aspects.
- Apply two plug-and-play modules to capture spatial correlations among EEG channels, offering evidence that the model discerns the spatial dynamics of object recognition.
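The contrastive framework above aligns EEG embeddings with image embeddings using a CLIP-style symmetric InfoNCE objective. As a rough illustration only (not the repo's implementation; the dimensions, temperature, and function names here are our own), in NumPy:

```python
import numpy as np

def info_nce_loss(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalized EEG and image embeddings.

    eeg_emb, img_emb: (batch, dim) arrays; row i of each forms a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)

    logits = eeg @ img.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))      # matched pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # symmetric loss: EEG -> image and image -> EEG
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

e = np.eye(4)                                             # toy embeddings
loss_matched = info_nce_loss(e, e)                        # aligned pairs -> near-zero loss
loss_shuffled = info_nce_loss(e, np.roll(e, 1, axis=0))   # misaligned pairs -> large loss
```

Minimizing this loss pulls each EEG embedding toward its paired image embedding while pushing it away from the other images in the batch, which is what enables the zero-shot matching at test time.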
Many thanks to the authors for sharing these great datasets!
- Things-EEG2
- Things-MEG (updating)
./preprocessing/
- raw data: `./Data/Things-EEG2/Raw_data/`
- preprocessed EEG data: `./Data/Things-EEG2/Preprocessed_data_250Hz/`
- pre-process the EEG data of each subject:
  - modify `preprocessing_utils.py` as you need:
    - choose channels
    - epoching
    - baseline correction
    - resample to 250 Hz
    - sort by condition
    - Multivariate Noise Normalization (z-score is also OK)
  - run `python preprocessing.py` for each subject
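The per-epoch steps above can be sketched roughly as follows (a NumPy/SciPy illustration, not the repo's `preprocessing_utils.py`; plain per-channel z-scoring stands in for multivariate noise normalization, which the list above notes is also acceptable, and the sampling rates and baseline length are illustrative):

```python
import numpy as np
from scipy.signal import resample

def preprocess_epoch(epoch, orig_sfreq=1000, target_sfreq=250, baseline_samples=200):
    """Baseline-correct, resample, and z-score one EEG epoch.

    epoch: (n_channels, n_samples) array sampled at orig_sfreq.
    baseline_samples: number of pre-stimulus samples used as the baseline.
    """
    # baseline correction: subtract the pre-stimulus mean per channel
    baseline = epoch[:, :baseline_samples].mean(axis=1, keepdims=True)
    epoch = epoch - baseline

    # resample along the time axis to the target rate
    n_out = int(epoch.shape[1] * target_sfreq / orig_sfreq)
    epoch = resample(epoch, n_out, axis=1)

    # per-channel z-score (a simple stand-in for multivariate noise normalization)
    epoch = (epoch - epoch.mean(axis=1, keepdims=True)) / epoch.std(axis=1, keepdims=True)
    return epoch

rng = np.random.default_rng(0)
x = rng.standard_normal((63, 1000))   # e.g. 63 channels, 1 s at 1000 Hz
y = preprocess_epoch(x)               # -> (63, 250), zero mean, unit variance per channel
```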
- get the center images of each test condition (for testing, to contrast with EEG features)
- get images from the original Things dataset, but discard the images used in the EEG test sessions

Now we release the image features extracted with the CLIP model in `./dnn_feature/`.
./dnn_feature_extraction/
- raw images: `./Data/Things-EEG2/Image_set/image_set/`
- preprocessed EEG data: `./Data/Things-EEG2/Preprocessed_data/`
- features of each image: `./Data/Things-EEG2/DNN_feature_maps/full_feature_maps/model/pretrained-True/`
- packaged features: `./Data/Things-EEG2/DNN_feature_maps/pca_feature_maps/model/pretrained-True/`
- features of condition centers: `./Data/Things-EEG2/Image_set/`
- obtain feature maps with each pre-trained model (clip, vit, resnet, ...) via `obtain_feature_maps_xxx.py`
- package all the feature maps into one .npy file with `feature_maps_xxx.py`
- obtain feature maps of the center images with `center_fea_xxx.py`
  - save the feature maps of each center image into `center_all_image_xxx.npy`
  - save the feature maps of each condition into `center_xxx.npy` (used in training)
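For illustration, the per-condition centers (the contents of `center_xxx.npy`) could be obtained by averaging the packaged feature maps of all images belonging to each condition; a hypothetical NumPy sketch (function and variable names are ours, not the repo's):

```python
import numpy as np

def package_condition_centers(features, labels):
    """Average per-image features into one center vector per condition.

    features: (n_images, dim) feature maps from a pretrained model.
    labels:   (n_images,) condition index of each image.
    Returns a (n_conditions, dim) array of condition centers.
    """
    conditions = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in conditions])
    return centers

# toy example: two images of condition 0, one image of condition 1
feats = np.array([[1.0, 1.0], [3.0, 3.0], [0.0, 2.0]])
labs = np.array([0, 0, 1])
centers = package_condition_centers(feats, labs)   # -> [[2., 2.], [0., 2.]]
```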
./nice_stand.py
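At test time, zero-shot recognition contrasts each EEG embedding with the image-feature centers of the (unseen) test conditions. A minimal sketch of that matching step, using cosine similarity (the names and shapes below are ours, not the repo's API):

```python
import numpy as np

def zero_shot_top1(eeg_emb, center_emb, true_labels):
    """Top-1 zero-shot accuracy: assign each EEG embedding to the most
    cosine-similar condition center.

    eeg_emb:    (n_trials, dim) EEG embeddings from the trained encoder.
    center_emb: (n_conditions, dim) image-feature centers of the test conditions.
    true_labels: (n_trials,) ground-truth condition index per trial.
    """
    # L2-normalize so dot products are cosine similarities
    e = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    c = center_emb / np.linalg.norm(center_emb, axis=1, keepdims=True)
    pred = (e @ c.T).argmax(axis=1)   # index of the most similar center
    return (pred == true_labels).mean()

# toy example with two conditions in a 2-D embedding space
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
eeg = np.array([[0.9, 0.1], [0.2, 0.8], [1.0, 0.0]])
acc = zero_shot_top1(eeg, centers, np.array([0, 1, 0]))   # -> 1.0
```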
./visualization/
Hope this code is helpful. I would appreciate it if you cite us in your paper. 😊
@inproceedings{song2024decoding,
title = {Decoding {{Natural Images}} from {{EEG}} for {{Object Recognition}}},
author = {Song, Yonghao and Liu, Bingchuan and Li, Xiang and Shi, Nanlin and Wang, Yijun and Gao, Xiaorong},
booktitle = {International {{Conference}} on {{Learning Representations}}},
year = {2024},
}