Source code for research article:
"Differential auditory and visual phase-locking are observed during audio-visual benefit and silent lip-reading for speech perception"
by Máté Aller, Heidi Solberg Økland, Lucy J. MacGregor, Helen Blank, and Matthew H. Davis
Published in Journal of Neuroscience
Cite: Aller, M., Solberg Økland, H., MacGregor, L. J., Blank, H., & Davis, M. H. (2022). Differential auditory and visual phase-locking are observed during audio-visual benefit and silent lip-reading for speech perception. Journal of Neuroscience. https://doi.org/10.1523/JNEUROSCI.2476-21.2022
- Clone or download this repository
- Download each dependency into its own subfolder within audiovisual_speech_MEG/code/toolbox/
- Download the data folder from here into the project root folder (audiovisual_speech_MEG/)
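Once these steps are complete, the project folder should look roughly as follows (this layout is inferred from the paths referenced in the instructions below):
```
audiovisual_speech_MEG/
├── AVSM_setupdir.m
├── environment.yml
├── code/
│   ├── analysis/
│   └── toolbox/   (one subfolder per dependency)
├── data/          (downloaded data folder)
└── results/
```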
- Create the conda environment for Python by executing:
```bash
conda env create -f environment.yml
```
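The resulting conda environment is named avsm; it is activated with `conda activate avsm` in the environment setup step further below.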
- Navigate to the project root folder (audiovisual_speech_MEG/) and edit AVSM_setupdir.m, adding the path to the project root folder on your computer. Then execute AVSM_setupdir; this sets up the MATLAB environment.
- Execute the MATLAB scripts found in results/ to reproduce the behavioural and sensor-space MEG results.
- All analysis code can be found in audiovisual_speech_MEG/code/analysis/
- Set up the environment by executing the commands below (csh/tcsh syntax):
```csh
cd path/to/project/folder
conda activate avsm
setenv MESA_GL_VERSION_OVERRIDE 3.3
setenv PYTHONPATH `pwd`
```
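If you use a bash-style shell instead, the equivalent commands are:
```bash
cd path/to/project/folder
conda activate avsm
export MESA_GL_VERSION_OVERRIDE=3.3
export PYTHONPATH=$(pwd)
```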
- Update the project_dir variable in audiovisual_speech_MEG/code/analysis/megcoherence_utils.py so that it points to the project folder path on your system.
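For illustration, the edited line in megcoherence_utils.py might look like the following (the path is a placeholder; use the actual project root on your system):
```python
# audiovisual_speech_MEG/code/analysis/megcoherence_utils.py
project_dir = '/home/username/audiovisual_speech_MEG'  # placeholder; set to your project root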
- To reproduce the source-space MEG results, run the audiovisual_speech_MEG/results/megcoherence_anatomical_roi_analysis.ipynb notebook in Jupyter Notebook.
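For example, from the project root (with the avsm environment active):
```bash
jupyter notebook results/megcoherence_anatomical_roi_analysis.ipynb
```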
- The source analysis pipeline can be found at audiovisual_speech_MEG/code/analysis/megcoherence_source_analysis_pipeline.txt. To reproduce the source coherence maps, run the corresponding line of code from the text file in an IPython console.
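For example, start an IPython console from the project root with the ipython command (with the avsm environment active) and paste in the relevant line from the pipeline text file.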
All Python dependencies are listed in audiovisual_speech_MEG/environment.yml.
The MEG analysis code was written by Máté Aller. Preprocessing scripts (MEG, behavioural) were written by Heidi Solberg Økland.