This is a portal to the public datasets released by the Kamitani Lab at Kyoto University and ATR.
Questions and inquiries are welcome via the Issues page of this repository or at kamitanilab@gmail.com.
- Paper: Horikawa and Kamitani (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications.
  - Data
  - Code: GitHub
  - Questions and answers
- Paper: Shen, Horikawa, Majima, and Kamitani (2019) Deep image reconstruction from human brain activity. PLOS Computational Biology.
  - Data
  - Code: GitHub
  - Questions and answers
- Paper: Shen, Dwivedi, Majima, Horikawa, and Kamitani (2019) End-to-End Deep Image Reconstruction From Human Brain Activity. Frontiers in Computational Neuroscience.
  - Data
  - Trained networks: figshare
  - Code: GitHub
- Paper: Horikawa and Kamitani (2022) Attention modulates neural representation to render reconstructions according to subjective appearance. Communications Biology.
  - Data
  - Code: figshare
- Paper: Abdelhack and Kamitani (2018) Sharpening of Hierarchical Visual Feature Representations of Blurred Images. eNeuro.
  - Data
    - Raw fMRI data: OpenNeuro
    - Preprocessed fMRI and image features: Brainliner
  - Code: GitHub
- Paper: Abdelhack and Kamitani (2019) Conflicting Bottom-up and Top-down Signals during Misrecognition of Visual Object. bioRxiv.
  - Data: figshare
  - Code: GitHub
- Paper: Ho, Horikawa, Majima, and Kamitani (2022) Inter-individual deep image reconstruction. bioRxiv.
  - Code: GitHub
- Paper: Horikawa, Tamaki, Miyawaki, and Kamitani (2013) Neural Decoding of Visual Imagery During Sleep. Science.
  - Data: Brainliner
  - Code: GitHub
- Paper: Horikawa, Cowen, Keltner, and Kamitani (2020) The Neural Representation of Visually Evoked Emotion Is High-Dimensional, Categorical, and Distributed across Transmodal Brain Regions. iScience.
  - Data
  - Code: GitHub
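
Several of the preprocessed fMRI datasets listed above are distributed in the BData (HDF5) format used by the lab's bdpy package. The snippet below is a minimal loading sketch under that assumption; the file name (`Subject1.h5`) and the ROI/label keys (`ROI_VC`, `image_index`) are placeholders, not the actual keys of any particular dataset, so please check the documentation of the corresponding repository.

```python
# Minimal sketch: loading a bdpy-formatted (BData/HDF5) fMRI file.
# File name and selector keys below are placeholders for illustration.
import bdpy

bdata = bdpy.BData('Subject1.h5')      # hypothetical file name

# Select voxels belonging to a region of interest
# (the ROI key 'ROI_VC' is an assumption; see the dataset's README).
fmri = bdata.select('ROI_VC = 1')      # array of shape (n_samples, n_voxels)

# Metadata columns (e.g., stimulus labels) are selected the same way;
# 'image_index' is a hypothetical column name.
labels = bdata.select('image_index')

print(fmri.shape, labels.shape)
```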