Movies 3T tutorial
==================

This tutorial describes how to perform voxelwise modeling on data from a visual
imaging experiment.
| 6 | + |
| 7 | +**Data set:** |
| 8 | +This tutorial is based on publicly available data |
| 9 | +`published on CRCNS <TBD>`_ [4]_. |
| 10 | +The data is briefly described in the dataset `description PDF <TBD>`_, |
| 11 | +and in more details in the original publication [1]_. |
| 12 | +If you publish work using this data set, please cite the original |
| 13 | +publication [1]_, and the CRCNS data set [4]_. |

**Models:**
This tutorial implements different voxelwise models:

- a ridge model with wordnet semantic features, as described in [1]_.
- a ridge model with motion-energy features, as described in [2]_.
- a banded-ridge model with both feature spaces, as described in [3]_ (see the
  sketch after this list).

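The snippet below gives a minimal, self-contained sketch of fitting such
models. The array names, shapes, and random placeholder data are illustrative
only: the actual tutorials build the feature and response matrices from the
data set and fit them with himalaya's ridge solvers (including its
multiple-kernel ridge estimators for the banded-ridge model).

.. code-block:: python

    import numpy as np
    from sklearn.linear_model import RidgeCV

    # Placeholder data, standing in for the real stimulus features and fMRI
    # responses loaded in the tutorials (which are much larger).
    n_samples, n_voxels = 600, 50
    X_wordnet = np.random.randn(n_samples, 100)  # wordnet semantic features
    X_motion = np.random.randn(n_samples, 200)   # motion-energy features
    Y = np.random.randn(n_samples, n_voxels)     # BOLD responses (time x voxels)

    # Ridge model on a single feature space, with the regularization strength
    # selected by (leave-one-out) cross-validation over a grid of alphas.
    ridge = RidgeCV(alphas=np.logspace(-2, 5, 8))
    ridge.fit(X_wordnet, Y)

    # A joint model over both feature spaces, obtained here by simply
    # concatenating them. Banded ridge additionally fits one regularization
    # per feature space, which is what himalaya implements in the tutorials.
    ridge_joint = RidgeCV(alphas=np.logspace(-2, 5, 8))
    ridge_joint.fit(np.hstack([X_wordnet, X_motion]), Y)
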
**Scikit-learn API:**
These tutorials use ``scikit-learn`` to define the preprocessing steps, the
modeling pipeline, and the cross-validation scheme. If you are not familiar
with the scikit-learn API, we recommend the `getting started guide
<https://scikit-learn.org/stable/getting_started.html>`_. We also use a lot of
scikit-learn terminology, which is explained in great detail in the
`glossary of common terms and API elements
<https://scikit-learn.org/stable/glossary.html#glossary>`_.

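For instance, a pipeline chaining a preprocessing step with a ridge estimator,
evaluated with an explicit cross-validation scheme, could look like the toy
example below (the data is a random placeholder, not part of the tutorials):

.. code-block:: python

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Random placeholder features and responses.
    X = np.random.randn(300, 20)
    y = np.random.randn(300)

    # The pipeline is refit as a whole on each training fold, so the
    # preprocessing never sees the test samples.
    pipeline = make_pipeline(StandardScaler(), Ridge(alpha=1.0))

    # fMRI time series are usually split without shuffling, to limit leakage
    # between temporally correlated samples.
    cv = KFold(n_splits=5, shuffle=False)
    scores = cross_val_score(pipeline, X, y, cv=cv, scoring="r2")
    print(scores.mean())
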
**Running time:**
Most of these examples run in under 1 minute with a GPU backend in
`himalaya <https://github.com/gallantlab/himalaya>`_ (the banded ridge example
takes around 7 minutes). A CPU backend is usually about 10 times slower.

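In himalaya, the backend is selected once at the start of a script. A minimal
sketch, assuming himalaya's ``set_backend`` API, is given below; with
``on_error="warn"``, himalaya keeps its current backend and only warns if the
requested backend is unavailable.

.. code-block:: python

    from himalaya.backend import set_backend

    # Use the pytorch GPU backend if available, otherwise warn and keep the
    # current (CPU) backend.
    backend = set_backend("torch_cuda", on_error="warn")
    print(backend)
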
**Requirements:**
This tutorial requires the following Python packages (a quick import check is
sketched after this list):

- voxelwise_tutorials (this repository) and its dependencies
- cupy or pytorch (optional, to use a GPU backend in himalaya)

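The snippet below checks which of these packages are importable in the current
environment; ``torch`` is the import name of the pytorch package.

.. code-block:: python

    from importlib import util

    # Required and optional packages; only one of cupy / torch is needed for
    # the GPU backend.
    for package in ["voxelwise_tutorials", "himalaya", "cupy", "torch"]:
        found = util.find_spec(package) is not None
        print(f"{package}: {'installed' if found else 'missing'}")
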
**References:**

.. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
   A continuous semantic space describes the representation of thousands of
   object and action categories across the human brain. Neuron, 76(6),
   1210-1224.

.. [2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
   Gallant, J. L. (2011). Reconstructing visual experiences from brain
   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

.. [3] Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
   Voxelwise encoding models with non-spherical multivariate normal priors.
   NeuroImage, 197, 482-492.

.. [4] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020).
   Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.
   http://dx.doi.org/10.6080/TBD