Commit 226257e: ENH add ridge regression tutorial

1 parent 0a8b659
21 files changed: 1739 additions & 179 deletions

doc/Makefile (5 additions, 4 deletions)

@@ -39,10 +39,11 @@ merge-notebooks:
 	python merge_notebooks.py \
 		$(NBDIR)/00_setup_colab.ipynb \
 		$(NBDIR)/01_plot_explainable_variance.ipynb \
-		$(NBDIR)/02_plot_wordnet_model.ipynb \
-		$(NBDIR)/03_plot_hemodynamic_response.ipynb \
-		$(NBDIR)/04_plot_motion_energy_model.ipynb \
-		$(NBDIR)/05_plot_banded_ridge_model.ipynb \
+		$(NBDIR)/02_plot_ridge_regression.ipynb \
+		$(NBDIR)/03_plot_wordnet_model.ipynb \
+		$(NBDIR)/04_plot_hemodynamic_response.ipynb \
+		$(NBDIR)/05_plot_motion_energy_model.ipynb \
+		$(NBDIR)/06_plot_banded_ridge_model.ipynb \
 		> $(NBDIR)/merged_for_colab.ipynb
 	echo "Saved in $(NBDIR)/merged_for_colab.ipynb"
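The target above inserts the new `02_plot_ridge_regression.ipynb` notebook into the list passed to `merge_notebooks.py`, whose source is not shown in this diff. As a rough illustration only (not the repository's actual implementation), merging notebooks can be done with the standard library alone, since `.ipynb` files are plain JSON with a top-level `cells` list:

```python
import json
from pathlib import Path


def merge_notebooks(notebook_paths, out_path):
    """Concatenate the cells of several .ipynb files into one notebook.

    The first notebook provides the metadata and nbformat version;
    later notebooks only contribute their cells. Hypothetical sketch,
    not the repo's merge_notebooks.py.
    """
    merged = None
    for path in notebook_paths:
        nb = json.loads(Path(path).read_text(encoding="utf-8"))
        if merged is None:
            merged = nb  # keep metadata/nbformat of the first notebook
        else:
            merged["cells"].extend(nb["cells"])
    Path(out_path).write_text(json.dumps(merged, indent=1), encoding="utf-8")
```

A real implementation would also reconcile kernel metadata and cell ids across notebooks; the sketch only shows the core concatenation step.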

doc/index.rst (18 additions, 22 deletions)

@@ -15,42 +15,38 @@ To explore these tutorials, one can:
 
 - read the rendered examples in the tutorials
   `gallery of examples <_auto_examples/index.html>`_ (recommended)
-- run the Python scripts located in the GitHub repository (`tutorials <https://github.com/gallantlab/voxelwise_tutorials/tree/main/tutorials>`_ directory)
-- run the Jupyter notebooks located in the GitHub repository
-  (`tutorials/notebooks
+- run the Python scripts (`tutorials <https://github.com/gallantlab/voxelwise_tutorials/tree/main/tutorials>`_ directory)
+- run the Jupyter notebooks (`tutorials/notebooks
   <https://github.com/gallantlab/voxelwise_tutorials/tree/main/tutorials/notebooks>`_
   directory)
 - run the merged notebook in
-  `Colab <https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/movies/merged_for_colab.ipynb>`_.
+  `Google Colab <https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/movies/merged_for_colab.ipynb>`_
 
-The tutorials are best explored in order, starting with the "Movies" tutorial.
+The tutorials are best explored in order, starting with the `Movies tutorial
+<_auto_examples/index.html>`_.
 
 The project is available on GitHub at `gallantlab/voxelwise_tutorials
-<https://github.com/gallantlab/voxelwise_tutorials>`_. On top of the tutorials,
-the GitHub repository contains a Python package called ``voxelwise_tutorials``,
-which contains useful functions to download the data sets, load the files,
-process the data, and visualize the results. Install instructions are available
-`here <voxelwise_package.html>`_. Then, run either the Python scripts or the
-Jupyter notebooks located in the "tutorials" directory.
-
-Tutorials
----------
+<https://github.com/gallantlab/voxelwise_tutorials>`_. On top of the tutorial
+scripts, the GitHub repository contains a Python package called
+``voxelwise_tutorials``, which contains useful functions to download the data
+sets, load the files, process the data, and visualize the results. Install
+instructions are available `here <voxelwise_package.html>`_.
+
+Navigation
+----------
 .. toctree::
    :includehidden:
-   :maxdepth: 2
+   :maxdepth: 1
 
 
    _auto_examples/index
 
-Documentation
--------------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
-   voxelwise_modeling
+   voxelwise_package
 
 .. toctree::
-   :maxdepth: 2
-
-   voxelwise_package
+   :maxdepth: 1
 
+   voxelwise_modeling

doc/voxelwise_modeling.rst (86 additions, 41 deletions)

@@ -1,13 +1,15 @@
-The voxelwise modeling framework
-================================
+References
+==========
 
-VM Framework
-------------
+Voxelwise modeling framework
+----------------------------
 
 Voxelwise modeling (VM) is a framework to perform functional magnetic resonance
-imaging (fMRI) data analysis.
-Over the years, VM has led to many high profile publications
-[1]_ [2]_ [3]_ [4]_ [5]_ [6]_ [7]_ [8]_ [9]_ [10]_ [11]_.
+imaging (fMRI) data analysis. Over the years, VM has led to many high-profile
+publications :ref:`[1]<kay2008>` :ref:`[2]<nas2009>` :ref:`[3]<nis2011>`
+:ref:`[4]<hut2012>` :ref:`[5]<cuk2013>` :ref:`[6]<cuk2013b>`
+:ref:`[7]<sta2013>` :ref:`[8]<hut2016>` :ref:`[9]<deh2017>`
+:ref:`[10]<les2019>` :ref:`[11]<den2019>` :ref:`[12]<nun2019>`.
 
 [...]
 
@@ -18,86 +20,129 @@ VM provides multiple critical improvements over other approaches to fMRI data
 analysis:
 
 #.
-   Most methods for analyzing fMRI data rely on simple contrasts
-   between a small number of conditions. In contrast, VM can efficiently analyze
-   many different stimulus and task features simultaneously. This framework
-   enables the analysis of complex naturalistic stimuli and tasks which contain
-   a large number of features; for example, VM has been used with naturalistic images
-   [1]_ [2]_, movies [3]_, and stories [8]_.
+   Most methods for analyzing fMRI data rely on simple contrasts between a
+   small number of conditions. In contrast, VM can efficiently analyze many
+   different stimulus and task features simultaneously. This framework enables
+   the analysis of complex naturalistic stimuli and tasks which contain a
+   large number of features; for example, VM has been used with naturalistic
+   images :ref:`[1]<kay2008>` :ref:`[2]<nas2009>`, movies :ref:`[3]<nis2011>`,
+   and stories :ref:`[8]<hut2016>`.
 
 #.
    Unlike the traditional null hypothesis testing framework, VM is not prone
-   to overfitting and type I error and generalizes to new subjects and stimuli .
-   VM is a predictive modeling framework that
-   evaluates model performance on a separate test data set not used during fitting.
+   to overfitting and type I error, and generalizes to new subjects and
+   stimuli. VM is a predictive modeling framework that evaluates model
+   performance on a separate test data set not used during fitting.
 
 #.
-   VM performs an analysis in each subjects native brain space instead of lossily
-   transforming subjects into a common group space. This allows VM to produce
-   results with maximal spatial resolution. Each subject provides their own fit
-   and test data, so every subject provides a complete replication of all
-   hypothesis tests.
+   VM performs an analysis in each subject's native brain space instead of
+   lossily transforming subjects into a common group space. This allows VM to
+   produce results with maximal spatial resolution. Each subject provides
+   their own fit and test data, so every subject provides a complete
+   replication of all hypothesis tests.
 
 #.
-   VM produces high-dimensional functional maps rather than simple contrast
-   maps or correlation matrices. These maps reflect the
-   selectivity of each voxel to thousands of stimulus and task features spread
-   across dozens of feature spaces. These functional maps are much more
-   detailed than those produced using statistical parametric mapping (SPM),
-   multivariate pattern analysis (MVPA), or representational similarity
-   analysis (RSA).
+   VM produces high-dimensional functional maps rather than simple contrast
+   maps or correlation matrices. These maps reflect the selectivity of each
+   voxel to thousands of stimulus and task features spread across dozens of
+   feature spaces. These functional maps are much more detailed than those
+   produced using statistical parametric mapping (SPM), multivariate pattern
+   analysis (MVPA), or representational similarity analysis (RSA).
 
 #.
    VM recovers stable and interpretable functional parcellations, which
-   respect individual variability in anatomy [8]_.
+   respect individual variability in anatomy :ref:`[8]<hut2016>`.
 
 
 References
 ----------
 
-.. [1] Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008).
+.. _kay2008:
+
+[1] Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008).
    Identifying natural images from human brain activity.
    Nature, 452(7185), 352-355.
 
-.. [2] Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009).
+.. _nas2009:
+
+[2] Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009).
   Bayesian reconstruction of natural images from human brain activity.
   Neuron, 63(6), 902-915.
 
-.. [3] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011).
+.. _nis2011:
+
+[3] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011).
   Reconstructing visual experiences from brain activity evoked by natural movies.
   Current Biology, 21(19), 1641-1646.
 
-.. [4] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
+.. _hut2012:
+
+[4] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
   A continuous semantic space describes the representation of thousands of
   object and action categories across the human brain.
   Neuron, 76(6), 1210-1224.
 
-.. [5] Çukur, T., Nishimoto, S., Huth, A. G., & Gallant, J. L. (2013).
+.. _cuk2013:
+
+[5] Çukur, T., Nishimoto, S., Huth, A. G., & Gallant, J. L. (2013).
   Attention during natural vision warps semantic representation across the human brain.
   Nature Neuroscience, 16(6), 763-770.
 
-.. [6] Çukur, T., Huth, A. G., Nishimoto, S., & Gallant, J. L. (2013).
+.. _cuk2013b:
+
+[6] Çukur, T., Huth, A. G., Nishimoto, S., & Gallant, J. L. (2013).
   Functional subdomains within human FFA.
   Journal of Neuroscience, 33(42), 16748-16766.
 
-.. [7] Stansbury, D. E., Naselaris, T., & Gallant, J. L. (2013).
+.. _sta2013:
+
+[7] Stansbury, D. E., Naselaris, T., & Gallant, J. L. (2013).
   Natural scene statistics account for the representation of scene categories
   in human visual cortex.
   Neuron, 79(5), 1025-1034.
 
-.. [8] Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016).
+.. _hut2016:
+
+[8] Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016).
   Natural speech reveals the semantic maps that tile human cerebral cortex.
   Nature, 532(7600), 453-458.
 
-.. [9] de Heer, W. A., Huth, A. G., Griffiths, T. L., Gallant, J. L., & Theunissen, F. E. (2017).
+.. _deh2017:
+
+[9] de Heer, W. A., Huth, A. G., Griffiths, T. L., Gallant, J. L., & Theunissen, F. E. (2017).
   The hierarchical cortical organization of human speech processing.
   Journal of Neuroscience, 37(27), 6539-6557.
 
-.. [10] Lescroart, M. D., & Gallant, J. L. (2019).
+.. _les2019:
+
+[10] Lescroart, M. D., & Gallant, J. L. (2019).
   Human scene-selective areas represent 3D configurations of surfaces.
   Neuron, 101(1), 178-192.
 
-.. [11] Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
+.. _den2019:
+
+[11] Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
   The representation of semantic information across human cerebral cortex
   during listening versus reading is invariant to stimulus modality.
-   Journal of Neuroscience, 39(39), 7722-7736.
+   Journal of Neuroscience, 39(39), 7722-7736.
+
+.. _nun2019:
+
+[12] Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
+   Voxelwise encoding models with non-spherical multivariate normal priors.
+   NeuroImage, 197, 482-492.
+
+Datasets
+--------
+
+.. _nis2011data:
+
+[3b] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu,
+   B., & Gallant, J. L. (2014): Gallant Lab Natural Movie 4T fMRI Data.
+   CRCNS.org. http://dx.doi.org/10.6080/K00Z715X
+
+.. _hut2012data:
+
+[4b] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020):
+   Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.
+   http://dx.doi.org/10.6080/TBD
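The commit's central addition is a ridge regression tutorial, and the doc/voxelwise_modeling.rst text above stresses that VM is a predictive framework: model performance is always evaluated on a held-out test set. As an illustrative sketch only (simulated data and NumPy's closed-form estimator, not the tutorial's actual code), that fit/test workflow looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression problem: 200 samples, 50 features, linear response
# plus Gaussian noise (stand-ins for stimulus features and a voxel response).
X = rng.standard_normal((200, 50))
true_weights = rng.standard_normal(50)
y = X @ true_weights + rng.standard_normal(200)

# Split into a fit set and a held-out test set, as VM prescribes.
X_fit, X_test = X[:150], X[150:]
y_fit, y_test = y[:150], y[150:]

# Closed-form ridge estimator: w = (X'X + alpha * I)^-1 X'y
alpha = 10.0
weights = np.linalg.solve(
    X_fit.T @ X_fit + alpha * np.eye(X_fit.shape[1]),
    X_fit.T @ y_fit,
)

# Performance is reported on the test set only, never on the fit set.
y_pred = X_test @ weights
r2 = 1.0 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"test R^2 = {r2:.3f}")
```

In practice the regularization strength `alpha` would itself be chosen by cross-validation within the fit set; the fixed value here is just for illustration.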
