Commit 4036b8e

DOC add documentation on the VM framework
1 parent 7798495 commit 4036b8e

6 files changed: 132 additions & 5 deletions

README.rst

Lines changed: 2 additions & 2 deletions
@@ -15,8 +15,8 @@ modeling, based for instance on visual imaging experiments.
 The best way to explore these tutorials is to go to the
 `website <https://gallantlab.github.io/tutorials/>`_.
 
-Voxelwise package
-=================
+The ``voxelwise`` package
+=========================
 
 On top of tutorials, this repository also contains a small Python package
 called ``voxelwise``, which contains useful functions to download the data sets,

doc/index.rst

Lines changed: 25 additions & 3 deletions
@@ -4,6 +4,23 @@ Voxelwise modeling tutorials
 
 Welcome to the voxelwise modeling tutorial from the Gallant lab.
 
+Getting started
+---------------
+
+This website contains tutorials describing how to use the
+`voxelwise modeling framework <voxelwise_modeling.html>`_.
+
+The tutorials consist of Python scripts, which are rendered in a
+`gallery of examples <auto_examples/index.html>`_.
+The tutorials are best explored in order, starting with the
+"Movies 3T tutorial".
+
+To run the tutorials yourself, we recommend downloading the project from
+GitHub at `gallantlab/tutorials <https://github.com/gallantlab/tutorials>`_.
+On top of the tutorials, the GitHub repository contains a Python package
+called ``voxelwise``, which contains useful functions to download the data
+sets, load the files, process the data, and visualize the results.
+Install instructions are available `here <voxelwise_package.html>`_.
 
 Tutorials
 ---------
@@ -14,10 +31,15 @@ Tutorials
 
    auto_examples/index
 
-Voxelwise package
------------------
+Documentation
+-------------
+
+.. toctree::
+   :maxdepth: 2
+
+   voxelwise_modeling
 
 .. toctree::
    :maxdepth: 2
 
-   voxelwise
+   voxelwise_package

doc/voxelwise_modeling.rst

Lines changed: 103 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,103 @@
1+
The voxelwise modeling framework
2+
================================
3+
4+
VM Framework
5+
------------
6+
7+
Voxelwise modeling (VM) is a framework to perform functional magnetic resonance
8+
imaging (fMRI) data analysis.
9+
Over the years, VM has led to many high profile publications
10+
[1]_ [2]_ [3]_ [4]_ [5]_ [6]_ [7]_ [8]_ [9]_ [10]_ [11]_.
11+
12+
[...]
13+
14+
Critical improvements
15+
---------------------
16+
17+
VM provides multiple critical improvements over other approaches to fMRI data
18+
analysis:
19+
20+
#.
21+
Most methods for analyzing fMRI data rely on simple contrasts
22+
between a small number of conditions. In contrast, VM can efficiently analyze
23+
many different stimulus and task features simultaneously. This framework
24+
enables the analysis of complex naturalistic stimuli and tasks which contain
25+
a large number of features; for example, VM has been used with naturalistic images
26+
[1]_ [2]_, movies [3]_, and stories [8]_.
27+
28+
#.
29+
Unlike the traditional null hypothesis testing framework, VM is not prone
30+
to overfitting and type I error and generalizes to new subjects and stimuli .
31+
VM is a predictive modeling framework that
32+
evaluates model performance on a separate test data set not used during fitting.
33+
34+
#.
35+
VM performs an analysis in each subject’s native brain space instead of lossily
36+
transforming subjects into a common group space. This allows VM to produce
37+
results with maximal spatial resolution. Each subject provides their own fit
38+
and test data, so every subject provides a complete replication of all
39+
hypothesis tests.
40+
41+
#.
42+
VM produces high-dimensional functional maps rather than simple contrast
43+
maps or correlation matrices. These maps reflect the
44+
selectivity of each voxel to thousands of stimulus and task features spread
45+
across dozens of feature spaces. These functional maps are much more
46+
detailed than those produced using statistical parametric mapping (SPM),
47+
multivariate pattern analysis (MVPA), or representational similarity
48+
analysis (RSA).
49+
50+
#.
51+
VM recovers stable and interpretable functional parcellations, which
52+
respect individual variability in anatomy [8]_.
53+
54+
55+
References
56+
----------
57+
58+
.. [1] Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008).
59+
Identifying natural images from human brain activity.
60+
Nature, 452(7185), 352-355.
61+
62+
.. [2] Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009).
63+
Bayesian reconstruction of natural images from human brain activity.
64+
Neuron, 63(6), 902-915.
65+
66+
.. [3] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011).
67+
Reconstructing visual experiences from brain activity evoked by natural movies.
68+
Current Biology, 21(19), 1641-1646.
69+
70+
.. [4] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
71+
A continuous semantic space describes the representation of thousands of
72+
object and action categories across the human brain.
73+
Neuron, 76(6), 1210-1224.
74+
75+
.. [5] Çukur, T., Nishimoto, S., Huth, A. G., & Gallant, J. L. (2013).
76+
Attention during natural vision warps semantic representation across the human brain.
77+
Nature neuroscience, 16(6), 763-770.
78+
79+
.. [6] Çukur, T., Huth, A. G., Nishimoto, S., & Gallant, J. L. (2013).
80+
Functional subdomains within human FFA.
81+
Journal of Neuroscience, 33(42), 16748-16766.
82+
83+
.. [7] Stansbury, D. E., Naselaris, T., & Gallant, J. L. (2013).
84+
Natural scene statistics account for the representation of scene categories
85+
in human visual cortex.
86+
Neuron, 79(5), 1025-1034
87+
88+
.. [8] Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016).
89+
Natural speech reveals the semantic maps that tile human cerebral cortex.
90+
Nature, 532(7600), 453-458.
91+
92+
.. [9] de Heer, W. A., Huth, A. G., Griffiths, T. L., Gallant, J. L., & Theunissen, F. E. (2017).
93+
The hierarchical cortical organization of human speech processing.
94+
Journal of Neuroscience, 37(27), 6539-6557.
95+
96+
.. [10] Lescroart, M. D., & Gallant, J. L. (2019).
97+
Human scene-selective areas represent 3D configurations of surfaces.
98+
Neuron, 101(1), 178-192.
99+
100+
.. [11] Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
101+
The representation of semantic information across human cerebral cortex
102+
during listening versus reading is invariant to stimulus modality.
103+
Journal of Neuroscience, 39(39), 7722-7736.
File renamed without changes.
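The predictive-modeling loop that the new documentation page describes (fit one regularized linear model per voxel on a training set, then score predictions on a held-out test set) can be sketched as below. This is a minimal illustration on simulated data; the shapes, the use of scikit-learn's `Ridge`, and the correlation score are assumptions for the sketch, not the `voxelwise` package's actual implementation.

```python
# Minimal sketch of the voxelwise modeling (VM) idea: regularized linear
# regression from stimulus features to voxel responses, evaluated on a
# separate test set not used during fitting. All data here are simulated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
n_train, n_test, n_features, n_voxels = 200, 50, 30, 10

# Simulated stimulus features (e.g. motion energy) and voxel responses.
X_train = rng.randn(n_train, n_features)
X_test = rng.randn(n_test, n_features)
true_weights = rng.randn(n_features, n_voxels)
Y_train = X_train @ true_weights + rng.randn(n_train, n_voxels)
Y_test = X_test @ true_weights + rng.randn(n_test, n_voxels)

# Ridge regression fits all voxels jointly: one weight vector per voxel.
model = Ridge(alpha=1.0).fit(X_train, Y_train)

# Evaluate on held-out data: one prediction score per voxel.
Y_pred = model.predict(X_test)
scores = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
          for v in range(n_voxels)]
print(np.round(scores, 2))
```

Because performance is measured on unseen data, overfitting lowers the score instead of inflating it, which is the point made in improvement #2 above.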

tutorials/movies_3T/03_plot_motion_energy_model.py

Lines changed: 1 addition & 0 deletions
@@ -254,6 +254,7 @@
            ylabel='motion energy model')
 plt.show()
 
+
 ###############################################################################
 # Interestingly, the well predicted voxels are different in the two models.
 # To further describe these differences, we can plot the performances on a

voxelwise/viz.py

Lines changed: 1 addition & 0 deletions
@@ -298,6 +298,7 @@ def plot_2d_flatmap_from_mapper(voxels_1, voxels_2, mapper_file, ax=None,
     cbar.imshow(cmap_image, aspect='equal',
                 extent=(vmin, vmax, vmin2, vmax2))
     cbar.set(xlabel=label_1, ylabel=label_2)
+    cbar.set(xticks=[vmin, vmax], yticks=[vmin2, vmax2])
 
     # plot additional layers if present
     with h5py.File(mapper_file, mode='r') as hf:
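The line added in this hunk pins the 2D colorbar's ticks to the bounds of its value ranges. A standalone sketch of the same pattern is below; the axes values and labels are hypothetical stand-ins, and `plot_2d_flatmap_from_mapper` itself is not reproduced.

```python
# Standalone illustration of the pattern used in the diff: Axes.set accepts
# any settable axes property, so labels and ticks can be updated in one call.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

vmin, vmax = 0.0, 1.0      # value range of the first map (hypothetical)
vmin2, vmax2 = -0.5, 0.5   # value range of the second map (hypothetical)
cmap_image = np.random.rand(32, 32, 3)  # stand-in for the 2D colormap image

fig, cbar = plt.subplots()
cbar.imshow(cmap_image, aspect='equal',
            extent=(vmin, vmax, vmin2, vmax2))
cbar.set(xlabel='model 1', ylabel='model 2')
# The committed change: show only the extent bounds as ticks.
cbar.set(xticks=[vmin, vmax], yticks=[vmin2, vmax2])
```

Restricting the ticks to the two extreme values keeps the small 2D colorbar legible instead of letting matplotlib place its default tick locator on it.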
