References
==========

Voxelwise modeling framework
----------------------------

Voxelwise modeling (VM) is a framework to perform functional magnetic resonance
imaging (fMRI) data analysis. Over the years, VM has led to many high-profile
publications :ref:`[1] <kay2008>`, :ref:`[2] <nas2009>`, :ref:`[3] <nis2011>`,
:ref:`[4] <hut2012>`, :ref:`[5] <cuk2013>`, :ref:`[6] <cuk2013b>`,
:ref:`[7] <sta2013>`, :ref:`[8] <hut2016>`, :ref:`[9] <deh2017>`,
:ref:`[10] <les2019>`, :ref:`[11] <den2019>`, :ref:`[12] <nun2019>`.

[...]

VM provides multiple critical improvements over other approaches to fMRI data
analysis:

#.
   Most methods for analyzing fMRI data rely on simple contrasts between a
   small number of conditions. In contrast, VM can efficiently analyze many
   different stimulus and task features simultaneously. This framework enables
   the analysis of complex naturalistic stimuli and tasks, which contain a
   large number of features; for example, VM has been used with naturalistic
   images :ref:`[1] <kay2008>` :ref:`[2] <nas2009>`, movies :ref:`[3] <nis2011>`,
   and stories :ref:`[8] <hut2016>`.

#.
   Unlike the traditional null hypothesis testing framework, VM is not prone
   to overfitting and type I errors, and it generalizes to new subjects and
   stimuli. This is because VM is a predictive modeling framework that
   evaluates model performance on a separate test data set not used during
   fitting.

#.
   VM performs an analysis in each subject's native brain space instead of
   lossily transforming subjects into a common group space. This allows VM to
   produce results with maximal spatial resolution. Each subject provides
   their own fit and test data, so every subject provides a complete
   replication of all hypothesis tests.

#.
   VM produces high-dimensional functional maps rather than simple contrast
   maps or correlation matrices. These maps reflect the selectivity of each
   voxel to thousands of stimulus and task features spread across dozens of
   feature spaces. These functional maps are much more detailed than those
   produced using statistical parametric mapping (SPM), multivariate pattern
   analysis (MVPA), or representational similarity analysis (RSA).

#.
   VM recovers stable and interpretable functional parcellations, which
   respect individual variability in anatomy :ref:`[8] <hut2016>`.

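The fit-then-test logic behind points 1 and 2 can be sketched in a few lines. The example below is an illustrative toy, not the Gallant Lab implementation: it assumes ridge regression as the regularized regression method, stacks two hypothetical feature spaces into one design matrix, fits one weight vector per voxel on the training set only, and scores prediction accuracy on a held-out test set as the correlation between predicted and measured responses. All array sizes, the simulated data, and the ``alpha`` value are arbitrary.

```python
import numpy as np


def fit_ridge(X, Y, alpha=10.0):
    """Closed-form ridge regression, one weight vector per voxel.

    X: (n_samples, n_features) stimulus/task features
    Y: (n_samples, n_voxels) fMRI responses
    Returns W with shape (n_features, n_voxels).
    """
    gram = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(gram, X.T @ Y)


def voxelwise_correlation(X, Y, W):
    """Correlation between predicted and measured responses, per voxel."""
    Y_pred = X @ W
    zp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
    zt = (Y - Y.mean(0)) / Y.std(0)
    return (zp * zt).mean(0)


rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 200, 50, 100

# Two hypothetical feature spaces (say, low-level motion features and
# semantic features), concatenated into a single design matrix.
motion_train = rng.standard_normal((n_train, 10))
motion_test = rng.standard_normal((n_test, 10))
semantic_train = rng.standard_normal((n_train, 15))
semantic_test = rng.standard_normal((n_test, 15))
X_train = np.hstack([motion_train, semantic_train])
X_test = np.hstack([motion_test, semantic_test])

# Simulated responses: a linear function of the features plus noise.
W_true = rng.standard_normal((X_train.shape[1], n_voxels))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ W_true + rng.standard_normal((n_test, n_voxels))

# Fit on the training set only; evaluate on the held-out test set.
W = fit_ridge(X_train, Y_train)
scores = voxelwise_correlation(X_test, Y_test, W)  # one score per voxel
```

In practice the regularization strength is not fixed by hand as above but chosen by cross-validation within the training set, and models with multiple feature spaces typically use a separate regularization per feature space (see :ref:`[12] <nun2019>` for the prior-based formulation).
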
References
----------

.. _kay2008:

[1] Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008).
Identifying natural images from human brain activity.
Nature, 452(7185), 352-355.

.. _nas2009:

[2] Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009).
Bayesian reconstruction of natural images from human brain activity.
Neuron, 63(6), 902-915.

.. _nis2011:

[3] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011).
Reconstructing visual experiences from brain activity evoked by natural movies.
Current Biology, 21(19), 1641-1646.

.. _hut2012:

[4] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
A continuous semantic space describes the representation of thousands of
object and action categories across the human brain.
Neuron, 76(6), 1210-1224.

.. _cuk2013:

[5] Çukur, T., Nishimoto, S., Huth, A. G., & Gallant, J. L. (2013).
Attention during natural vision warps semantic representation across the human brain.
Nature Neuroscience, 16(6), 763-770.

.. _cuk2013b:

[6] Çukur, T., Huth, A. G., Nishimoto, S., & Gallant, J. L. (2013).
Functional subdomains within human FFA.
Journal of Neuroscience, 33(42), 16748-16766.

.. _sta2013:

[7] Stansbury, D. E., Naselaris, T., & Gallant, J. L. (2013).
Natural scene statistics account for the representation of scene categories
in human visual cortex.
Neuron, 79(5), 1025-1034.

.. _hut2016:

[8] Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., & Gallant, J. L. (2016).
Natural speech reveals the semantic maps that tile human cerebral cortex.
Nature, 532(7600), 453-458.

.. _deh2017:

[9] de Heer, W. A., Huth, A. G., Griffiths, T. L., Gallant, J. L., & Theunissen, F. E. (2017).
The hierarchical cortical organization of human speech processing.
Journal of Neuroscience, 37(27), 6539-6557.

.. _les2019:

[10] Lescroart, M. D., & Gallant, J. L. (2019).
Human scene-selective areas represent 3D configurations of surfaces.
Neuron, 101(1), 178-192.

.. _den2019:

[11] Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
The representation of semantic information across human cerebral cortex
during listening versus reading is invariant to stimulus modality.
Journal of Neuroscience, 39(39), 7722-7736.

.. _nun2019:

[12] Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
Voxelwise encoding models with non-spherical multivariate normal priors.
NeuroImage, 197, 482-492.

Datasets
--------

.. _nis2011data:

[3b] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2014):
Gallant Lab Natural Movie 4T fMRI Data.
CRCNS.org. http://dx.doi.org/10.6080/K00Z715X

.. _hut2012data:

[4b] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020):
Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.
http://dx.doi.org/10.6080/TBD