
Commit 1ca3519

DOC move references to the end of the examples
1 parent 8ccbf57 commit 1ca3519

12 files changed: 92 additions & 47 deletions

tutorials/movies_3T/00_download_vim4.py
Lines changed: 13 additions & 8 deletions

@@ -15,14 +15,6 @@
 This tutorial is based on publicly available data `published on CRCNS
 <https://crcns.org/data-sets/vc/TBD>`_. If you publish any work using this data
 set, please cite the original publication [1]_, and the data set [2]_.
-
-.. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A
-   continuous semantic space describes the representation of thousands of
-   object and action categories across the human brain. Neuron, 76(6),
-   1210-1224.
-
-.. [2] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020): Gallant
-   Lab Natural Movie 3T fMRI Data. CRCNS.org. http://dx.doi.org/10.6080/TBD
 """
 # sphinx_gallery_thumbnail_path = "static/crcns.png"

@@ -89,3 +81,16 @@
 for datafile in DATAFILES:
     local_filename = download_crcns(datafile, username, password,
                                     destination=directory)
+
+###############################################################################
+# References
+# ----------
+#
+# .. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A
+#    continuous semantic space describes the representation of thousands of
+#    object and action categories across the human brain. Neuron, 76(6),
+#    1210-1224.
+#
+# .. [2] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020):
+#    Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.
+#    http://dx.doi.org/10.6080/TBD

tutorials/movies_3T/04_plot_motion_energy_model.py
Lines changed: 1 addition & 0 deletions

@@ -14,6 +14,7 @@
 *Motion-energy features:* Motion-energy features result from filtering a video
 stimulus with spatio-temporal Gabor filters. A pyramid of filters is used to
 compute the motion-energy features at multiple spatial and temporal scales.
+Motion-energy features were introduced in [1]_.

 *Summary:* As in the previous example, we first concatenate the features with
 multiple delays, to account for the slow hemodynamic response. A linear
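The delays-plus-ridge pipeline that this file's docstring summarizes can be sketched in plain numpy. This is a toy illustration only, not the tutorial's actual (himalaya-based) code: the simulated features, the true delay of 2 samples, and the regularization strength `alpha` are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 300, 3
X = rng.standard_normal((n_samples, n_features))

# Simulate BOLD-like responses driven by the features 2 samples earlier,
# mimicking a (highly simplified) hemodynamic lag.
w_true = np.array([1.0, -2.0, 0.5])
y = np.zeros(n_samples)
y[2:] = X[:-2] @ w_true
y += 0.1 * rng.standard_normal(n_samples)

def make_delayed(X, delays):
    """Horizontally stack time-shifted copies of X, one per delay."""
    cols = []
    for d in delays:
        Xd = np.zeros_like(X)
        Xd[d:] = X[: X.shape[0] - d] if d > 0 else X
        cols.append(Xd)
    return np.hstack(cols)

X_delayed = make_delayed(X, delays=[1, 2, 3, 4])

# Closed-form ridge regression: w = (X'X + alpha * I)^-1 X'y.
alpha = 1.0
p = X_delayed.shape[1]
w = np.linalg.solve(X_delayed.T @ X_delayed + alpha * np.eye(p),
                    X_delayed.T @ y)

y_pred = X_delayed @ w
r2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the true lag (2) is among the candidate delays, the fit recovers most of the variance; in the real tutorial, `alpha` is instead selected per voxel by cross-validated grid search.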

tutorials/movies_3T/06_extract_motion_energy.py
Lines changed: 3 additions & 5 deletions

@@ -15,16 +15,14 @@
 Motion-energy features were introduced in [1]_.

 The motion-energy extraction is performed by the package `pymoten
-<https://github.com/gallantlab/pymoten>`_.
+<https://github.com/gallantlab/pymoten>`_. Check the pymoten `gallery of
+examples <https://gallantlab.github.io/pymoten/auto_examples/index.html>`_ for
+visualizing motion-energy filters, and for pymoten API usage examples.

 Running time
 ------------
 Extracting motion energy is a bit longer than the other examples. It typically
 takes a couple hours to run.
-
-.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
-   Gallant, J. L. (2011). Reconstructing visual experiences from brain
-   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
 """
 # sphinx_gallery_thumbnail_path = "static/moten.png"
 ###############################################################################
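For intuition about what pymoten computes, here is a toy, spatial-only version of the "energy" idea: filter frames with a quadrature pair (even/odd phase) of Gabor filters and sum the squared responses, which makes the output invariant to stimulus phase. This is a hedged sketch under arbitrary filter size and frequency choices; pymoten itself uses a full spatio-temporal Gabor pyramid at many scales, orientations, and speeds.

```python
import numpy as np

def spatial_gabor_pair(size, freq, orientation):
    """Quadrature pair (even/odd phase) of spatial Gabor filters."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * (size / 4.0) ** 2))
    even = envelope * np.cos(2 * np.pi * freq * u)
    odd = envelope * np.sin(2 * np.pi * freq * u)
    return even, odd

def energy(frames, even, odd):
    """Sum of squared quadrature responses: phase-invariant 'energy'."""
    resp_even = np.array([np.sum(f * even) for f in frames])
    resp_odd = np.array([np.sum(f * odd) for f in frames])
    return resp_even ** 2 + resp_odd ** 2

size, freq = 32, 0.1
even, odd = spatial_gabor_pair(size, freq, orientation=0.0)
half = size // 2
y, x = np.mgrid[-half:half + 1, -half:half + 1]
phases = np.linspace(0, np.pi, 8)

# A drifting grating matched to the filter orientation gives large,
# nearly phase-constant energy...
e_matched = energy([np.cos(2 * np.pi * freq * x + p) for p in phases],
                   even, odd)
# ...while an orthogonal grating is barely seen by this filter.
e_orth = energy([np.cos(2 * np.pi * freq * y + p) for p in phases],
                even, odd)
```

The phase invariance is the point: a plain linear filter output oscillates as the grating drifts, but the quadrature energy stays roughly constant, which is what makes these features useful for fMRI time scales.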

tutorials/movies_4T/00_download_vim2.py
Lines changed: 11 additions & 8 deletions

@@ -13,14 +13,6 @@
 `published on CRCNS <https://crcns.org/data-sets/vc/vim-2/about-vim-2>`_.
 If you publish any work using this data set, please cite the original
 publication [1]_, and the data set [2]_.
-
-.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
-   Gallant, J. L. (2011). Reconstructing visual experiences from brain
-   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
-
-.. [2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
-   Gallant, J. L. (2014): Gallant Lab Natural Movie 4T fMRI Data. CRCNS.org.
-   http://dx.doi.org/10.6080/K00Z715X
 """
 # sphinx_gallery_thumbnail_path = "static/crcns.png"
 ###############################################################################
@@ -60,3 +52,14 @@
 for datafile in DATAFILES:
     local_filename = download_crcns(datafile, username, password,
                                     destination=directory, unpack=True)
+###############################################################################
+# References
+# ----------
+#
+# .. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
+#    Gallant, J. L. (2011). Reconstructing visual experiences from brain
+#    activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
+#
+# .. [2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
+#    Gallant, J. L. (2014): Gallant Lab Natural Movie 4T fMRI Data. CRCNS.org.
+#    http://dx.doi.org/10.6080/K00Z715X

tutorials/movies_4T/01_extract_motion_energy.py
Lines changed: 15 additions & 10 deletions

@@ -5,23 +5,20 @@

 This script describes how to extract motion-energy features from the stimuli.

-*Motion-energy features:*
-Motion-energy features result from filtering a video stimulus with
-spatio-temporal Gabor filters. A pyramid of filters is used to compute the
-motion-energy features at multiple spatial and temporal scales. Motion-energy
-features were introduced in [1]_.
+*Motion-energy features:* Motion-energy features result from filtering a video
+stimulus with spatio-temporal Gabor filters. A pyramid of filters is used to
+compute the motion-energy features at multiple spatial and temporal scales.
+Motion-energy features were introduced in [1]_.

 The motion-energy extraction is performed by the package `pymoten
-<https://github.com/gallantlab/pymoten>`_.
+<https://github.com/gallantlab/pymoten>`_. Check the pymoten `gallery of
+examples <https://gallantlab.github.io/pymoten/auto_examples/index.html>`_ for
+visualizing motion-energy filters, and for pymoten API usage examples.

 Running time
 ------------
 Extracting motion energy is a bit longer than the other examples. It typically
 takes a couple hours to run.
-
-.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
-   Gallant, J. L. (2011). Reconstructing visual experiences from brain
-   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
 """
 # sphinx_gallery_thumbnail_path = "static/moten.png"
 ###############################################################################
@@ -165,3 +162,11 @@ def compute_motion_energy(luminance,
 save_hdf5_dataset(
     os.path.join(features_directory, "motion_energy.hdf"),
     dataset=dict(X_train=motion_energy_train, X_test=motion_energy_test))
+
+###############################################################################
+# References
+# ----------
+#
+# .. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu,
+#    B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain
+#    activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

tutorials/movies_4T/02_plot_ridge_model.py
Lines changed: 14 additions & 9 deletions

@@ -14,6 +14,7 @@
 *Motion-energy features:* Motion-energy features result from filtering a video
 stimulus with spatio-temporal Gabor filters. A pyramid of filters is used to
 compute the motion-energy features at multiple spatial and temporal scales.
+Motion-energy features were introduced in [1]_.

 *Summary:* We first concatenate the features with multiple delays, to account
 for the slow hemodynamic response. A linear regression model then weights each
@@ -24,10 +25,6 @@
 cross-validation. Finally, the model generalization performance is evaluated on
 a held-out test set, comparing the model predictions with the ground-truth fMRI
 responses.
-
-.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
-   Gallant, J. L. (2011). Reconstructing visual experiences from brain
-   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
 """
 # sphinx_gallery_thumbnail_number = 2
 ###############################################################################
@@ -241,15 +238,23 @@
 scores_nodelay = backend.to_numpy(scores_nodelay)

 ###############################################################################
-# Here we plot the comparison of model performances with a 2D histogram.
-# All ~70k voxels are represented in this histogram, where the diagonal
-# corresponds to identical performance for both models. A distibution deviating
-# from the diagonal means that one model has better predictive performances
-# than the other.
+# Here we plot the comparison of model performances with a 2D histogram. All
+# ~70k voxels are represented in this histogram, where the diagonal corresponds
+# to identical performance for both models. A distibution deviating from the
+# diagonal means that one model has better predictive performances than the
+# other.

 from voxelwise_tutorials.viz import plot_hist2d

 ax = plot_hist2d(scores_nodelay, scores)
 ax.set(title='Generalization R2 scores', xlabel='model without delays',
        ylabel='model with delays')
 plt.show()
+
+###############################################################################
+# References
+# ----------
+#
+# .. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu,
+#    B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain
+#    activity evoked by natural movies. Current Biology, 21(19), 1641-1646.
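The voxel-by-voxel comparison described in that hunk relies on the tutorial helper `plot_hist2d`; the underlying computation can be sketched with numpy alone. The score arrays below are synthetic stand-ins (the real ones come from the two fitted models), with the "with delays" model made slightly better by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic R2 scores for two hypothetical models over 70000 voxels;
# the delayed model is better by a small margin on average.
scores_nodelay = rng.normal(0.05, 0.1, 70000)
scores_delay = scores_nodelay + rng.normal(0.02, 0.05, 70000)

# 2D histogram over a common grid; bins on the diagonal hold voxels
# where both models perform identically.
bins = np.linspace(-0.5, 0.8, 100)
hist, xedges, yedges = np.histogram2d(scores_nodelay, scores_delay,
                                      bins=bins)

# Fraction of voxels lying above the diagonal, i.e. where the
# delayed model wins.
frac_better = np.mean(scores_delay > scores_nodelay)
```

A distribution massed above the diagonal (large `frac_better`) is exactly what the tutorial's figure shows for the model with delays.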

tutorials/notebooks/movies_3T/00_download_vim4.ipynb
Lines changed: 8 additions & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Download the data set from CRCNS\n\n\nIn this script, we download the data set from CRCNS. A (free) account is\nrequired.\n\n.. Warning:: The data has not been publicly released yet, so this notebook will\n not work !\n\nCite this data set\n------------------\n\nThis tutorial is based on publicly available data `published on CRCNS\n<https://crcns.org/data-sets/vc/TBD>`_. If you publish any work using this data\nset, please cite the original publication [1]_, and the data set [2]_.\n\n.. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A\n continuous semantic space describes the representation of thousands of\n object and action categories across the human brain. Neuron, 76(6),\n 1210-1224.\n\n.. [2] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020): Gallant\n Lab Natural Movie 3T fMRI Data. CRCNS.org. http://dx.doi.org/10.6080/TBD\n"
+"\n# Download the data set from CRCNS\n\n\nIn this script, we download the data set from CRCNS. A (free) account is\nrequired.\n\n.. Warning:: The data has not been publicly released yet, so this notebook will\n not work !\n\nCite this data set\n------------------\n\nThis tutorial is based on publicly available data `published on CRCNS\n<https://crcns.org/data-sets/vc/TBD>`_. If you publish any work using this data\nset, please cite the original publication [1]_, and the data set [2]_.\n"
 ]
 },
 {
@@ -75,6 +75,13 @@
 "source": [
 "username = input(\"CRCNS username: \")\npassword = getpass.getpass(\"CRCNS password: \")\n\nfor datafile in DATAFILES:\n local_filename = download_crcns(datafile, username, password,\n destination=directory)"
 ]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"References\n----------\n\n.. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A\n continuous semantic space describes the representation of thousands of\n object and action categories across the human brain. Neuron, 76(6),\n 1210-1224.\n\n.. [2] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020):\n Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.\n http://dx.doi.org/10.6080/TBD\n\n"
+]
 }
 ],
 "metadata": {

tutorials/notebooks/movies_3T/04_plot_motion_energy_model.ipynb
Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Fit a ridge model with motion energy features\n\n\nIn this example, we model the fMRI responses with motion-energy features\nextracted from the movie stimulus. The model is a regularized linear regression\nmodel.\n\nThis tutorial reproduces part of the analysis described in Nishimoto et al\n(2011) [1]_. See this publication for more details about the experiment, the\nmotion-energy features, along with more results and more discussions.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\n\n*Summary:* As in the previous example, we first concatenate the features with\nmultiple delays, to account for the slow hemodynamic response. A linear\nregression model then weights each delayed feature with a different weight, to\nbuild a predictive model of BOLD activity. Again, the linear regression is\nregularized to improve robustness to correlated features and to improve\ngeneralization. The optimal regularization hyperparameter is selected\nindependently on each voxel over a grid-search with cross-validation. Finally,\nthe model generalization performance is evaluated on a held-out test set,\ncomparing the model predictions with the ground-truth fMRI responses.\n"
+"\n# Fit a ridge model with motion energy features\n\n\nIn this example, we model the fMRI responses with motion-energy features\nextracted from the movie stimulus. The model is a regularized linear regression\nmodel.\n\nThis tutorial reproduces part of the analysis described in Nishimoto et al\n(2011) [1]_. See this publication for more details about the experiment, the\nmotion-energy features, along with more results and more discussions.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\n*Summary:* As in the previous example, we first concatenate the features with\nmultiple delays, to account for the slow hemodynamic response. A linear\nregression model then weights each delayed feature with a different weight, to\nbuild a predictive model of BOLD activity. Again, the linear regression is\nregularized to improve robustness to correlated features and to improve\ngeneralization. The optimal regularization hyperparameter is selected\nindependently on each voxel over a grid-search with cross-validation. Finally,\nthe model generalization performance is evaluated on a held-out test set,\ncomparing the model predictions with the ground-truth fMRI responses.\n"
 ]
 },
 {

tutorials/notebooks/movies_3T/06_extract_motion_energy.ipynb
Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Extract motion energy features from the stimuli\n\n\nThis script describes how to extract motion-energy features from the stimuli.\n\n.. Note:: This public data set already contains precomputed motion-energy.\n Therefore, you do not need to run this script to fit motion-energy models\n in other part of this tutorial.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\nThe motion-energy extraction is performed by the package `pymoten\n<https://github.com/gallantlab/pymoten>`_.\n\nRunning time\n------------\nExtracting motion energy is a bit longer than the other examples. It typically\ntakes a couple hours to run.\n\n.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &\n Gallant, J. L. (2011). Reconstructing visual experiences from brain\n activity evoked by natural movies. Current Biology, 21(19), 1641-1646.\n"
+"\n# Extract motion energy features from the stimuli\n\n\nThis script describes how to extract motion-energy features from the stimuli.\n\n.. Note:: This public data set already contains precomputed motion-energy.\n Therefore, you do not need to run this script to fit motion-energy models\n in other part of this tutorial.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\nThe motion-energy extraction is performed by the package `pymoten\n<https://github.com/gallantlab/pymoten>`_. Check the pymoten `gallery of\nexamples <https://gallantlab.github.io/pymoten/auto_examples/index.html>`_ for\nvisualizing motion-energy filters, and for pymoten API usage examples.\n\nRunning time\n------------\nExtracting motion energy is a bit longer than the other examples. It typically\ntakes a couple hours to run.\n"
 ]
 },
 {

tutorials/notebooks/movies_4T/00_download_vim2.ipynb
Lines changed: 8 additions & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Download the data set from CRCNS\n\n\nIn this script, we download the data set from CRCNS.\nA (free) account is required.\n\nCite this data set\n------------------\n\nThis tutorial is based on publicly available data\n`published on CRCNS <https://crcns.org/data-sets/vc/vim-2/about-vim-2>`_.\nIf you publish any work using this data set, please cite the original\npublication [1]_, and the data set [2]_.\n\n.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &\n Gallant, J. L. (2011). Reconstructing visual experiences from brain\n activity evoked by natural movies. Current Biology, 21(19), 1641-1646.\n\n.. [2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &\n Gallant, J. L. (2014): Gallant Lab Natural Movie 4T fMRI Data. CRCNS.org.\n http://dx.doi.org/10.6080/K00Z715X\n"
+"\n# Download the data set from CRCNS\n\n\nIn this script, we download the data set from CRCNS.\nA (free) account is required.\n\nCite this data set\n------------------\n\nThis tutorial is based on publicly available data\n`published on CRCNS <https://crcns.org/data-sets/vc/vim-2/about-vim-2>`_.\nIf you publish any work using this data set, please cite the original\npublication [1]_, and the data set [2]_.\n"
 ]
 },
 {
@@ -75,6 +75,13 @@
 "source": [
 "username = input(\"CRCNS username: \")\npassword = getpass.getpass(\"CRCNS password: \")\n\nfor datafile in DATAFILES:\n local_filename = download_crcns(datafile, username, password,\n destination=directory, unpack=True)"
 ]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"References\n----------\n\n.. [1] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &\n Gallant, J. L. (2011). Reconstructing visual experiences from brain\n activity evoked by natural movies. Current Biology, 21(19), 1641-1646.\n\n.. [2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &\n Gallant, J. L. (2014): Gallant Lab Natural Movie 4T fMRI Data. CRCNS.org.\n http://dx.doi.org/10.6080/K00Z715X\n\n"
+]
 }
 ],
 "metadata": {
