"\n# Setup Google Colab\n\nIn this script, we set up a Google Colab environment. This script will only work\nwhen run from `Google Colab <https://colab.research.google.com/>`_. You can\nskip it if you run the tutorials on your machine.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Change runtime to use a GPU\n\nThis tutorial is much faster when a GPU is available to run the computations.\nIn Google Colab you can request access to a GPU by changing the runtime type.\nTo do so, click the following menu options in Google Colab:\n\n(Menu) \"Runtime\" -> \"Change runtime type\" -> \"Hardware accelerator\" -> \"GPU\".\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the data and install all required dependencies\n\nUncomment and run the following cell to download the required packages.\n\n"
"For the record, here is what each command does:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# - Set up an email and user name to use git, git-annex, and datalad (required to download the data)\n# - Add NeuroDebian to the package sources\n# - Update the gpg keys to use NeuroDebian\n# - Update the list of available packages\n# - Install Inkscape to use more features from Pycortex, and install git-annex to download the data\n# - Install the tutorial helper package, and all the required dependencies\n# - Download ngrok to create a tunnel for pycortex 3D brain viewer\n# - Extract the ngrok archive"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now run the following cell to download the data for the tutorials.\n\n"
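After changing the runtime type, a cell like the following can confirm that a GPU was actually attached. This is a sketch, not part of the tutorial: it assumes `torch` (preinstalled on Colab) and degrades gracefully on a CPU-only machine.

```python
# Report whether a CUDA GPU is visible to PyTorch. On a correctly
# configured Colab GPU runtime this prints the device name; on a CPU
# runtime (or a machine without torch) it falls back without failing.
try:
    import torch
    has_gpu = torch.cuda.is_available()
    device_name = torch.cuda.get_device_name(0) if has_gpu else None
except ImportError:  # torch not installed outside Colab
    has_gpu, device_name = False, None

print("GPU available:", has_gpu, "-", device_name or "running on CPU")
```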
tutorials/notebooks/shortclips/01_plot_explainable_variance.ipynb (3 additions, 3 deletions)
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Compute the explainable variance\n\nBefore fitting any voxelwise model to fMRI responses, it is good practice to\nquantify the amount of signal in the test set that can be predicted by an\nencoding model. This quantity is called the *explainable variance*.\n\nThe measured signal can be decomposed into a sum of two components: the\nstimulus-dependent signal and noise. If we present the same stimulus multiple\ntimes and we record brain activity for each repetition, the stimulus-dependent\nsignal will be the same across repetitions while the noise will vary across\nrepetitions. In voxelwise modeling, the features used to model brain activity\nare the same for each repetition of the stimulus. Thus, encoding models will\npredict only the repeatable stimulus-dependent signal.\n\nThe stimulus-dependent signal can be estimated by taking the mean of \nbrain responses over repeats of the same stimulus or experiment. The variance\nof the estimated stimulus-dependent signal, which we call the explainable\nvariance, is proportional to the maximum prediction accuracy that can be\nobtained by a voxelwise encoding model in the test set. \n\nMathematically, let $y_i, i = 1 \\dots N$ be the measured signal in\na voxel for each of the $N$ repetitions of the same stimulus and \n$\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i$ the average brain response\nacross repetitions. For each repeat, we define the residual timeseries\nbetween brain response and average brain response as $r_i = y_i - \\bar{y}$.\nThe explainable variance (EV) is estimated as\n\n\\begin{align}\\text{EV} = \\frac{1}{N}\\sum_{i=1}^N\\text{Var}(y_i) - \\frac{N}{N-1}\\sum_{i=1}^N\\text{Var}(r_i)\\end{align}\n\n\nIn the literature, the explainable\nvariance is also known as the *signal power*. For more information, see these\nreferences [1]_ [2]_ [3]_.\n"
+"\n# Compute the explainable variance\n\nBefore fitting any voxelwise model to fMRI responses, it is good practice to\nquantify the amount of signal in the test set that can be predicted by an\nencoding model. This quantity is called the *explainable variance*.\n\nThe measured signal can be decomposed into a sum of two components: the\nstimulus-dependent signal and noise. If we present the same stimulus multiple\ntimes and we record brain activity for each repetition, the stimulus-dependent\nsignal will be the same across repetitions while the noise will vary across\nrepetitions. In voxelwise modeling, the features used to model brain activity\nare the same for each repetition of the stimulus. Thus, encoding models will\npredict only the repeatable stimulus-dependent signal.\n\nThe stimulus-dependent signal can be estimated by taking the mean of brain\nresponses over repeats of the same stimulus or experiment. The variance of the\nestimated stimulus-dependent signal, which we call the explainable variance, is\nproportional to the maximum prediction accuracy that can be obtained by a\nvoxelwise encoding model in the test set.\n\nMathematically, let $y_i, i = 1 \\dots N$ be the measured signal in a\nvoxel for each of the $N$ repetitions of the same stimulus and\n$\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i$ the average brain response\nacross repetitions. For each repeat, we define the residual timeseries between\nbrain response and average brain response as $r_i = y_i - \\bar{y}$. The\nexplainable variance (EV) is estimated as\n\n\\begin{align}\\text{EV} = \\frac{1}{N}\\sum_{i=1}^N\\text{Var}(y_i) - \\frac{1}{N-1}\\sum_{i=1}^N\\text{Var}(r_i)\\end{align}\n\n\nIn the literature, the explainable variance is also known as the *signal\npower*. For more information, see these references [1]_ [2]_ [3]_.\n"
 ]
 },
 {
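The EV estimator described in the cell above can be sketched in NumPy. This is an illustrative transcription, not the tutorial's own helper: the function name, array shapes, and simulated data are assumptions.

```python
import numpy as np

def explainable_variance(y):
    """Estimate the explainable variance of one voxel.

    y : array of shape (n_repeats, n_timepoints), the responses to
        N = n_repeats presentations of the same stimulus.
    """
    n_repeats = y.shape[0]
    y_mean = y.mean(axis=0)            # estimated stimulus-dependent signal
    residuals = y - y_mean             # r_i = y_i - y_bar
    total = y.var(axis=1).mean()       # (1/N) sum_i Var(y_i)
    noise = residuals.var(axis=1).sum() / (n_repeats - 1)
    return total - noise

# Demo on simulated data: a shared signal plus fresh noise per repeat.
rng = np.random.default_rng(0)
signal = rng.standard_normal(20000)                    # repeatable component
y = signal + 0.5 * rng.standard_normal((10, 20000))    # 10 noisy repeats
ev = explainable_variance(y)                           # close to signal.var()
```

Because the noise is independent across repeats, the noise term cancels the noise contribution in the total variance, leaving an estimate of the variance of the repeatable signal.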
@@ -195,7 +195,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This figure is a flattened map of the cortical surface. A number of regions of\ninterest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
+"This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondence between the flattened\nand the folded cortical surface of the brain.\n\n"