Commit a2a24b9

mvdoc and TomDLT authored
FIX use RidgeCV instead of KernelRidgeCV to fit the model without delays in notebook 4 (#16)
Co-authored-by: Tom Dupré la Tour <tom.duprelatour.10@gmail.com>
1 parent 3a6cbb4 · commit a2a24b9

15 files changed: 223 additions & 268 deletions
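
For context on the fix itself: notebook 4 fits a model on features without temporal delays, and this commit switches that fit from himalaya's KernelRidgeCV to RidgeCV. Below is a minimal sketch of the swap, assuming himalaya's scikit-learn-style estimator API; the array shapes and alpha grid are illustrative placeholders, not values from the notebook.

import numpy as np
from himalaya.ridge import RidgeCV               # primal ridge: cheap when n_features < n_samples
from himalaya.kernel_ridge import KernelRidgeCV  # dual (kernel) ridge: cheap when n_samples < n_features

# Illustrative shapes only: without delays the feature matrix is narrow,
# so the primal formulation is the natural choice.
X_train = np.random.randn(3600, 530).astype("float32")   # (n_samples, n_features)
Y_train = np.random.randn(3600, 1000).astype("float32")  # (n_samples, n_voxels)
alphas = np.logspace(1, 20, 20)

model = RidgeCV(alphas=alphas)          # after this commit
# model = KernelRidgeCV(alphas=alphas)  # before: equivalent predictions via the dual problem
model.fit(X_train, Y_train)
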

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -5,6 +5,7 @@
 .pytest_cache
 __pycache__
 .vscode/
+.idea/
 build/
 dist/

tutorials/notebooks/shortclips/00_download_shortclips.ipynb

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.0"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,
Lines changed: 138 additions & 193 deletions
@@ -1,195 +1,140 @@
 {
-"cells": [
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"%matplotlib inline"
-]
+"cells": [
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"%matplotlib inline"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"\n# Setup Google Colab\n\nIn this script, we setup a Google Colab environment. This script will only work\nwhen run from `Google Colab <https://colab.research.google.com/>`_). You can\nskip it if you run the tutorials on your machine.\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+""
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Change runtime to use a GPU\n\nThis tutorial is much faster when a GPU is available to run the computations.\nIn Google Colab you can request access to a GPU by changing the runtime type.\nTo do so, click the following menu options in Google Colab:\n\n(Menu) \"Runtime\" -> \"Change runtime type\" -> \"Hardware accelerator\" -> \"GPU\".\n\n"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Download the data and install all required dependencies\n\nUncomment and run the following cell to download the required packages.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"#!git config --global user.email \"you@example.com\" && git config --global user.name \"Your Name\"\n#!wget -O- http://neuro.debian.net/lists/impish.us-ca.libre | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list\n#!apt-key adv --recv-keys --keyserver hkps://keyserver.ubuntu.com 0xA5D32F012649A5A9 > /dev/null\n#!apt-get -qq update > /dev/null\n#!apt-get install -qq inkscape git-annex-standalone > /dev/null\n#!pip install -q voxelwise_tutorials\n#![ -f \"ngrok-stable-linux-amd64.zip\" ] || wget -q https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n#![ -f \"ngrok\" ] || unzip ngrok-stable-linux-amd64.zip"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"For the record, here is what each command does:\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# - Set up an email and user name to use git, git-annex, and datalad (required to download the data)\n# - Add NeuroDebian to the package sources\n# - Update the gpg keys to use NeuroDebian\n# - Update the list of available packages\n# - Install Inkscape to use more features from Pycortex, and install git-annex to download the data\n# - Install the tutorial helper package, and all the required dependencies\n# - Download ngrok to create a tunnel for pycortex 3D brain viewer\n# - Extract the ngrok archive"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now run the following cell to download the data for the tutorials.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"from voxelwise_tutorials.io import download_datalad\n\nDATAFILES = [\n    \"features/motion_energy.hdf\",\n    \"features/wordnet.hdf\",\n    \"mappers/S01_mappers.hdf\",\n    \"responses/S01_responses.hdf\",\n]\n\nsource = \"https://gin.g-node.org/gallantlab/shortclips\"\ndestination = \"/content/shortclips\"\n\nfor datafile in DATAFILES:\n    local_filename = download_datalad(\n        datafile,\n        destination=destination,\n        source=source\n    )"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now run the following cell to set up the environment variables for the\ntutorials and pycortex.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"import os\nos.environ['VOXELWISE_TUTORIALS_DATA'] = \"/content\"\n\nimport sklearn\nsklearn.set_config(assume_finite=True)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Your Google Colab environment is now set up for the voxelwise tutorials.\n\n"
+]
+}
+],
+"metadata": {
+"kernelspec": {
+"display_name": "Python 3",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.8.3"
+}
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"\n",
-"# Setup Google Colab\n",
-"\n",
-"In this script, we setup a Google Colab environment. This script will only work\n",
-"when run from `Google Colab <https://colab.research.google.com/>`_). You can\n",
-"skip it if you run the tutorials on your machine.\n"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Change runtime to use a GPU\n",
-"\n",
-"This tutorial is much faster when a GPU is available to run the computations.\n",
-"In Google Colab you can request access to a GPU by changing the runtime type.\n",
-"To do so, click the following menu options in Google Colab:\n",
-"\n",
-"(Menu) \"Runtime\" -> \"Change runtime type\" -> \"Hardware accelerator\" -> \"GPU\".\n",
-"\n"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Download the data and install all required dependencies\n",
-"\n",
-"Uncomment and run the following cell to download the required packages.\n",
-"\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"# !git config --global user.email \"you@example.com\" && git config --global user.name \"Your Name\"\n",
-"# !wget -O- http://neuro.debian.net/lists/impish.us-ca.libre | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list\n",
-"# !apt-key adv --recv-keys --keyserver hkps://keyserver.ubuntu.com 0xA5D32F012649A5A9 > /dev/null\n",
-"# !apt-get -qq update > /dev/null\n",
-"# !apt-get install -qq inkscape git-annex-standalone > /dev/null\n",
-"# !pip install -q voxelwise_tutorials \n",
-"# ![ -f \"ngrok-stable-linux-amd64.zip\" ] || wget -q https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n",
-"# ![ -f \"ngrok\" ] || unzip ngrok-stable-linux-amd64.zip"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"For the record, here is what each command does:\n",
-"\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"# - Set up an email and user name to use git, git-annex, and datalad (required to download the data)\n",
-"# - Add NeuroDebian to the package sources\n",
-"# - Update the gpg keys to use NeuroDebian\n",
-"# - Update the list of available packages\n",
-"# - Install Inkscape to use more features from Pycortex, and install git-annex to download the data\n",
-"# - Install the tutorial helper package, and all the required dependencies\n",
-"# - Download ngrok to create a tunnel for pycortex 3D brain viewer\n",
-"# - Extract the ngrok archive"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Now run the following cell to install the data for the tutorials."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"colab_data_directory = \"/content/shortclips\"\n",
-"\n",
-"from voxelwise_tutorials.io import download_datalad\n",
-"\n",
-"DATAFILES = [\n",
-"    \"features/motion_energy.hdf\",\n",
-"    \"features/wordnet.hdf\",\n",
-"    \"mappers/S01_mappers.hdf\",\n",
-"    # \"mappers/S02_mappers.hdf\",\n",
-"    # \"mappers/S03_mappers.hdf\",\n",
-"    # \"mappers/S04_mappers.hdf\",\n",
-"    # \"mappers/S05_mappers.hdf\",\n",
-"    \"responses/S01_responses.hdf\",\n",
-"    # \"responses/S02_responses.hdf\",\n",
-"    # \"responses/S03_responses.hdf\",\n",
-"    # \"responses/S04_responses.hdf\",\n",
-"    # \"responses/S05_responses.hdf\",\n",
-"    # \"stimuli/test.hdf\",\n",
-"    # \"stimuli/train_00.hdf\",\n",
-"    # \"stimuli/train_01.hdf\",\n",
-"    # \"stimuli/train_02.hdf\",\n",
-"    # \"stimuli/train_03.hdf\",\n",
-"    # \"stimuli/train_04.hdf\",\n",
-"    # \"stimuli/train_05.hdf\",\n",
-"    # \"stimuli/train_06.hdf\",\n",
-"    # \"stimuli/train_07.hdf\",\n",
-"    # \"stimuli/train_08.hdf\",\n",
-"    # \"stimuli/train_09.hdf\",\n",
-"    # \"stimuli/train_10.hdf\",\n",
-"    # \"stimuli/train_11.hdf\",\n",
-"]\n",
-"\n",
-"source = \"https://gin.g-node.org/gallantlab/shortclips\"\n",
-"\n",
-"for datafile in DATAFILES:\n",
-"    local_filename = download_datalad(\n",
-"        datafile, \n",
-"        destination=colab_data_directory, \n",
-"        source=source\n",
-"    )"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Now run the following cell to set up the environment variables for the\n",
-"tutorials and pycortex.\n",
-"\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"import os\n",
-"os.environ['VOXELWISE_TUTORIALS_DATA'] = \"/content\"\n",
-"\n",
-"import sklearn\n",
-"sklearn.set_config(assume_finite=True)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Your Google Colab environment is now set up for the voxelwise tutorials.\n",
-"\n"
-]
-}
-],
-"metadata": {
-"kernelspec": {
-"display_name": "Python 3 (ipykernel)",
-"language": "python",
-"name": "python3"
-},
-"language_info": {
-"codemirror_mode": {
-"name": "ipython",
-"version": 3
-},
-"file_extension": ".py",
-"mimetype": "text/x-python",
-"name": "python",
-"nbconvert_exporter": "python",
-"pygments_lexer": "ipython3",
-"version": "3.9.12"
-}
-},
-"nbformat": 4,
-"nbformat_minor": 1
-}
+"nbformat": 4,
+"nbformat_minor": 0
+}

tutorials/notebooks/shortclips/01_plot_explainable_variance.ipynb

Lines changed: 3 additions & 3 deletions
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Compute the explainable variance\n\nBefore fitting any voxelwise model to fMRI responses, it is good practice to\nquantify the amount of signal in the test set that can be predicted by an\nencoding model. This quantity is called the *explainable variance*.\n\nThe measured signal can be decomposed into a sum of two components: the\nstimulus-dependent signal and noise. If we present the same stimulus multiple\ntimes and we record brain activity for each repetition, the stimulus-dependent\nsignal will be the same across repetitions while the noise will vary across\nrepetitions. In voxelwise modeling, the features used to model brain activity\nare the same for each repetition of the stimulus. Thus, encoding models will\npredict only the repeatable stimulus-dependent signal.\n\nThe stimulus-dependent signal can be estimated by taking the mean of \nbrain responses over repeats of the same stimulus or experiment. The variance\nof the estimated stimulus-dependent signal, which we call the explainable\nvariance, is proportional to the maximum prediction accuracy that can be\nobtained by a voxelwise encoding model in the test set. \n\nMathematically, let $y_i, i = 1 \\dots N$ be the measured signal in\na voxel for each of the $N$ repetitions of the same stimulus and \n$\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i$ the average brain response\nacross repetitions. For each repeat, we define the residual timeseries\nbetween brain response and average brain response as $r_i = y_i - \\bar{y}$.\nThe explainable variance (EV) is estimated as\n\n\\begin{align}\\text{EV} = \\frac{1}{N}\\sum_{i=1}^N\\text{Var}(y_i) - \\frac{N}{N-1}\\sum_{i=1}^N\\text{Var}(r_i)\\end{align}\n\n\nIn the literature, the explainable\nvariance is also known as the *signal power*. For more information, see these\nreferences [1]_ [2]_ [3]_.\n"
+"\n# Compute the explainable variance\n\nBefore fitting any voxelwise model to fMRI responses, it is good practice to\nquantify the amount of signal in the test set that can be predicted by an\nencoding model. This quantity is called the *explainable variance*.\n\nThe measured signal can be decomposed into a sum of two components: the\nstimulus-dependent signal and noise. If we present the same stimulus multiple\ntimes and we record brain activity for each repetition, the stimulus-dependent\nsignal will be the same across repetitions while the noise will vary across\nrepetitions. In voxelwise modeling, the features used to model brain activity\nare the same for each repetition of the stimulus. Thus, encoding models will\npredict only the repeatable stimulus-dependent signal.\n\nThe stimulus-dependent signal can be estimated by taking the mean of brain\nresponses over repeats of the same stimulus or experiment. The variance of the\nestimated stimulus-dependent signal, which we call the explainable variance, is\nproportional to the maximum prediction accuracy that can be obtained by a\nvoxelwise encoding model in the test set.\n\nMathematically, let $y_i, i = 1 \\dots N$ be the measured signal in a\nvoxel for each of the $N$ repetitions of the same stimulus and\n$\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i$ the average brain response\nacross repetitions. For each repeat, we define the residual timeseries between\nbrain response and average brain response as $r_i = y_i - \\bar{y}$. The\nexplainable variance (EV) is estimated as\n\n\\begin{align}\\text{EV} = \\frac{1}{N}\\sum_{i=1}^N\\text{Var}(y_i) - \\frac{N}{N-1}\\sum_{i=1}^N\\text{Var}(r_i)\\end{align}\n\n\nIn the literature, the explainable variance is also known as the *signal\npower*. For more information, see these references [1]_ [2]_ [3]_.\n"
 ]
 },
 {
@@ -195,7 +195,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This figure is a flattened map of the cortical surface. A number of regions of\ninterest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
+"This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
 ]
 },
 {
@@ -337,7 +337,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.0"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,

tutorials/notebooks/shortclips/02_plot_ridge_regression.ipynb

Lines changed: 1 addition & 1 deletion
@@ -406,7 +406,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.0"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,

tutorials/notebooks/shortclips/03_plot_wordnet_model.ipynb

Lines changed: 1 addition & 1 deletion
@@ -653,7 +653,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.0"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,
