
Commit 3c7f6ac

MNT update notebooks

1 parent 5567f63 commit 3c7f6ac

13 files changed: 294 additions & 69 deletions

tutorials/notebooks/movies_3T/00_download_vim5.ipynb

Lines changed: 1 addition & 1 deletion

@@ -100,7 +100,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,
Lines changed: 104 additions & 0 deletions

@@ -0,0 +1,104 @@
+{
+"cells": [
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"%matplotlib inline"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"\n# Setup Google Colab\n\nIn this script, we setup a Google Colab environment. This script will only work\nwhen run from `Google Colab <https://colab.research.google.com/>`_). You can\nskip it if you run the tutorials on your machine.\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+""
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Change runtime to use a GPU\n\nThis tutorial is much faster when a GPU is available to run the computations.\nIn Google Colab you can request access to a GPU by changing the runtime type. \nTo do so, click the following menu options in Google Colab: \n\n(Menu) \"Runtime\" -> \"Change runtime type\" -> \"Hardware accelerator\" -> \"GPU\".\n\n"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Download the data and install all required dependencies\n\nUncomment and run the following cell to download the tutorial data and\ninstall the required dependencies\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# !gdown --id 1b0I0Ytj06m6GCmfxfNrZuyF97fDo3NZb && \\\n# tar xzf vim-5-for-ccn.tar.gz && \\\n# pip install -q voxelwise_tutorials && \\\n# git clone https://github.com/gallantlab/pycortex"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now run the following cell to set up the environment variables for the tutorials\nand pycortex.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"import os\nos.environ['VOXELWISE_TUTORIALS_DATA'] = \"/content\"\n\nimport cortex\nfilestore = \"/content/pycortex/filestore/\"\ncortex.options.config['basic']['filestore'] = filestore\ncortex.options.config['webgl']['colormaps'] = \"/content/pycortex/filestore/colormaps\"\ncortex.database.db = cortex.database.Database(filestore)\ncortex.db = cortex.database.db\ncortex.utils.db = cortex.database.db\ncortex.dataset.braindata.db = cortex.database.db"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Your Google Colab environment is now set up for the voxelwise tutorials.\n\n"
+]
+}
+],
+"metadata": {
+"kernelspec": {
+"display_name": "Python 3",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.8.3"
+}
+},
+"nbformat": 4,
+"nbformat_minor": 0
+}
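Aside: the new setup notebook rewires pycortex's filestore unconditionally, since it only ever runs inside Colab. When adapting this pattern elsewhere, it is common to gate Colab-specific paths behind a runtime check; a minimal sketch of that idea (the helper name `in_colab` and the fallback path `./data` are illustrative, not part of the commit):

```python
def in_colab() -> bool:
    """Return True when running inside a Google Colab runtime."""
    try:
        # google.colab is only importable inside Colab
        import google.colab  # noqa: F401
        return True
    except ImportError:
        return False

# Point the tutorial data loaders at /content only when actually on Colab
data_dir = "/content" if in_colab() else "./data"
print(data_dir)
```

The `import google.colab` probe is the same trick the updated 01_plot_explainable_variance notebook uses below; wrapping it in a helper keeps the path logic in one place.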

tutorials/notebooks/movies_3T/00_setup_colab.ipynb

Lines changed: 2 additions & 2 deletions

@@ -26,7 +26,7 @@
 },
 "outputs": [],
 "source": [
-"#"
+""
 ]
 },
 {
@@ -103,7 +103,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,

tutorials/notebooks/movies_3T/01_plot_explainable_variance.ipynb

Lines changed: 10 additions & 10 deletions

@@ -209,7 +209,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Map to \"fsaverage\"\n\nThe second mapper we provide maps the voxel data to a Freesurfer\naverage surface (\"fsaverage\"), that can be used in ``pycortex``.\nFirst, let's download the \"fsaverage\" surface.\n\n"
+"## Map to \"fsaverage\"\n\nThe second mapper we provide maps the voxel data to a Freesurfer\naverage surface (\"fsaverage\"), that can be used in ``pycortex``.\n\nIf you are running the notebook on Colab, you might need to update the\npycortex filestore as following:\n\n"
 ]
 },
 {
@@ -220,14 +220,14 @@
 },
 "outputs": [],
 "source": [
-"import cortex\n\nsurface = \"fsaverage\"\n\nif not hasattr(cortex.db, surface):\n    cortex.utils.download_subject(subject_id=surface)"
+"import cortex\ntry:\n    import google.colab  # noqa\n    in_colab = True\nexcept ImportError:\n    in_colab = False\nprint(in_colab)\n\nif in_colab:\n    filestore = cortex.options.config['basic']['filestore']\n    cortex.database.db = cortex.database.Database(filestore)\n    cortex.db = cortex.database.db\n    cortex.utils.db = cortex.database.db\n    cortex.dataset.braindata.db = cortex.database.db\n    cortex.quickflat.utils.db = cortex.database.db\n    cortex.quickflat.composite.db = cortex.database.db"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"If you are running the notebook on Colab, you might need to update the\npycortex filestore as following:\n\n"
+"Now, let's download the \"fsaverage\" surface.\n\n"
 ]
 },
 {
@@ -238,7 +238,7 @@
 },
 "outputs": [],
 "source": [
-"try:\n    import google.colab  # noqa\n    in_colab = True\nexcept ImportError:\n    in_colab = False\nprint(in_colab)\n\nif in_colab:\n    filestore = cortex.options.config['basic']['filestore']\n    cortex.database.db = cortex.database.Database(filestore)\n    cortex.db = cortex.database.db\n    cortex.utils.db = cortex.database.db\n    cortex.dataset.braindata.db = cortex.database.db\n    cortex.quickflat.utils.db = cortex.database.db\n    cortex.quickflat.composite.db = cortex.database.db"
+"surface = \"fsaverage\"\n\nif not hasattr(cortex.db, surface):\n    cortex.utils.download_subject(subject_id=surface)"
 ]
 },
 {
@@ -281,7 +281,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"To start an interactive 3D viewer in the browser, use the ``webshow``\nfunction.\n\n"
+"To start an interactive 3D viewer in the browser, we can use the ``webshow``\nfunction in pycortex.\nIf you are running the notebook on Colab, you first need to tunnel the pycortex\napplication out of Colab. To do so, use the following cell to start a tunnel\nwith ``ngrok`` and to get an address where the pycortex viewer will be made\naccessible.\n\n"
 ]
 },
 {
@@ -292,14 +292,14 @@
 },
 "outputs": [],
 "source": [
-"if False:\n    cortex.webshow(vertex, open_browser=False, port=8050)"
+"if in_colab:\n    from IPython import get_ipython\n    get_ipython().system_raw('./ngrok http 8050 &')\n    plt.pause(1)\n\n    command = \"\"\"\n    curl -s http://localhost:4040/api/tunnels | python3 -c \\\n    \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])\"\n    \"\"\"\n    result = get_ipython().getoutput(command, split=True)\n    print(\"Use the following address to connect to the brain viewer:\\n\"\n          f\"{result}\\n\"\n          \"and not the one proposed by pycortex ('Open viewer: ...')\\n\")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"If you are running the notebook on Colab, you need to tunnel the pycortex\napplication out of Colab. To do so, use the following cell to start a tunnel\nwith ``ngrok`` and to get an address where the pycortex viewer will be made\naccessible.\n\n"
+"Now you can start an interactive 3D viewer by changing ``run_webshow`` to\n``True`` and running the following cell. If you are using Colab, remember to\nuse the address returned by ngrok in the cell above rather than the address\nreturned by this cell.\n\n"
 ]
 },
 {
@@ -310,7 +310,7 @@
 },
 "outputs": [],
 "source": [
-"if in_colab:\n    from IPython import get_ipython\n    get_ipython().system_raw('./ngrok http 8050 &')\n    plt.pause(1)\n\n    command = \"\"\"\n    curl -s http://localhost:4040/api/tunnels | python3 -c \\\n    \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])\"\n    \"\"\"\n    result = get_ipython().getoutput(command, split=True)\n    print(\"Use the following address to connect to the brain viewer:\\n\"\n          f\"{result}\\n\"\n          \"and not the one proposed by pycortex ('Open viewer: ...')\\n\")"
+"run_webshow = False\nif run_webshow:\n    cortex.webshow(vertex, open_browser=False, port=8050)"
 ]
 },
 {
@@ -328,7 +328,7 @@
 },
 "outputs": [],
 "source": [
-"from cortex.testing_utils import has_installed\n\n\nfig = cortex.quickshow(vertex, colorbar_location='right',\n                       with_rois=has_installed(\"inkscape\"))\nplt.show()"
+"from cortex.testing_utils import has_installed\n\nfig = cortex.quickshow(vertex, colorbar_location='right',\n                       with_rois=has_installed(\"inkscape\"))\nplt.show()"
 ]
 },
 {
@@ -355,7 +355,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,
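Aside: the ngrok cell added above shells out to `curl` and a `python3 -c` one-liner to read the tunnel's public URL from ngrok's local inspection API. The same extraction can be done directly in Python; a sketch using a hand-written payload shaped like the `http://localhost:4040/api/tunnels` response (the URL value here is made up for illustration):

```python
import json

# Payload shaped like ngrok's local API response; the URL is illustrative.
payload = '{"tunnels": [{"public_url": "https://abc123.ngrok.io", "proto": "https"}]}'

def first_public_url(raw: str) -> str:
    # Same extraction as the notebook's python3 -c one-liner:
    # take the public_url of the first tunnel in the list
    return json.loads(raw)["tunnels"][0]["public_url"]

print(first_public_url(payload))  # → https://abc123.ngrok.io
```

Doing the parsing in-process avoids the quoting gymnastics of embedding a Python one-liner inside a shell string inside a notebook cell.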

tutorials/notebooks/movies_3T/02_plot_wordnet_model.ipynb

Lines changed: 1 addition & 1 deletion

@@ -653,7 +653,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,

tutorials/notebooks/movies_3T/03_plot_hemodynamic_response.ipynb

Lines changed: 38 additions & 2 deletions

@@ -213,6 +213,42 @@
 "pipeline.fit(X_train, Y_train)\n\nscores = pipeline.score(X_test, Y_test)\nscores = backend.to_numpy(scores)\nprint(\"(n_voxels,) =\", scores.shape)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Intermission: understanding delays\n\nTo have an intuitive understanding of what we accomplish by delaying the\nfeatures before model fitting, we will simulate one voxel and a single\nfeature. We will then create a ``Delayer`` object (which was used in the\nprevious pipeline) and visualize its effect on our single feature. Let's\nstart by simulating the data.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# number of total trs\nn_trs = 50\n# repetition time for the simulated data\nTR = 2.0\nrng = np.random.RandomState(42)\ny = rng.randn(n_trs)\nx = np.zeros(n_trs)\n# add some arbitrary value to our feature\nx[15:20] = .5\nx += rng.randn(n_trs) * 0.1  # add some noise\n\n# create a delayer object and delay the features\ndelayer = Delayer(delays=[0, 1, 2, 3, 4])\nx_delayed = delayer.fit_transform(x[:, None])"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"In the next cell we are plotting six lines. The subplot at the top shows the\nsimulated BOLD response, while the other subplots show the simulated feature\nat different delays. The effect of the delayer is clear: it creates multiple\ncopies of the original feature shifted forward in time by how many samples we\nrequested (in this case, from 0 to 4 samples, which correspond to 0, 2, 4, 6,\nand 8 s in time with a 2 s TR).\n\nWhen these delayed features are used to fit a voxelwise encoding model, the\nbrain response $y$ at time $t$ is simultaneously modeled by the\nfeature $x$ at times $t-0, t-2, t-4, t-6, t-8$. In the remaining\nof this example we will see that this method improves model prediction accuracy\nand it allows to account for the underlying shape of the hemodynamic response\nfunction.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"import matplotlib.pyplot as plt\nfig, axs = plt.subplots(6, 1, figsize=(8, 6.5), constrained_layout=True, \n                        sharex=True)\ntimes = np.arange(n_trs)*TR\n\naxs[0].plot(times, y, color=\"r\")\naxs[0].set_title(\"BOLD response\")\nfor i, (ax, xx) in enumerate(zip(axs.flat[1:], x_delayed.T)):\n    ax.plot(times, xx, color='k')\n    ax.set_title(\"$x(t - {0:.0f})$ (feature delayed by {1} sample{2})\".format(\n        i*TR, i, \"\" if i == 1 else \"s\"))\nfor ax in axs.flat:\n    ax.axvline(40, color='gray')\n    ax.set_yticks([])\n_ = axs[-1].set_xlabel(\"Time [s]\")\nplt.show()"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -264,7 +300,7 @@
 },
 "outputs": [],
 "source": [
-"import matplotlib.pyplot as plt\nfrom voxelwise_tutorials.viz import plot_hist2d\n\nax = plot_hist2d(scores_no_delay, scores)\nax.set(\n    title='Generalization R2 scores',\n    xlabel='model without delays',\n    ylabel='model with delays',\n)\nplt.show()"
+"from voxelwise_tutorials.viz import plot_hist2d\n\nax = plot_hist2d(scores_no_delay, scores)\nax.set(\n    title='Generalization R2 scores',\n    xlabel='model without delays',\n    ylabel='model with delays',\n)\nplt.show()"
 ]
 },
 {
@@ -316,7 +352,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,
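Aside: the new "understanding delays" cells build shifted copies of a feature with the tutorial's `Delayer` transformer. The core operation is just a forward shift with zero padding at the start; a standalone sketch under that assumption (the helper `delay_feature` is illustrative, not the library's implementation, whose exact padding behavior may differ):

```python
import numpy as np

def delay_feature(x, delays):
    """Stack copies of 1-D feature x, each shifted forward by d samples.

    Samples shifted past the start are zero-padded, so column j at time t
    holds x at time t - delays[j].
    """
    n = len(x)
    out = np.zeros((n, len(delays)))
    for j, d in enumerate(delays):
        if d == 0:
            out[:, j] = x
        else:
            out[d:, j] = x[:-d]  # shift forward in time, zero-pad the start
    return out

x = np.arange(5, dtype=float)            # [0, 1, 2, 3, 4]
xd = delay_feature(x, delays=[0, 1, 2])
print(xd[:, 1])                          # column delayed by 1 sample, zero-padded
```

With a 2 s TR, delays of 0 to 4 samples let the regression weight the feature at 0 to 8 s before each BOLD sample, which is how the delayed design matrix absorbs the hemodynamic lag.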

tutorials/notebooks/movies_3T/04_plot_motion_energy_model.ipynb

Lines changed: 1 addition & 1 deletion

@@ -341,7 +341,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,

tutorials/notebooks/movies_3T/05_plot_banded_ridge_model.ipynb

Lines changed: 1 addition & 1 deletion

@@ -510,7 +510,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,

tutorials/notebooks/movies_3T/06_extract_motion_energy.ipynb

Lines changed: 1 addition & 1 deletion

@@ -136,7 +136,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.3"
+"version": "3.8.3"
 }
 },
 "nbformat": 4,
