
Commit 9f8cdc2

ENH add ngrok to Colab setup to tunnel brain viewer out of Colab

1 parent 3cbd480

6 files changed: 159 additions & 41 deletions

doc/Makefile

Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ NBDIR = ../tutorials/notebooks/movies_3T
 
 merge-notebooks:
 	python merge_notebooks.py \
-		$(NBDIR)/00_load_colab.ipynb \
+		$(NBDIR)/00_setup_colab.ipynb \
 		$(NBDIR)/01_plot_explainable_variance.ipynb \
 		$(NBDIR)/02_plot_wordnet_model.ipynb \
 		$(NBDIR)/03_plot_hemodynamic_response.ipynb \
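The merge-notebooks target passes the notebook list to merge_notebooks.py, whose contents are not part of this diff. As a rough sketch of what such a script might do (a hypothetical minimal version: concatenating the JSON cell lists and keeping the first notebook's metadata, ignoring metadata conflicts):

```python
import json


def merge_notebooks(paths, out_path):
    """Concatenate the cells of several .ipynb files into one notebook.

    Keeps the metadata of the first notebook. A hypothetical minimal
    sketch, not the actual merge_notebooks.py used by the Makefile.
    """
    merged = None
    for path in paths:
        with open(path) as f:
            nb = json.load(f)
        if merged is None:
            merged = nb  # first notebook provides metadata and nbformat
        else:
            merged["cells"].extend(nb["cells"])
    with open(out_path, "w") as f:
        json.dump(merged, f, indent=1)
```

Since .ipynb files are plain JSON, no notebook library is strictly required for this, though a real script would more likely use nbformat to validate the result.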
Lines changed: 14 additions & 1 deletion

@@ -28,9 +28,22 @@
 
 # ![ -f "vim-5-for-ccn.tar.gz" ] || gdown --id 1b0I0Ytj06m6GCmfxfNrZuyF97fDo3NZb
 # ![ -d "vim-5" ] || tar xzf vim-5-for-ccn.tar.gz
-# ![ -d "pycortex" ] || git clone https://github.com/gallantlab/pycortex
+# ![ -d "pycortex" ] || git clone --quiet https://github.com/gallantlab/pycortex
+# !apt-get install -qq inkscape > /dev/null
 # !pip install -q voxelwise_tutorials
+# ![ -f "ngrok-stable-linux-amd64.zip" ] || wget -q https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
+# ![ -f "ngrok" ] || unzip ngrok-stable-linux-amd64.zip
 
+###############################################################################
+# For the record, here is what each command does:
+#
+# - Download the dataset archive
+# - Extract the dataset archive
+# - Clone Pycortex to fix some filestore issues with Colab
+# - Install Inkscape, to use more features from Pycortex
+# - Install the tutorial helper package, and all the required dependencies
+# - Download ngrok to create a tunnel for pycortex 3D brain viewer
+# - Extract the ngrok archive
 
 ###############################################################################
 # Now run the following cell to set up the environment variables for the

tutorials/movies_3T/01_plot_explainable_variance.py

Lines changed: 37 additions & 13 deletions
@@ -15,14 +15,14 @@
 are the same for each repetition of the stimulus. Thus, encoding models will
 predict only the repeatable stimulus-dependent signal.
 
-The stimulus-dependent signal can be estimated by taking the mean of
+The stimulus-dependent signal can be estimated by taking the mean of 
 brain responses over repeats of the same stimulus or experiment. The variance
 of the estimated stimulus-dependent signal, which we call the explainable
 variance, is proportional to the maximum prediction accuracy that can be
-obtained by a voxelwise encoding model in the test set.
+obtained by a voxelwise encoding model in the test set. 
 
 Mathematically, let :math:`y_i, i = 1 \\dots N` be the measured signal in
-a voxel for each of the :math:`N` repetitions of the same stimulus and
+a voxel for each of the :math:`N` repetitions of the same stimulus and 
 :math:`\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i` the average brain response
 across repetitions. For each repeat, we define the residual timeseries
 between brain response and average brain response as :math:`r_i = y_i - \\bar{y}`.
@@ -114,7 +114,7 @@
 plt.show()
 
 ###############################################################################
-# We see that many voxels have low explainable variance. This is
+# We see that many voxels have low explainable variance. This is 
 # expected, since many voxels are not driven by a visual stimulus, and their
 # response changes over repeats of the same stimulus.
 # We also see that some voxels have high explainable variance (around 0.7). The
@@ -150,8 +150,8 @@
 plt.show()
 
 ###############################################################################
-# This figure is a flattened map of the cortical surface. A number of regions
-# of interest (ROIs) have been labeled to ease interpretation. If you have
+# This figure is a flattened map of the cortical surface. A number of regions of
+# interest (ROIs) have been labeled to ease interpretation. If you have
 # never seen such a flatmap, we recommend taking a look at a `pycortex brain
 # viewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays
 # the brain in 3D. In this viewer, press "I" to inflate the brain, "F" to
@@ -198,6 +198,8 @@
 cortex.db = cortex.database.db
 cortex.utils.db = cortex.database.db
 cortex.dataset.braindata.db = cortex.database.db
+cortex.quickflat.utils.db = cortex.database.db
+cortex.quickflat.composite.db = cortex.database.db
 
 ###############################################################################
 # Then, we load the "fsaverage" mapper. The mapper is a matrix of shape
@@ -215,12 +217,33 @@
 # projected data. This object can be used either in a ``pycortex`` interactive
 # 3D viewer, or in a ``matplotlib`` figure showing only the flatmap.
 
-vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='inferno')
+vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='viridis')
 
 ###############################################################################
-# To start an interactive 3D viewer in the browser, use the following function:
-if False:
-    cortex.webshow(vertex, open_browser=True)
+# To start an interactive 3D viewer in the browser, use the ``webshow``
+# function.
+
+if True:
+    cortex.webshow(vertex, open_browser=False, port=8050)
+
+###############################################################################
+# If you are running the notebook on Colab, you need to tunnel the pycortex
+# application out of Colab. To do so, use the following cell to start a tunnel
+# with ``ngrok`` and to get an address where the pycortex viewer will be made
+# accessible.
+
+if in_colab:
+    from IPython import get_ipython
+    get_ipython().system_raw('./ngrok http 8050 &')
+
+    command = """
+    curl -s http://localhost:4040/api/tunnels | python3 -c \
+    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
+    """
+    result = get_ipython().getoutput(command, split=True)
+    print("Use the following address to connect to the brain viewer:\n"
+          f"{result}\n"
+          "and not the one proposed by pycortex ('Open viewer: ...')\n")
 
 ###############################################################################
 # Alternatively, to plot a flatmap in a ``matplotlib`` figure, use the
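The curl one-liner in the cell above simply reads ngrok's local inspection API and picks the first tunnel's public URL. The same parsing can be done directly in Python; the payload below is illustrative only (field names follow ngrok's local API at http://localhost:4040/api/tunnels and may differ across ngrok versions):

```python
import json


def first_public_url(api_response):
    """Extract the first tunnel's public URL from an ngrok API response."""
    return json.loads(api_response)["tunnels"][0]["public_url"]


# Illustrative payload; a real one comes from:
#   curl -s http://localhost:4040/api/tunnels
payload = '{"tunnels": [{"proto": "https", "public_url": "https://abc123.ngrok.io"}]}'
print(first_public_url(payload))  # → https://abc123.ngrok.io
```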
@@ -231,9 +254,10 @@
 
 from cortex.testing_utils import has_installed
 
-if has_installed("inkscape"):
-    fig = cortex.quickshow(vertex, colorbar_location='right')
-    plt.show()
+
+fig = cortex.quickshow(vertex, colorbar_location='right',
+                       with_rois=has_installed("inkscape"))
+plt.show()
 
 
 ###############################################################################

tutorials/notebooks/movies_3T/00_load_colab.ipynb renamed to tutorials/notebooks/movies_3T/00_setup_colab.ipynb

Lines changed: 8 additions & 1 deletion
@@ -51,7 +51,14 @@
 },
 "outputs": [],
 "source": [
-"# ![ -f \"vim-5-for-ccn.tar.gz\" ] || gdown --id 1b0I0Ytj06m6GCmfxfNrZuyF97fDo3NZb\n# ![ -d \"vim-5\" ] || tar xzf vim-5-for-ccn.tar.gz\n# ![ -d \"pycortex\" ] || git clone https://github.com/gallantlab/pycortex\n# !pip install -q voxelwise_tutorials"
+"# ![ -f \"vim-5-for-ccn.tar.gz\" ] || gdown --id 1b0I0Ytj06m6GCmfxfNrZuyF97fDo3NZb\n# ![ -d \"vim-5\" ] || tar xzf vim-5-for-ccn.tar.gz\n# ![ -d \"pycortex\" ] || git clone --quiet https://github.com/gallantlab/pycortex\n# !apt-get install -qq inkscape > /dev/null\n# !pip install -q voxelwise_tutorials\n# ![ -f \"ngrok-stable-linux-amd64.zip\" ] || wget -q https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip\n# ![ -f \"ngrok\" ] || unzip ngrok-stable-linux-amd64.zip"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"For the record, here is what each command does:\n\n- Download the dataset archive\n- Extract the dataset archive\n- Clone Pycortex to fix some filestore issues with Colab\n- Install Inkscape, to use more features from Pycortex\n- Install the tutorial helper package, and all the required dependencies\n- Download ngrok to create a tunnel for pycortex 3D brain viewer\n- Extract the ngrok archive\n\n"
 ]
 },
 {

tutorials/notebooks/movies_3T/01_plot_explainable_variance.ipynb

Lines changed: 26 additions & 8 deletions
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Compute the explainable variance\n\nBefore fitting any voxelwise model to fMRI responses, it is good practice to\nquantify the amount of signal in the test set that can be predicted by an\nencoding model. This quantity is called the *explainable variance*.\n\nThe measured signal can be decomposed into a sum of two components: the\nstimulus-dependent signal and noise. If we present the same stimulus multiple\ntimes and we record brain activity for each repetition, the stimulus-dependent\nsignal will be the same across repetitions while the noise will vary across\nrepetitions. In voxelwise modeling, the features used to model brain activity\nare the same for each repetition of the stimulus. Thus, encoding models will\npredict only the repeatable stimulus-dependent signal.\n\nThe stimulus-dependent signal can be estimated by taking the mean of\nbrain responses over repeats of the same stimulus or experiment. The variance\nof the estimated stimulus-dependent signal, which we call the explainable\nvariance, is proportional to the maximum prediction accuracy that can be\nobtained by a voxelwise encoding model in the test set.\n\nMathematically, let $y_i, i = 1 \\dots N$ be the measured signal in\na voxel for each of the $N$ repetitions of the same stimulus and\n$\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i$ the average brain response\nacross repetitions. For each repeat, we define the residual timeseries\nbetween brain response and average brain response as $r_i = y_i - \\bar{y}$.\nThe explainable variance (EV) is estimated as\n\n\\begin{align}\\text{EV} = \\frac{1}{N}\\sum_{i=1}^N\\text{Var}(y_i) - \\frac{N}{N-1}\\sum_{i=1}^N\\text{Var}(r_i)\\end{align}\n\n\nIn the literature, the explainable\nvariance is also known as the *signal power*. For more information, see these\nreferences [1]_ [2]_ [3]_.\n"
+"\n# Compute the explainable variance\n\nBefore fitting any voxelwise model to fMRI responses, it is good practice to\nquantify the amount of signal in the test set that can be predicted by an\nencoding model. This quantity is called the *explainable variance*.\n\nThe measured signal can be decomposed into a sum of two components: the\nstimulus-dependent signal and noise. If we present the same stimulus multiple\ntimes and we record brain activity for each repetition, the stimulus-dependent\nsignal will be the same across repetitions while the noise will vary across\nrepetitions. In voxelwise modeling, the features used to model brain activity\nare the same for each repetition of the stimulus. Thus, encoding models will\npredict only the repeatable stimulus-dependent signal.\n\nThe stimulus-dependent signal can be estimated by taking the mean of \nbrain responses over repeats of the same stimulus or experiment. The variance\nof the estimated stimulus-dependent signal, which we call the explainable\nvariance, is proportional to the maximum prediction accuracy that can be\nobtained by a voxelwise encoding model in the test set. \n\nMathematically, let $y_i, i = 1 \\dots N$ be the measured signal in\na voxel for each of the $N$ repetitions of the same stimulus and \n$\\bar{y} = \\frac{1}{N}\\sum_{i=1}^Ny_i$ the average brain response\nacross repetitions. For each repeat, we define the residual timeseries\nbetween brain response and average brain response as $r_i = y_i - \\bar{y}$.\nThe explainable variance (EV) is estimated as\n\n\\begin{align}\\text{EV} = \\frac{1}{N}\\sum_{i=1}^N\\text{Var}(y_i) - \\frac{N}{N-1}\\sum_{i=1}^N\\text{Var}(r_i)\\end{align}\n\n\nIn the literature, the explainable\nvariance is also known as the *signal power*. For more information, see these\nreferences [1]_ [2]_ [3]_.\n"
 ]
 },
 {
@@ -170,7 +170,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We see that many voxels have low explainable variance. This is\nexpected, since many voxels are not driven by a visual stimulus, and their\nresponse changes over repeats of the same stimulus.\nWe also see that some voxels have high explainable variance (around 0.7). The\nresponses in these voxels are highly consistent across repetitions of the\nsame stimulus. Thus, they are good targets for encoding models.\n\n"
+"We see that many voxels have low explainable variance. This is \nexpected, since many voxels are not driven by a visual stimulus, and their\nresponse changes over repeats of the same stimulus.\nWe also see that some voxels have high explainable variance (around 0.7). The\nresponses in these voxels are highly consistent across repetitions of the\nsame stimulus. Thus, they are good targets for encoding models.\n\n"
 ]
 },
 {
@@ -195,7 +195,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
+"This figure is a flattened map of the cortical surface. A number of regions of\ninterest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
 ]
 },
 {
@@ -238,7 +238,7 @@
 },
 "outputs": [],
 "source": [
-"try:\n    import google.colab  # noqa\n    in_colab = True\nexcept ImportError:\n    in_colab = False\nprint(in_colab)\n\nif in_colab:\n    filestore = cortex.options.config['basic']['filestore']\n    cortex.database.db = cortex.database.Database(filestore)\n    cortex.db = cortex.database.db\n    cortex.utils.db = cortex.database.db\n    cortex.dataset.braindata.db = cortex.database.db"
+"try:\n    import google.colab  # noqa\n    in_colab = True\nexcept ImportError:\n    in_colab = False\nprint(in_colab)\n\nif in_colab:\n    filestore = cortex.options.config['basic']['filestore']\n    cortex.database.db = cortex.database.Database(filestore)\n    cortex.db = cortex.database.db\n    cortex.utils.db = cortex.database.db\n    cortex.dataset.braindata.db = cortex.database.db\n    cortex.quickflat.utils.db = cortex.database.db\n    cortex.quickflat.composite.db = cortex.database.db"
 ]
 },
 {
@@ -274,14 +274,14 @@
 },
 "outputs": [],
 "source": [
-"vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='inferno')"
+"vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='viridis')"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"To start an interactive 3D viewer in the browser, use the following function:\n\n"
+"To start an interactive 3D viewer in the browser, use the ``webshow``\nfunction.\n\n"
 ]
 },
 {
@@ -292,7 +292,25 @@
 },
 "outputs": [],
 "source": [
-"if False:\n    cortex.webshow(vertex, open_browser=True)"
+"if True:\n    cortex.webshow(vertex, open_browser=False, port=8050)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"If you are running the notebook on Colab, you need to tunnel the pycortex\napplication out of Colab. To do so, use the following cell to start a tunnel\nwith ``ngrok`` and to get an address where the pycortex viewer will be made\naccessible.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"if in_colab:\n    from IPython import get_ipython\n    get_ipython().system_raw('./ngrok http 8050 &')\n\n    command = \"\"\"\n    curl -s http://localhost:4040/api/tunnels | python3 -c \\\n    \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])\"\n    \"\"\"\n    result = get_ipython().getoutput(command, split=True)\n    print(\"Use the following address to connect to the brain viewer:\\n\"\n          f\"{result}\\n\"\n          \"and not the one proposed by pycortex ('Open viewer: ...')\\n\")"
 ]
 },
 {
@@ -310,7 +328,7 @@
 },
 "outputs": [],
 "source": [
-"from cortex.testing_utils import has_installed\n\nif has_installed(\"inkscape\"):\n    fig = cortex.quickshow(vertex, colorbar_location='right')\n    plt.show()"
+"from cortex.testing_utils import has_installed\n\n\nfig = cortex.quickshow(vertex, colorbar_location='right',\n                       with_rois=has_installed(\"inkscape\"))\nplt.show()"
 ]
 },
 {
