are the same for each repetition of the stimulus. Thus, encoding models will
predict only the repeatable stimulus-dependent signal.

The stimulus-dependent signal can be estimated by taking the mean of
brain responses over repeats of the same stimulus or experiment. The variance
of the estimated stimulus-dependent signal, which we call the explainable
variance, is proportional to the maximum prediction accuracy that can be
obtained by a voxelwise encoding model in the test set.

Mathematically, let :math:`y_i, i = 1 \\dots N` be the measured signal in
a voxel for each of the :math:`N` repetitions of the same stimulus and
:math:`\\bar{y} = \\frac{1}{N}\\sum_{i=1}^N y_i` the average brain response
across repetitions. For each repeat, we define the residual timeseries
between brain response and average brain response as :math:`r_i = y_i - \\bar{y}`.
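The definitions above can be checked on simulated data. The sketch below is illustrative only (the array shapes and variable names are assumptions, not part of the tutorial's dataset): a repeatable signal plus independent noise is generated for one voxel, and the explainable variance is estimated as the variance of the mean response divided by the mean variance over repeats.

```python
import numpy as np

# Hypothetical example: 10 repeats of a 300-sample response in one voxel.
rng = np.random.default_rng(0)
n_repeats, n_samples = 10, 300
signal = rng.standard_normal(n_samples)             # repeatable stimulus-driven part
noise = rng.standard_normal((n_repeats, n_samples))  # changes on every repeat
y = signal + noise                                   # y_i, shape (n_repeats, n_samples)

y_bar = y.mean(axis=0)                  # estimated stimulus-dependent signal
residuals = y - y_bar                   # r_i = y_i - y_bar
ev = y_bar.var() / y.var(axis=1).mean()  # share of the variance that is repeatable
print(round(ev, 2))
```

With equal signal and noise variance, the ratio comes out near 0.5 (slightly above, since the mean over a finite number of repeats still contains some noise).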
plt.show()

###############################################################################
# We see that many voxels have low explainable variance. This is
# expected, since many voxels are not driven by a visual stimulus, and their
# response changes over repeats of the same stimulus.
# We also see that some voxels have high explainable variance (around 0.7). The
plt.show()

###############################################################################
# This figure is a flattened map of the cortical surface. A number of regions of
# interest (ROIs) have been labeled to ease interpretation. If you have
# never seen such a flatmap, we recommend taking a look at a `pycortex brain
# viewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays
# the brain in 3D. In this viewer, press "I" to inflate the brain, "F" to
cortex.db = cortex.database.db
cortex.utils.db = cortex.database.db
cortex.dataset.braindata.db = cortex.database.db
cortex.quickflat.utils.db = cortex.database.db
cortex.quickflat.composite.db = cortex.database.db
###############################################################################
# Then, we load the "fsaverage" mapper. The mapper is a matrix of shape
# projected data. This object can be used either in a ``pycortex`` interactive
# 3D viewer, or in a ``matplotlib`` figure showing only the flatmap.

vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='viridis')
###############################################################################
# To start an interactive 3D viewer in the browser, use the ``webshow``
# function.

if True:
    cortex.webshow(vertex, open_browser=False, port=8050)

###############################################################################
# If you are running the notebook on Colab, you need to tunnel the pycortex
# application out of Colab. To do so, use the following cell to start a tunnel
# with ``ngrok`` and to get an address where the pycortex viewer will be made
# accessible.

if in_colab:
    from IPython import get_ipython
    get_ipython().system_raw('./ngrok http 8050 &')

    command = """
    curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
    """
    result = get_ipython().getoutput(command, split=True)
    print("Use the following address to connect to the brain viewer:\n"
          f"{result}\n"
          "and not the one proposed by pycortex ('Open viewer: ...')\n")
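The shell pipeline above reads the JSON returned by ngrok's local inspection API and extracts the first tunnel's public URL. The same extraction can be checked on a sample payload directly in Python; the payload below is illustrative (the exact address is made up), but it follows the ``tunnels``/``public_url`` shape the ``curl`` command relies on:

```python
import json

# Illustrative payload, mimicking a response of http://localhost:4040/api/tunnels
sample = '{"tunnels": [{"public_url": "https://abcd1234.ngrok.io", "proto": "https"}]}'

public_url = json.loads(sample)["tunnels"][0]["public_url"]
print(public_url)
```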
###############################################################################
# Alternatively, to plot a flatmap in a ``matplotlib`` figure, use the
from cortex.testing_utils import has_installed

fig = cortex.quickshow(vertex, colorbar_location='right',
                       with_rois=has_installed("inkscape"))
plt.show()
###############################################################################