
Commit abef728

DOC improve narrative in examples
1 parent 3fac121 commit abef728

6 files changed

Lines changed: 371 additions & 166 deletions


.gitattributes

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+*.ipynb -diff

doc/index.rst

Lines changed: 3 additions & 3 deletions
@@ -12,9 +12,9 @@ This website contains tutorials describing how to use the
 `voxelwise modeling framework <voxelwise_modeling.html>`_.
 
 The tutorials consist of Python scripts, which are rendered in a `gallery of
-examples <auto_examples/index.html>`_. Each Python script is also available as a
-``Jupyter`` notebook (non-rendered). The tutorials are best explored in order,
-starting with the "Movies 3T tutorial".
+examples <auto_examples/index.html>`_. Each Python script is also available as
+a ``jupyter`` notebook (non-rendered). The tutorials are best explored in
+order, starting with the "Movies 3T tutorial".
 
 To run the tutorials yourself, we recommend to download the project on GitHub
 at `gallantlab/voxelwise_tutorials

tutorials/movies_3T/01_plot_explainable_variance.py

Lines changed: 64 additions & 45 deletions
@@ -3,43 +3,44 @@
 Compute the explainable variance
 ================================
 
-Before fitting voxelwise models to the fMRI responses, we can compute the
-explainable variance on the test set repeats.
-
-The explainable variance is the part of the fMRI responses that can be
-explained by voxelwise modeling. It is thus the upper bound of voxelwise
-modeling performances.
-
-Indeed, we can decompose the signal into a sum of two components, one
-component that is repeated if we repeat the same experiment, and one component
-that changes for each repeat. Because voxelwise modeling would use the same
-features for each repeat, it can only model the component that is common to
-all repeats. This shared component can be estimated by taking the mean over
-repeats of the same experiment.
+Before fitting voxelwise models to the fMRI responses, we can estimate the
+*explainable variance*. The explainable variance is the part of the fMRI
+responses that can be explained by the voxelwise modeling framework.
+
+Indeed, we can decompose the signal into a sum of two components, one component
+that is repeated if we repeat the same experiment, and one component that
+changes for each repeat. Because voxelwise modeling would use the same features
+for each repeat, it can only model the component that is common to all repeats.
+This shared component can be estimated by taking the mean over repeats of the
+same experiment. The variance of this shared component, which we call the
+explainable variance, is the upper bound of the voxelwise modeling
+performances.
 """
 # sphinx_gallery_thumbnail_number = 2
 ###############################################################################
-
-# path of the data directory
+# Path of the data directory
 import os
 from voxelwise_tutorials.io import get_data_home
 directory = os.path.join(get_data_home(), "vim-4")
 print(directory)
 
+###############################################################################
+
 # modify to use another subject
 subject = "S01"
 
 ###############################################################################
 # Compute the explainable variance
 # --------------------------------
 import numpy as np
-
 from voxelwise_tutorials.io import load_hdf5_array
 
 ###############################################################################
-# First, we load the fMRI responses on the test set, which contains 10 repeats.
+# First, we load the fMRI responses on the test set, which contains ten (10)
+# repeats.
 file_name = os.path.join(directory, 'responses', f'{subject}_responses.hdf')
 Y_test = load_hdf5_array(file_name, key="Y_test")
+print("(n_repeats, n_samples_test, n_voxels) =", Y_test.shape)
 
 ###############################################################################
 # Then, we compute the explainable variance per voxel.
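The decomposition described in the new docstring above (a component repeated across repeats plus a component that changes on every repeat) can be illustrated on synthetic data. This is a sketch with invented numbers, not part of the tutorial: it shows why the mean over repeats estimates the shared component.

```python
import numpy as np

rng = np.random.default_rng(0)
n_repeats, n_samples = 10, 300

# Hypothetical shared component, identical across repeats.
shared = np.sin(np.linspace(0, 8 * np.pi, n_samples))
# Noise component, different on every repeat.
noise = rng.standard_normal((n_repeats, n_samples))
responses = shared + noise

# Averaging over repeats attenuates the non-repeated component,
# so the mean is a better estimate of the shared part than any
# single repeat.
estimate = responses.mean(axis=0)
error_single = np.abs(responses[0] - shared).mean()
error_mean = np.abs(estimate - shared).mean()
print(error_mean < error_single)
```

With 10 repeats, the noise left in the average has roughly 1/10 of the original noise variance, which is exactly what makes the shared component recoverable.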
@@ -48,10 +49,11 @@
 # taking the variance of the average response. Then, we compute the
 # explainable variance by dividing these two quantities.
 # Finally, a correction can be applied to account for small numbers of repeat
-# (parameter ``bias_correction``).
+# (through the parameter ``bias_correction``).
 
 from voxelwise_tutorials.utils import explainable_variance
 ev = explainable_variance(Y_test, bias_correction=False)
+print("(n_voxels,) =", ev.shape)
 
 ###############################################################################
 # We can plot the distribution of explainable variance over voxels.
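For readers who want to see the arithmetic behind this step, here is a rough sketch of one common definition: the variance of the mean response divided by the average variance per repeat, with the standard small-n attenuation correction. The actual implementation of ``explainable_variance`` in ``voxelwise_tutorials.utils`` may differ; all names and numbers below are illustrative.

```python
import numpy as np

def explainable_variance_sketch(y, bias_correction=False):
    """Sketch of explainable variance, for y of shape
    (n_repeats, n_samples, n_voxels)."""
    n_repeats = y.shape[0]
    total_var = y.var(axis=1).mean(axis=0)   # average variance per repeat
    signal_var = y.mean(axis=0).var(axis=0)  # variance of the mean response
    ev = signal_var / total_var
    if bias_correction:
        # The mean over n repeats still contains noise_var / n, which
        # inflates ev when n is small; this removes that contribution.
        ev = ev - (1 - ev) / (n_repeats - 1)
    return ev

rng = np.random.default_rng(0)
y = rng.standard_normal((10, 200, 3))
# Voxel 0 is strongly driven by a signal repeated on every repeat.
y[:, :, 0] += 3 * np.sin(np.linspace(0, 4 * np.pi, 200))
ev = explainable_variance_sketch(y)
print(ev.shape)
```

The driven voxel should come out with a much higher explainable variance than the two pure-noise voxels, mirroring the 0.7-vs-0.1 contrast discussed in the diff below.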
@@ -68,30 +70,34 @@
 ###############################################################################
 # We see that most voxels have a rather low explainable variance, around 0.1
 # (when not using the bias correction). This is expected, since most voxels are
-# not directly driven by a visual stimulus.
-# We also see that some voxels reach an explainable variance of 0.7, which is
-# quite high. It means that these voxels consistently record the same activity
-# across a repeated stimulus, and thus are good targets for encoding models.
+# not directly driven by a visual stimulus, and their activity changes over
+# repeats. We also see that some voxels reach an explainable variance of 0.7,
+# which is quite high. It means that these voxels consistently record the same
+# activity across a repeated stimulus, and thus are good targets for encoding
+# models.
 
 ###############################################################################
 # Map to subject flatmap
 # ----------------------
 #
 # To better understand the distribution of explainable variance, we map the
-# values to the subject brain. This can be done with
-# `pycortex <https://gallantlab.github.io/pycortex/>`_, which can create
-# interactive 3D viewers displayed in any modern browser.
-# ``Pycortex`` can also display flatten maps of the cortical surface, to
-# visualize the entire cortical surface at once.
+# values to the subject brain. This can be done with `pycortex
+# <https://gallantlab.github.io/pycortex/>`_, which can create interactive 3D
+# viewers to be displayed in any modern browser. ``pycortex`` can also display
+# flattened maps of the cortical surface, to visualize the entire cortical
+# surface at once.
 #
 # Here, we do not share the anatomical information of the subjects for privacy
-# concerns. Instead, we provide two mappers, (i) to map the voxels to a
-# subject-specific flatmap, or (ii) to map the voxels to the Freesurfer average
-# cortical surface ("fsaverage").
+# concerns. Instead, we provide two mappers:
 #
-# The first mapper is a sparse CSR matrix that map each voxel to a set of pixel
-# in a flatmap. To ease its use, we provide here an example function
-# ``plot_flatmap_from_mapper``.
+# - to map the voxels to a (subject-specific) flatmap
+# - to map the voxels to the Freesurfer average cortical surface ("fsaverage")
+#
+# The first mapper is a 2D matrix of shape (n_pixels, n_voxels), which maps
+# each voxel to a set of pixels in a flatmap. The matrix is efficiently stored
+# using a ``scipy`` sparse CSR matrix format. To ease the use of this mapper,
+# we provide an example function ``plot_flatmap_from_mapper``. This function
+# mimics the behavior of ``pycortex.quickshow``.
 
 from voxelwise_tutorials.viz import plot_flatmap_from_mapper
 
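The mechanics of such a mapper can be made concrete with a toy example (shapes and weights are invented for illustration): a sparse CSR matrix of shape (n_pixels, n_voxels), applied to a per-voxel vector with ``@``, yields one value per flatmap pixel.

```python
import numpy as np
from scipy import sparse

n_pixels, n_voxels = 6, 4

# Each row spreads one pixel over the voxels that contribute to it;
# rows sum to 1, so voxel values are interpolated onto the flatmap.
rows = np.array([0, 1, 1, 2, 3, 4, 5])
cols = np.array([0, 0, 1, 1, 2, 3, 3])
weights = np.array([1.0, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0])
voxel_to_pixels = sparse.csr_matrix((weights, (rows, cols)),
                                    shape=(n_pixels, n_voxels))

values = np.array([0.1, 0.4, 0.7, 0.2])  # one value per voxel, e.g. ev
pixel_image = voxel_to_pixels @ values   # one value per pixel
print(pixel_image.shape)
```

Pixel 1 here receives the average of voxels 0 and 1, which is the interpolation the tutorial's real mappers perform at a much larger scale.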
@@ -100,17 +106,28 @@
 plt.show()
 
 ###############################################################################
-# We can see that the explainable variance is mainly located in the visual
-# cortex, in early regions like V1, V2, V3, or in higher-level regions like
-# EBA, FFA or IPS. This was expected since this is a purely visual experiment.
+# This figure is a flattened map of the cortical surface. A number of regions
+# of interest (ROIs) have been labeled to ease the interpretation. If you have
+# never seen such a flatmap, we recommend taking a look at a `pycortex brain
+# viewer <https://gallantlab.org/huth2016/>`_, which displays the brain in 3D.
+# In this viewer, press "I" to inflate the brain, "F" to flatten the surface,
+# and "R" to reset the view (or use the ``surface/unfold`` cursor in the right
+# menu). This viewer should help you understand the correspondence between the
+# flattened and the folded cortical surface of the brain.
+
+###############################################################################
+# On this flatmap, we can see that the explainable variance is mainly located
+# in the visual cortex, in early visual regions like V1, V2, V3, or in
+# higher-level regions like EBA, FFA or IPS. This was expected since this is a
+# purely visual experiment.
 
 ###############################################################################
-# Map to fsaverage
-# ----------------
+# Map to "fsaverage"
+# ------------------
 #
 # The second mapper we provide maps the voxel data to a Freesurfer
 # average surface ("fsaverage"), that can be used in ``pycortex``.
-# First, let's download the fsaverage surface if it does not exist
+# First, let's download the "fsaverage" surface.
 
 import cortex
 
@@ -120,18 +137,20 @@
 cortex.utils.download_subject(subject_id=surface)
 
 ###############################################################################
-# Then, we load the fsaverage mapper. The mapper is a sparse CSR matrix, which
-# map each voxel to some vertices in the fsaverage surface.
-# The mapper is applied with a dot product ``@``.
+# Then, we load the "fsaverage" mapper. The mapper is a matrix of shape
+# (n_vertices, n_voxels), which maps each voxel to some vertices in the
+# fsaverage surface. It is also stored in a sparse CSR matrix format. The
+# mapper is applied with a dot product ``@`` (equivalent to ``np.dot``).
 from voxelwise_tutorials.io import load_hdf5_sparse_array
 voxel_to_fsaverage = load_hdf5_sparse_array(mapper_file,
                                             key='voxel_to_fsaverage')
 ev_projected = voxel_to_fsaverage @ ev
+print("(n_vertices,) =", ev_projected.shape)
 
 ###############################################################################
-# We can then create a ``Vertex`` object with the projected data.
-# This object can be used either in a ``pycortex`` interactive 3D viewer, or
-# in a ``matplotlib`` figure showing directly the flatmap.
+# We can then create a ``Vertex`` object in ``pycortex``, containing the
+# projected data. This object can be used either in a ``pycortex`` interactive
+# 3D viewer, or in a ``matplotlib`` figure showing only the flatmap.
 
 vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='inferno')
 