Commit d967b95

MNT copy the tutorial README files to the notebook directories
1 parent 4ccea23 commit d967b95

3 files changed (116 additions & 3 deletions)

doc/create_notebooks.py
Lines changed: 16 additions & 3 deletions
```diff
@@ -1,13 +1,26 @@
-
 if __name__ == "__main__":
     import os
     import shutil

-    archive_name = "_auto_examples/_auto_examples_jupyter.zip"
+    # check if `archive_name` exists
+    archive_name = os.path.join('_auto_examples', '_auto_examples_jupyter.zip')
     if not os.path.exists(archive_name):
         raise RuntimeError(
             f"{archive_name} does not exist, please run `make html` first.")

-    extract_dir = "../tutorials/notebooks/"
+    # Unpack `archive_name`
+    extract_dir = os.path.join('..', 'tutorials', 'notebooks')
     shutil.unpack_archive(archive_name, extract_dir=extract_dir)
     print(f'Extracted {archive_name} to {extract_dir}')
+
+    # copy the README.rst files
+    tutorial_dir = os.path.join('..', 'tutorials')
+    for file_or_dir in os.listdir(tutorial_dir):
+        if os.path.isdir(os.path.join(tutorial_dir, file_or_dir)):
+            if file_or_dir == "notebooks":
+                continue
+            source = os.path.join(tutorial_dir, file_or_dir, 'README.rst')
+            destination = os.path.join(tutorial_dir, 'notebooks', file_or_dir,
+                                       'README.rst')
+            shutil.copyfile(source, destination)
+            print(f'Copied {source} to {destination}')
```
Lines changed: 60 additions & 0 deletions (new file)

```rst
Movies 3T tutorial
==================

This tutorial describes how to perform voxelwise modeling on a visual
imaging experiment.

**Data set:**
This tutorial is based on publicly available data
`published on CRCNS <TBD>`_ [4]_.
The data is briefly described in the dataset `description PDF <TBD>`_,
and in more detail in the original publication [1]_.
If you publish work using this data set, please cite the original
publication [1]_ and the CRCNS data set [4]_.

**Models:**
This tutorial implements different voxelwise models:

- a ridge model with wordnet semantic features, as described in [1]_.
- a ridge model with motion-energy features, as described in [2]_.
- a banded-ridge model with both feature spaces, as described in [3]_.

**Scikit-learn API:**
These tutorials use ``scikit-learn`` to define the preprocessing steps, the
modeling pipeline, and the cross-validation scheme. If you are not familiar
with the scikit-learn API, we recommend the `getting started guide
<https://scikit-learn.org/stable/getting_started.html>`_. We also use a lot
of scikit-learn terminology, which is explained in great detail in the
`glossary of common terms and API elements
<https://scikit-learn.org/stable/glossary.html#glossary>`_.

**Running time:**
Most of these tutorials can be run in a reasonable time with a GPU backend in
`himalaya <https://github.com/gallantlab/himalaya>`_ (under 1 minute for most
examples, about 7 minutes for the banded-ridge example). Using a CPU backend
is typically about 10 times slower.

**Requirements:**
This tutorial requires the following Python packages:

- voxelwise_tutorials (this repository) and its dependencies
- cupy or pytorch (optional, to use a GPU backend in himalaya)

**References:**

.. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
   A continuous semantic space describes the representation of thousands of
   object and action categories across the human brain. Neuron, 76(6),
   1210-1224.

.. [2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
   Gallant, J. L. (2011). Reconstructing visual experiences from brain
   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

.. [3] Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019).
   Voxelwise encoding models with non-spherical multivariate normal priors.
   Neuroimage, 197, 482-492.

.. [4] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020):
   Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.
   http://dx.doi.org/10.6080/TBD
```
Lines changed: 40 additions & 0 deletions (new file)

```rst
|
|

Movies 4T tutorial
==================

This tutorial describes how to perform voxelwise modeling on a visual
imaging experiment.

**Data set:**
This tutorial is based on publicly available data published on
`CRCNS <https://crcns.org/data-sets/vc/vim-2/about-vim-2>`_ [6]_.
The data is briefly described in the dataset description
`PDF <https://crcns.org/files/data/vim-2/crcns-vim-2-data-description.pdf>`_,
and in more detail in the original publication [5]_.
If you publish work using this data set, please cite the original
publication [5]_ and the CRCNS data set [6]_.

.. note::
   This tutorial is redundant with the "Movies 3T tutorial". It uses a
   different data set, with brain responses limited to the occipital lobe,
   and with no mappers to plot the data on flatmaps.
   Using the "Movies 3T tutorial", with full-brain responses, is recommended.

**Requirements:**
This tutorial requires the following Python packages:

- voxelwise_tutorials (this repository) and its dependencies
- cupy or pytorch (optional, to use a GPU backend in himalaya)

**References:**

.. [5] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
   Gallant, J. L. (2011). Reconstructing visual experiences from brain
   activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

.. [6] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
   Gallant, J. L. (2014): Gallant Lab Natural Movie 4T fMRI Data.
   CRCNS.org. http://dx.doi.org/10.6080/K00Z715X
```