Commit 930d497

ENH add downloader from GIN/Wasabi, change name to shortclips/vim-2
1 parent 7e7c57a commit 930d497

44 files changed

Lines changed: 392 additions & 317 deletions


README.rst

Lines changed: 5 additions & 3 deletions

@@ -22,9 +22,10 @@ To explore these tutorials, one can:
 - run the Python scripts (`tutorials <tutorials>`_ directory)
 - run the Jupyter notebooks (`tutorials/notebooks <tutorials/notebooks>`_ directory)
 - run the merged notebook in
-  `Colab <https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/movies/merged_for_colab.ipynb>`_.
+  `Colab <https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/shortclips/merged_for_colab.ipynb>`_.
 
-The tutorials are best explored in order, starting with the "Movies" tutorial.
+The tutorials are best explored in order, starting with the "Shortclips"
+tutorial.
 
 Helper Python package
 =====================

@@ -72,7 +73,8 @@ The package ``voxelwise_tutorials`` has the following dependencies:
 `nltk <https://github.com/nltk/nltk>`_,
 `pycortex <https://github.com/gallantlab/pycortex>`_,
 `himalaya <https://github.com/gallantlab/himalaya>`_,
-`pymoten <https://github.com/gallantlab/pymoten>`_.
+`pymoten <https://github.com/gallantlab/pymoten>`_,
+`datalad <https://github.com/datalad/datalad>`_.
 
 
 .. |Github| image:: https://img.shields.io/badge/github-voxelwise_tutorials-blue

doc/Makefile

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ notebooks:
 	python create_notebooks.py
 	$(MAKE) merge-notebooks
 
-NBDIR = ../tutorials/notebooks/movies
+NBDIR = ../tutorials/notebooks/shortclips
 
 merge-notebooks:
 	python merge_notebooks.py \
doc/index.rst

Lines changed: 5 additions & 5 deletions

@@ -20,10 +20,10 @@ To explore these tutorials, one can:
 <https://github.com/gallantlab/voxelwise_tutorials/tree/main/tutorials/notebooks>`_
 directory)
 - run the merged notebook in
-  `Google Colab <https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/movies/merged_for_colab.ipynb>`_
+  `Google Colab <https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/shortclips/merged_for_colab.ipynb>`_
 
-The tutorials are best explored in order, starting with the `Movies tutorial
-<_auto_examples/index.html>`_.
+The tutorials are best explored in order, starting with the `Shortclips
+tutorial <_auto_examples/index.html>`_.
 
 The project is available on GitHub at `gallantlab/voxelwise_tutorials
 <https://github.com/gallantlab/voxelwise_tutorials>`_. On top of the tutorials

@@ -59,6 +59,6 @@ If you use one of our packages in your work (``voxelwise_tutorials``
 :ref:`[15]<gao2015>`, or ``pymoten`` :ref:`[16]<nun2021>`), please cite the
 corresponding publications.
 
-If you use one of our public datasets in your work (Movie 4T
-:ref:`[3b]<den2022>`, Movie 3T :ref:`[4b]<hut2012data>`), please cite the
+If you use one of our public datasets in your work (vim-2
+:ref:`[3b]<nis2011data>`, shortclips :ref:`[4b]<hut2012data>`), please cite the
 corresponding publications.

doc/static/crcns.png

Binary file removed (2.86 KB). Binary file not shown.

doc/static/download.png

Binary file added (722 Bytes).

doc/voxelwise_modeling.rst

Lines changed: 4 additions & 5 deletions

@@ -23,8 +23,8 @@ analysis:
 different stimulus and task features simultaneously. This framework enables
 the analysis of complex naturalistic stimuli and tasks which contain a
 large number of features; for example, VM has been used with naturalistic
-images :ref:`[1]<kay2008>` :ref:`[2]<nas2009>`, movies :ref:`[3]<nis2011>`,
-and stories :ref:`[8]<hut2016>`.
+images :ref:`[1]<kay2008>` :ref:`[2]<nas2009>`, shortclips
+:ref:`[3]<nis2011>`, and stories :ref:`[8]<hut2016>`.
 
 #.
 Unlike the traditional null hypothesis testing framework, VM is not prone

@@ -141,9 +141,8 @@ Datasets
 
 .. _hut2012data:
 
-[4b] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2020):
-Gallant Lab Natural Movie 3T fMRI Data. CRCNS.org.
-http://dx.doi.org/10.6080/TBD
+[4b] Huth, A. G., Nishimoto, S., Vu, A. T., Dupre la Tour, T., & Gallant, J. L. (2022).
+Gallant Lab Natural Short Clips 3T fMRI Data. http://dx.doi.org/--TBD--
 
 Packages
 --------

setup.py

Lines changed: 2 additions & 1 deletion

@@ -23,9 +23,10 @@
 "matplotlib",
 "networkx",
 "nltk",
-"pycortex",
+"pycortex>=1.2.4",
 "himalaya",
 "pymoten",
+"datalad",
 ]
 
 extras_require = {

tutorials/movies/00_download_vim5.py

Lines changed: 0 additions & 99 deletions
This file was deleted.

tutorials/notebooks/movies/00_download_vim5.ipynb

Lines changed: 0 additions & 108 deletions
This file was deleted.
(new file)

Lines changed: 97 additions & 0 deletions

{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n# Download the data set\n\nIn this script, we download the data set from Wasabi or GIN. No account is\nrequired.\n\n## Cite this data set\n\nThis tutorial is based on publicly available data `published on GIN\n<https://gin.g-node.org/gallantlab/shortclips>`_. If you publish any work using\nthis data set, please cite the original publication [1]_, and the data set\n[2]_.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# path of the data directory\nfrom voxelwise_tutorials.io import get_data_home\ndirectory = get_data_home(dataset=\"shortclips\")\nprint(directory)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will only use the first subject in this tutorial, but you can run the same\nanalysis on the four other subjects. Uncomment the lines in ``DATAFILES`` to\ndownload more subjects.\n\nWe also skip the stimuli files, since the dataset provides two preprocessed\nfeature spaces to perform voxelwise modeling without requiring the original\nstimuli.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from voxelwise_tutorials.io import download_datalad\n\nDATAFILES = [\n \"features/motion_energy.hdf\",\n \"features/wordnet.hdf\",\n \"mappers/S01_mappers.hdf\",\n # \"mappers/S02_mappers.hdf\",\n # \"mappers/S03_mappers.hdf\",\n # \"mappers/S04_mappers.hdf\",\n # \"mappers/S05_mappers.hdf\",\n \"responses/S01_responses.hdf\",\n # \"responses/S02_responses.hdf\",\n # \"responses/S03_responses.hdf\",\n # \"responses/S04_responses.hdf\",\n # \"responses/S05_responses.hdf\",\n # \"stimuli/test.hdf\",\n # \"stimuli/train_00.hdf\",\n # \"stimuli/train_01.hdf\",\n # \"stimuli/train_02.hdf\",\n # \"stimuli/train_03.hdf\",\n # \"stimuli/train_04.hdf\",\n # \"stimuli/train_05.hdf\",\n # \"stimuli/train_06.hdf\",\n # \"stimuli/train_07.hdf\",\n # \"stimuli/train_08.hdf\",\n # \"stimuli/train_09.hdf\",\n # \"stimuli/train_10.hdf\",\n # \"stimuli/train_11.hdf\",\n]\n\nsource = \"https://gin.g-node.org/gallantlab/shortclips\"\n\nfor datafile in DATAFILES:\n local_filename = download_datalad(datafile, destination=directory,\n source=source)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## References\n\n.. [1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A\n continuous semantic space describes the representation of thousands of\n object and action categories across the human brain. Neuron, 76(6),\n 1210-1224.\n\n.. [2] Huth, A. G., Nishimoto, S., Vu, A. T., Dupre la Tour, T., & Gallant, J. L. (2022).\n Gallant Lab Natural Short Clips 3T fMRI Data. http://dx.doi.org/--TBD--\n\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
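The new notebook loops over ``DATAFILES`` and calls ``download_datalad`` once per file. The essential pattern is fetch-if-missing: a file already present under the data directory is not downloaded again, so rerunning the notebook is cheap. Below is a minimal, self-contained sketch of that pattern. ``download_if_missing`` and ``fake_fetch`` are hypothetical names for illustration; they stand in for the real ``voxelwise_tutorials.io.download_datalad`` helper, whose implementation (datalad-based, per the commit message) is not shown in this view, and no network access is involved.

```python
import os
import tempfile

def download_if_missing(datafile, destination, fetch_file):
    """Return the local path of ``datafile``, fetching it only when absent."""
    local_path = os.path.join(destination, datafile)
    if not os.path.exists(local_path):
        # Create intermediate directories such as "features/" on demand.
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        fetch_file(datafile, local_path)
    return local_path

# Demo with a dummy fetcher instead of a real downloader: two passes over the
# same file list, but each file is fetched exactly once thanks to the cache.
DATAFILES = ["features/motion_energy.hdf", "features/wordnet.hdf"]
destination = tempfile.mkdtemp()
fetched = []

def fake_fetch(datafile, local_path):
    fetched.append(datafile)
    open(local_path, "w").close()  # placeholder for the actual download

for datafile in DATAFILES + DATAFILES:
    local_filename = download_if_missing(datafile, destination, fake_fetch)

print(len(fetched))  # 2: the second pass downloads nothing
```

This idempotence is what lets the tutorial notebooks start with the download cell unconditionally: on a machine that already holds the data, the cell returns almost immediately.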
