Commit 4d19004

Add codespell: workflow, config and fix typos it finds (#21)
* Add github action to codespell main on push and PRs
* Add rudimentary codespell config
* [DATALAD RUNCMD] Do interactive fixing of some ambiguous typos

  === Do not change lines below ===
  { "chain": [], "cmd": "codespell -w -i 3 -C 2 ./voxelwise_tutorials/viz.py", "exit": 0, "extra_inputs": [], "inputs": [], "outputs": [], "pwd": "." }
  ^^^ Do not change lines above ^^^

* [DATALAD RUNCMD] run codespell throughout fixing typo automagically

  === Do not change lines below ===
  { "chain": [], "cmd": "codespell -w", "exit": 0, "extra_inputs": [], "inputs": [], "outputs": [], "pwd": "." }
  ^^^ Do not change lines above ^^^
1 parent e5f31fb commit 4d19004

21 files changed: 62 additions & 35 deletions

.codespellrc

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+[codespell]
+skip = .git,*.pdf,*.svg,*.css,.codespellrc
+check-hidden = true
+ignore-regex = ^\s*"image/\S+": ".*
+# ignore-words-list =
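The `ignore-regex` entry is there to skip the base64 image payload lines that Jupyter notebooks store as JSON (otherwise codespell flags "words" inside the encoded data). A small sketch of how the pattern behaves, checked with Python's `re` module (codespell compiles its regexes with `re` as well; the sample lines below are hypothetical):

```python
import re

# Pattern copied from .codespellrc: matches lines that open a
# notebook image payload, e.g.  "image/png": "iVBORw0KGgo..."
ignore = re.compile(r'^\s*"image/\S+": ".*')

lines = [
    '    "image/png": "iVBORw0KGgoAAAANSUhEUg...",',      # payload: ignored
    '    "source": ["This figure is a flattened map"]',   # prose: still checked
]

for line in lines:
    status = "ignored" if ignore.match(line) else "checked"
    print(f"{status}: {line.strip()[:40]}")
```

Note that `check-hidden = true` is needed alongside this, because dotfiles and hidden directories are otherwise skipped entirely.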

.github/workflows/codespell.yml

Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+---
+name: Codespell
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+    branches: [main]
+
+permissions:
+  contents: read
+
+jobs:
+  codespell:
+    name: Check for spelling errors
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Codespell
+        uses: codespell-project/actions-codespell@v2

.github/workflows/deploy_pypi.yml

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ jobs:
       - uses: actions/setup-python@v2

       - name: Get versions
-        # Compare the latest verion on PyPI, and the current version
+        # Compare the latest version on PyPI, and the current version
         run: |
           python -m pip install --upgrade -q pip
           pip index versions voxelwise_tutorials

tutorials/notebooks/shortclips/01_plot_explainable_variance.ipynb

Lines changed: 1 addition & 1 deletion

@@ -184,7 +184,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a [pycortex brain\nviewer](https://www.gallantlab.org/brainviewer/Deniz2019), which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
+"This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a [pycortex brain\nviewer](https://www.gallantlab.org/brainviewer/Deniz2019), which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondence between the flatten\nand the folded cortical surface of the brain.\n\n"
 ]
 },
 {

tutorials/notebooks/shortclips/06_plot_banded_ridge_model.ipynb

Lines changed: 1 addition & 1 deletion

@@ -411,7 +411,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Here we plot the comparison of model prediction accuracies with a 2D\nhistogram. All 70k voxels are represented in this histogram, where the\ndiagonal corresponds to identical model prediction accuracy for both models.\nA distibution deviating from the diagonal means that one model has better\npredictive performance than the other.\n\n"
+"Here we plot the comparison of model prediction accuracies with a 2D\nhistogram. All 70k voxels are represented in this histogram, where the\ndiagonal corresponds to identical model prediction accuracy for both models.\nA distribution deviating from the diagonal means that one model has better\npredictive performance than the other.\n\n"
 ]
 },
 {

tutorials/notebooks/shortclips/07_extract_motion_energy.ipynb

Lines changed: 1 addition & 1 deletion

@@ -51,7 +51,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Compute the luminance\n\nThe motion energy is typically not computed on RGB (color) images,\nbut on the luminance channel of the LAB color space.\nTo avoid loading the entire simulus array in memory, we use batches of data.\nThese batches can be arbitray, since the luminance is computed independently\non each image.\n\n"
+"## Compute the luminance\n\nThe motion energy is typically not computed on RGB (color) images,\nbut on the luminance channel of the LAB color space.\nTo avoid loading the entire simulus array in memory, we use batches of data.\nThese batches can be arbitrary, since the luminance is computed independently\non each image.\n\n"
 ]
 },
 {

tutorials/notebooks/shortclips/merged_for_colab.ipynb

Lines changed: 8 additions & 8 deletions

@@ -495,7 +495,7 @@
 "the brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\n",
 "flatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\n",
 "cursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\n",
-"This viewer should help you understand the correspondance between the flatten\n",
+"This viewer should help you understand the correspondence between the flatten\n",
 "and the folded cortical surface of the brain.\n",
 "\n"
 ]
@@ -1318,7 +1318,7 @@
 "include nouns (such as \"woman\", \"car\", or \"building\") and verbs (such as\n",
 "\"talking\", \"touching\", or \"walking\"), for a total of 1705 distinct category\n",
 "labels. To interpret our model, labels can be organized in a graph of semantic\n",
-"relashionship based on the [Wordnet](https://wordnet.princeton.edu/) dataset.\n",
+"relationship based on the [Wordnet](https://wordnet.princeton.edu/) dataset.\n",
 "\n",
 "*Summary:* We first concatenate the features with multiple temporal delays to\n",
 "account for the slow hemodynamic response. We then use linear regression to fit\n",
@@ -1370,7 +1370,7 @@
 "## Load the data\n",
 "\n",
 "We first load the fMRI responses. These responses have been preprocessed as\n",
-"decribed in [1]_. The data is separated into a training set ``Y_train`` and a\n",
+"described in [1]_. The data is separated into a training set ``Y_train`` and a\n",
 "testing set ``Y_test``. The training set is used for fitting models, and\n",
 "selecting the best models and hyperparameters. The test set is later used\n",
 "to estimate the generalization performance of the selected model. The\n",
@@ -1830,7 +1830,7 @@
 "metadata": {},
 "source": [
 "If we fit the model on GPU, scores are returned on GPU using an array object\n",
-"specfic to the backend we used (such as a ``torch.Tensor``). Thus, we need to\n",
+"specific to the backend we used (such as a ``torch.Tensor``). Thus, we need to\n",
 "move them into ``numpy`` arrays on CPU, to be able to use them for example in\n",
 "a ``matplotlib`` figure.\n",
 "\n"
@@ -1976,7 +1976,7 @@
 "address this issue, we rescale the regression coefficient to have a norm\n",
 "equal to the square-root of the $R^2$ scores. We found empirically that\n",
 "this rescaling best matches results obtained with a regularization shared\n",
-"accross voxels. This rescaling also removes the need to select only best\n",
+"across voxels. This rescaling also removes the need to select only best\n",
 "performing voxels, because voxels with low prediction accuracies are rescaled\n",
 "to have a low norm.\n",
 "\n"
@@ -2749,7 +2749,7 @@
 "Then, we plot the comparison of model prediction accuracies with a 2D\n",
 "histogram. All ~70k voxels are represented in this histogram, where the\n",
 "diagonal corresponds to identical prediction accuracy for both models. A\n",
-"distibution deviating from the diagonal means that one model has better\n",
+"distribution deviating from the diagonal means that one model has better\n",
 "prediction accuracy than the other.\n",
 "\n"
 ]
@@ -3285,7 +3285,7 @@
 "We can also plot the comparison of model prediction accuracies with a 2D\n",
 "histogram. All ~70k voxels are represented in this histogram, where the\n",
 "diagonal corresponds to identical prediction accuracy for both models. A\n",
-"distibution deviating from the diagonal means that one model has better\n",
+"distribution deviating from the diagonal means that one model has better\n",
 "predictive performance than the other.\n",
 "\n"
 ]
@@ -4028,7 +4028,7 @@
 "Here we plot the comparison of model prediction accuracies with a 2D\n",
 "histogram. All 70k voxels are represented in this histogram, where the\n",
 "diagonal corresponds to identical model prediction accuracy for both models.\n",
-"A distibution deviating from the diagonal means that one model has better\n",
+"A distribution deviating from the diagonal means that one model has better\n",
 "predictive performance than the other.\n",
 "\n"
 ]

tutorials/notebooks/shortclips/merged_for_colab_model_fitting.ipynb

Lines changed: 7 additions & 7 deletions

@@ -495,7 +495,7 @@
 "the brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\n",
 "flatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\n",
 "cursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\n",
-"This viewer should help you understand the correspondance between the flatten\n",
+"This viewer should help you understand the correspondence between the flatten\n",
 "and the folded cortical surface of the brain.\n",
 "\n"
 ]
@@ -703,7 +703,7 @@
 "include nouns (such as \"woman\", \"car\", or \"building\") and verbs (such as\n",
 "\"talking\", \"touching\", or \"walking\"), for a total of 1705 distinct category\n",
 "labels. To interpret our model, labels can be organized in a graph of semantic\n",
-"relashionship based on the [Wordnet](https://wordnet.princeton.edu/) dataset.\n",
+"relationship based on the [Wordnet](https://wordnet.princeton.edu/) dataset.\n",
 "\n",
 "*Summary:* We first concatenate the features with multiple temporal delays to\n",
 "account for the slow hemodynamic response. We then use linear regression to fit\n",
@@ -755,7 +755,7 @@
 "## Load the data\n",
 "\n",
 "We first load the fMRI responses. These responses have been preprocessed as\n",
-"decribed in [1]_. The data is separated into a training set ``Y_train`` and a\n",
+"described in [1]_. The data is separated into a training set ``Y_train`` and a\n",
 "testing set ``Y_test``. The training set is used for fitting models, and\n",
 "selecting the best models and hyperparameters. The test set is later used\n",
 "to estimate the generalization performance of the selected model. The\n",
@@ -1215,7 +1215,7 @@
 "metadata": {},
 "source": [
 "If we fit the model on GPU, scores are returned on GPU using an array object\n",
-"specfic to the backend we used (such as a ``torch.Tensor``). Thus, we need to\n",
+"specific to the backend we used (such as a ``torch.Tensor``). Thus, we need to\n",
 "move them into ``numpy`` arrays on CPU, to be able to use them for example in\n",
 "a ``matplotlib`` figure.\n",
 "\n"
@@ -1361,7 +1361,7 @@
 "address this issue, we rescale the regression coefficient to have a norm\n",
 "equal to the square-root of the $R^2$ scores. We found empirically that\n",
 "this rescaling best matches results obtained with a regularization shared\n",
-"accross voxels. This rescaling also removes the need to select only best\n",
+"across voxels. This rescaling also removes the need to select only best\n",
 "performing voxels, because voxels with low prediction accuracies are rescaled\n",
 "to have a low norm.\n",
 "\n"
@@ -2086,7 +2086,7 @@
 "We can also plot the comparison of model prediction accuracies with a 2D\n",
 "histogram. All ~70k voxels are represented in this histogram, where the\n",
 "diagonal corresponds to identical prediction accuracy for both models. A\n",
-"distibution deviating from the diagonal means that one model has better\n",
+"distribution deviating from the diagonal means that one model has better\n",
 "predictive performance than the other.\n",
 "\n"
 ]
@@ -2829,7 +2829,7 @@
 "Here we plot the comparison of model prediction accuracies with a 2D\n",
 "histogram. All 70k voxels are represented in this histogram, where the\n",
 "diagonal corresponds to identical model prediction accuracy for both models.\n",
-"A distibution deviating from the diagonal means that one model has better\n",
+"A distribution deviating from the diagonal means that one model has better\n",
 "predictive performance than the other.\n",
 "\n"
 ]

tutorials/notebooks/vim2/01_extract_motion_energy.ipynb

Lines changed: 1 addition & 1 deletion

@@ -58,7 +58,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Compute the luminance\n\nThe motion energy is typically not computed on RGB (color) images,\nbut on the luminance channel of the LAB color space.\nTo avoid loading the entire simulus array in memory, we use batches of data.\nThese batches can be arbitray, since the luminance is computed independently\non each image.\n\n"
+"## Compute the luminance\n\nThe motion energy is typically not computed on RGB (color) images,\nbut on the luminance channel of the LAB color space.\nTo avoid loading the entire simulus array in memory, we use batches of data.\nThese batches can be arbitrary, since the luminance is computed independently\non each image.\n\n"
 ]
 },
 {

tutorials/notebooks/vim2/02_plot_ridge_model.ipynb

Lines changed: 1 addition & 1 deletion

@@ -285,7 +285,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Here we plot the comparison of model performances with a 2D histogram. All\n~70k voxels are represented in this histogram, where the diagonal corresponds\nto identical performance for both models. A distibution deviating from the\ndiagonal means that one model has better predictive performances than the\nother.\n\n"
+"Here we plot the comparison of model performances with a 2D histogram. All\n~70k voxels are represented in this histogram, where the diagonal corresponds\nto identical performance for both models. A distribution deviating from the\ndiagonal means that one model has better predictive performances than the\nother.\n\n"
 ]
 },
 {

0 commit comments
