Correct typos (#117)
* Fix rST typo in docs/algo

* Fix some spelling

* `.chunk()`
zmoon authored Jul 8, 2022
1 parent a8e37f7 commit 99f8bad
Showing 4 changed files with 8 additions and 8 deletions.
4 changes: 2 additions & 2 deletions algorithm.ipynb
@@ -330,7 +330,7 @@
" dtype: np.dtype,\n",
" max_mem: int) -> Tuple[Tuple[int]]:\n",
" \"\"\"\n",
" Calcualte the chunk shape for an intermediate dataset.\n",
" Calculate the chunk shape for an intermediate dataset.\n",
" \n",
" Parameters\n",
" ----------\n",
@@ -349,7 +349,7 @@
" assert source_chunk_mem <= max_mem\n",
" assert target_chunk_mem <= max_mem\n",
"\n",
" # Greatest common demoninator chunks.\n",
" # Greatest common denominator chunks.\n",
" # These are the smallest possible chunks which evenly fit into\n",
" # both the source and target.\n",
" # Example:\n",
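The "greatest common denominator chunks" comment corrected above refers to taking the per-axis GCD of the source and target chunk sizes. A minimal sketch of the idea, using a hypothetical helper rather than the notebook's actual code:

```python
import math

def gcd_chunks(source_chunks, target_chunks):
    """Smallest chunks that evenly tile both the source and the target.

    Along each axis, gcd(source, target) divides both chunk sizes,
    so a grid of these chunks aligns with both chunk layouts.
    """
    return tuple(math.gcd(s, t) for s, t in zip(source_chunks, target_chunks))

# Example: source chunked (100, 50), target chunked (50, 80)
print(gcd_chunks((100, 50), (50, 80)))  # (50, 10)
```

A chunk of (50, 10) fits evenly into both layouts, which is why it serves as the intermediate granularity in the algorithm.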
4 changes: 2 additions & 2 deletions docs/algorithm.rst
@@ -12,7 +12,7 @@ The algorithm used by rechunker tries to satisfy several constraints simultaneou
means avoiding write locks, which are complex to manage, and inter-worker
communication.

- The algorithm we chose emerged via a lively disucssion on the
+ The algorithm we chose emerged via a lively discussion on the
`Pangeo Discourse Forum <https://discourse.pangeo.io/t/best-practices-to-go-from-1000s-of-netcdf-files-to-analyses-on-a-hpc-cluster/588>`_.
We call it *Push / Pull Consolidated*.

@@ -28,7 +28,7 @@ We call it *Push / Pull Consolidated*.
A rough sketch of the algorithm is as follows

1. User inputs a source array with a specific shape, chunk structure and
-    data type. Also specifies ```target_chunks``, the desired chunk structure
+    data type. Also specifies ``target_chunks``, the desired chunk structure
of the output array and ``max_mem``, the maximum amount of memory
each worker is allowed to use.
2. Determine the largest batch of data we can *write* by one worker given
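Step 2 of the sketch above (growing a write batch under a memory budget) can be illustrated with a toy consolidation loop. This is an illustrative simplification only, not rechunker's actual implementation, which also honors per-axis chunk limits and divisibility:

```python
def consolidate(shape, chunks, itemsize, max_mem):
    """Grow chunks axis by axis (doubling, capped at the array shape)
    while one chunk still fits within the max_mem budget in bytes."""
    chunks = list(chunks)
    for i in range(len(shape)):
        while chunks[i] < shape[i]:
            candidate = chunks.copy()
            candidate[i] = min(candidate[i] * 2, shape[i])
            mem = itemsize
            for c in candidate:
                mem *= c  # bytes per candidate chunk
            if mem > max_mem:
                break
            chunks = candidate
    return tuple(chunks)

# 100-element array, chunks of 10, 8-byte items, 400-byte budget:
print(consolidate((100,), (10,), 8, 400))  # (40,)
```

Doubling stops at 40 because an 80-element chunk would need 640 bytes, over the 400-byte budget.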
4 changes: 2 additions & 2 deletions docs/release_notes.rst
@@ -10,7 +10,7 @@ v0.5.0 - 2023-04-14

- Fix major bug with dask executor.
By `Ryan Abernathey <https://github.com/rabernat>`_.
- - Enable xarray `.chunk()` style input for target chunks.
+ - Enable xarray ``.chunk()`` style input for target chunks.
By `Julius Busecke <https://github.com/jbusecke>`_.

v0.4.2 - 2021-04-27
@@ -43,7 +43,7 @@ v0.3.2 - 2020-12-02
-------------------

- Fixed bug in rechunking of xarray datasets. By `Filipe Fernandes <https://github.com/ocefpaf>`_.
- - Internal improvments to tests and packagaging. By `Filipe Fernandes <https://github.com/ocefpaf>`_.
+ - Internal improvements to tests and packagaging. By `Filipe Fernandes <https://github.com/ocefpaf>`_.
- Updates to tutorial. By `Andrew Brettin <https://github.com/andrewbrettin>`_.

v0.3.1 - 2020-10-13
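The ``.chunk()``-style input noted under v0.5.0 means target chunks may be given as a mapping from dimension name to chunk size, as xarray's `Dataset.chunk` accepts. A hedged sketch of how such a mapping resolves to per-axis sizes, using a hypothetical helper rather than rechunker's code:

```python
def chunks_from_mapping(target_chunks, dims, shape):
    """Resolve {dim: size} to a per-axis tuple; -1 (or an omitted
    dimension) means one chunk spanning the whole dimension."""
    out = []
    for i, dim in enumerate(dims):
        size = target_chunks.get(dim, -1)
        out.append(shape[i] if size == -1 else size)
    return tuple(out)

mapping = {"time": 100, "lat": -1}  # "lon" omitted -> whole dimension
print(chunks_from_mapping(mapping, ("time", "lat", "lon"), (1000, 180, 360)))
# (100, 180, 360)
```

The dict form is convenient for named-dimension data, since the user need not know the axis order of the underlying array.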
4 changes: 2 additions & 2 deletions tests/test_algorithm.py
@@ -33,8 +33,8 @@ def test_consolidate_chunks(shape, chunks, itemsize, max_mem, expected):
[
(16, (None, -1), (1, 4)), # do last axis
(16, (-1, None), (2, 2)), # do first axis
-         (32, (None, -1), (1, 8)),  # without limts
-         (32, (None, 4), (1, 4)),  # with limts
+         (32, (None, -1), (1, 8)),  # without limits
+         (32, (None, 4), (1, 4)),  # with limits
(32, (8, 4), (2, 4)), # spill to next axis
(32, (8, None), (4, 2)),
(128, (10, None), (8, 2)), # chunk_limit > shape truncated
