
Commit

fix typos (using codespell) (#6316)
* fix typos (using codespell)

* revert 'split'
mathause authored Mar 2, 2022
1 parent 2ab9f36 commit cdab326
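For reference, a commit like this is usually produced by letting codespell rewrite files in place; a minimal sketch of such a run (assumptions: codespell is installed, and doc/ and xarray/ were the trees checked):

# Hedged sketch only -- invoke the codespell CLI from Python.
# codespell exits non-zero when it finds typos, hence check=False.
import subprocess

subprocess.run(["codespell", "--write-changes", "doc", "xarray"], check=False)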
Showing 32 changed files with 56 additions and 56 deletions.
2 changes: 1 addition & 1 deletion doc/examples/ROMS_ocean_model.ipynb
@@ -77,7 +77,7 @@
"ds = xr.tutorial.open_dataset(\"ROMS_example.nc\", chunks={\"ocean_time\": 1})\n",
"\n",
"# This is a way to turn on chunking and lazy evaluation. Opening with mfdataset, or\n",
"# setting the chunking in the open_dataset would also achive this.\n",
"# setting the chunking in the open_dataset would also achieve this.\n",
"ds"
]
},
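The notebook comment above names two equivalent ways to get lazy, dask-chunked arrays; a short sketch of both (assumptions: the tutorial file is downloadable, and the glob pattern is hypothetical):

import xarray as xr

# chunks= in open_dataset turns on dask chunking and lazy evaluation
ds = xr.tutorial.open_dataset("ROMS_example.nc", chunks={"ocean_time": 1})

# opening a collection of files with open_mfdataset is the other route
# ds = xr.open_mfdataset("roms_*.nc", chunks={"ocean_time": 1})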
2 changes: 1 addition & 1 deletion doc/gallery/plot_colorbar_center.py
@@ -38,6 +38,6 @@
ax4.set_title("Celsius: center=False")
ax4.set_ylabel("")

# Mke it nice
# Make it nice
plt.tight_layout()
plt.show()
4 changes: 2 additions & 2 deletions doc/internals/how-to-add-new-backend.rst
@@ -317,7 +317,7 @@ grouped in three types of indexes
:py:class:`~xarray.core.indexing.OuterIndexer` and
:py:class:`~xarray.core.indexing.VectorizedIndexer`.
This implies that the implementation of the method ``__getitem__`` can be tricky.
In oder to simplify this task, Xarray provides a helper function,
In order to simplify this task, Xarray provides a helper function,
:py:func:`~xarray.core.indexing.explicit_indexing_adapter`, that transforms
all the input ``indexer`` types (`basic`, `outer`, `vectorized`) in a tuple
which is interpreted correctly by your backend.
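A rough sketch of the pattern this hunk describes: a toy BackendArray whose raw indexing only understands basic integer/slice keys, with the helper decomposing outer and vectorized indexers for it (class and method names here are invented for illustration):

import numpy as np
from xarray.backends import BackendArray
from xarray.core import indexing

class ToyBackendArray(BackendArray):
    # Hypothetical wrapper around an in-memory array standing in for a file.
    def __init__(self, data):
        self._data = np.asarray(data)
        self.shape = self._data.shape
        self.dtype = self._data.dtype

    def __getitem__(self, key):
        # The adapter reduces basic/outer/vectorized keys to what we support.
        return indexing.explicit_indexing_adapter(
            key, self.shape, indexing.IndexingSupport.BASIC, self._raw_indexing
        )

    def _raw_indexing(self, key):
        # Here key is a plain tuple of ints and slices.
        return self._data[key]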
@@ -426,7 +426,7 @@ The ``OUTER_1VECTOR`` indexing shall supports number, slices and at most one
list. The behaviour with the list shall be the same of ``OUTER`` indexing.

If you support more complex indexing as `explicit indexing` or
`numpy indexing`, you can have a look to the implemetation of Zarr backend and Scipy backend,
`numpy indexing`, you can have a look to the implementation of Zarr backend and Scipy backend,
currently available in :py:mod:`~xarray.backends` module.

.. _RST preferred_chunks:
2 changes: 1 addition & 1 deletion doc/internals/zarr-encoding-spec.rst
@@ -14,7 +14,7 @@ for the storage of the NetCDF data model in Zarr; see
discussion.

First, Xarray can only read and write Zarr groups. There is currently no support
for reading / writting individual Zarr arrays. Zarr groups are mapped to
for reading / writing individual Zarr arrays. Zarr groups are mapped to
Xarray ``Dataset`` objects.

Second, from Xarray's point of view, the key difference between
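A minimal round trip illustrating the group-to-Dataset mapping described here (assumptions: zarr is installed; "example.zarr" is a throwaway path):

import numpy as np
import xarray as xr

ds = xr.Dataset({"t": ("x", np.arange(4))})
ds.to_zarr("example.zarr", mode="w")   # always writes a Zarr group
back = xr.open_zarr("example.zarr")    # the group is read back as a Dataset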
2 changes: 1 addition & 1 deletion doc/roadmap.rst
@@ -112,7 +112,7 @@ A cleaner model would be to elevate ``indexes`` to an explicit part of
xarray's data model, e.g., as attributes on the ``Dataset`` and
``DataArray`` classes. Indexes would need to be propagated along with
coordinates in xarray operations, but will no longer would need to have
a one-to-one correspondance with coordinate variables. Instead, an index
a one-to-one correspondence with coordinate variables. Instead, an index
should be able to refer to multiple (possibly multidimensional)
coordinates that define it. See `GH
1603 <https://github.com/pydata/xarray/issues/1603>`__ for full details
2 changes: 1 addition & 1 deletion doc/user-guide/time-series.rst
@@ -101,7 +101,7 @@ You can also select a particular time by indexing with a
ds.sel(time=datetime.time(12))
For more details, read the pandas documentation and the section on `Indexing Using Datetime Components <datetime_component_indexing>`_ (i.e. using the ``.dt`` acessor).
For more details, read the pandas documentation and the section on `Indexing Using Datetime Components <datetime_component_indexing>`_ (i.e. using the ``.dt`` accessor).

.. _dt_accessor:

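A small sketch of the datetime-component indexing the corrected sentence points to (assumption: a toy hourly time coordinate):

import datetime

import pandas as pd
import xarray as xr

ds = xr.Dataset(coords={"time": pd.date_range("2000-01-01", periods=48, freq="H")})
ds.sel(time=datetime.time(12))  # every timestamp whose clock time is 12:00
ds.time.dt.hour                 # the hour component, via the .dt accessor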
12 changes: 6 additions & 6 deletions doc/whats-new.rst
@@ -138,7 +138,7 @@ Bug fixes
By `Michael Delgado <https://github.com/delgadom>`_.
- `dt.season <https://docs.xarray.dev/en/stable/generated/xarray.DataArray.dt.season.html>`_ can now handle NaN and NaT. (:pull:`5876`).
By `Pierre Loicq <https://github.com/pierreloicq>`_.
- Determination of zarr chunks handles empty lists for encoding chunks or variable chunks that occurs in certain cirumstances (:pull:`5526`). By `Chris Roat <https://github.com/chrisroat>`_.
- Determination of zarr chunks handles empty lists for encoding chunks or variable chunks that occurs in certain circumstances (:pull:`5526`). By `Chris Roat <https://github.com/chrisroat>`_.

Internal Changes
~~~~~~~~~~~~~~~~
@@ -706,7 +706,7 @@ Breaking changes
By `Alessandro Amici <https://github.com/alexamici>`_.
- Functions that are identities for 0d data return the unchanged data
if axis is empty. This ensures that Datasets where some variables do
not have the averaged dimensions are not accidentially changed
not have the averaged dimensions are not accidentally changed
(:issue:`4885`, :pull:`5207`).
By `David Schwörer <https://github.com/dschwoerer>`_.
- :py:attr:`DataArray.coarsen` and :py:attr:`Dataset.coarsen` no longer support passing ``keep_attrs``
@@ -1419,7 +1419,7 @@ New Features
Enhancements
~~~~~~~~~~~~
- Performance improvement of :py:meth:`DataArray.interp` and :py:func:`Dataset.interp`
We performs independant interpolation sequentially rather than interpolating in
We performs independent interpolation sequentially rather than interpolating in
one large multidimensional space. (:issue:`2223`)
By `Keisuke Fujii <https://github.com/fujiisoup>`_.
- :py:meth:`DataArray.interp` now support interpolations over chunked dimensions (:pull:`4155`). By `Alexandre Poux <https://github.com/pums974>`_.
@@ -2770,7 +2770,7 @@ Breaking changes
- ``Dataset.T`` has been removed as a shortcut for :py:meth:`Dataset.transpose`.
Call :py:meth:`Dataset.transpose` directly instead.
- Iterating over a ``Dataset`` now includes only data variables, not coordinates.
Similarily, calling ``len`` and ``bool`` on a ``Dataset`` now
Similarly, calling ``len`` and ``bool`` on a ``Dataset`` now
includes only data variables.
- ``DataArray.__contains__`` (used by Python's ``in`` operator) now checks
array data, not coordinates.
@@ -3908,7 +3908,7 @@ Bug fixes
(:issue:`1606`).
By `Joe Hamman <https://github.com/jhamman>`_.

- Fix bug when using ``pytest`` class decorators to skiping certain unittests.
- Fix bug when using ``pytest`` class decorators to skipping certain unittests.
The previous behavior unintentionally causing additional tests to be skipped
(:issue:`1531`). By `Joe Hamman <https://github.com/jhamman>`_.

@@ -5656,7 +5656,7 @@ Bug fixes
- Several bug fixes related to decoding time units from netCDF files
(:issue:`316`, :issue:`330`). Thanks Stefan Pfenninger!
- xray no longer requires ``decode_coords=False`` when reading datasets with
unparseable coordinate attributes (:issue:`308`).
unparsable coordinate attributes (:issue:`308`).
- Fixed ``DataArray.loc`` indexing with ``...`` (:issue:`318`).
- Fixed an edge case that resulting in an error when reindexing
multi-dimensional variables (:issue:`315`).
2 changes: 1 addition & 1 deletion xarray/backends/common.py
@@ -160,7 +160,7 @@ def sync(self, compute=True):
import dask.array as da

# TODO: consider wrapping targets with dask.delayed, if this makes
# for any discernable difference in perforance, e.g.,
# for any discernible difference in perforance, e.g.,
# targets = [dask.delayed(t) for t in self.targets]

delayed_store = da.store(
2 changes: 1 addition & 1 deletion xarray/backends/file_manager.py
@@ -204,7 +204,7 @@ def _acquire_with_cache_info(self, needs_lock=True):
kwargs["mode"] = self._mode
file = self._opener(*self._args, **kwargs)
if self._mode == "w":
# ensure file doesn't get overriden when opened again
# ensure file doesn't get overridden when opened again
self._mode = "a"
self._cache[self._key] = file
return file, False
2 changes: 1 addition & 1 deletion xarray/backends/pseudonetcdf_.py
@@ -105,7 +105,7 @@ class PseudoNetCDFBackendEntrypoint(BackendEntrypoint):
available = has_pseudonetcdf

# *args and **kwargs are not allowed in open_backend_dataset_ kwargs,
# unless the open_dataset_parameters are explicity defined like this:
# unless the open_dataset_parameters are explicitly defined like this:
open_dataset_parameters = (
"filename_or_obj",
"mask_and_scale",
2 changes: 1 addition & 1 deletion xarray/backends/zarr.py
@@ -179,7 +179,7 @@ def _determine_zarr_chunks(enc_chunks, var_chunks, ndim, name, safe_chunks):


def _get_zarr_dims_and_attrs(zarr_obj, dimension_key):
# Zarr arrays do not have dimenions. To get around this problem, we add
# Zarr arrays do not have dimensions. To get around this problem, we add
# an attribute that specifies the dimension. We have to hide this attribute
# when we send the attributes to the user.
# zarr_obj can be either a zarr group or zarr array
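The fixed comment refers to recording each array's dimension names in a hidden attribute; a hedged sketch (assumptions: zarr is installed, "demo.zarr" is a throwaway path, and "_ARRAY_DIMENSIONS" is the key named in xarray's Zarr encoding spec):

import xarray as xr
import zarr

xr.Dataset({"t": ("x", [1, 2, 3])}).to_zarr("demo.zarr", mode="w")
arr = zarr.open_group("demo.zarr")["t"]
print(arr.attrs["_ARRAY_DIMENSIONS"])  # ["x"], hidden from user attrs on read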
2 changes: 1 addition & 1 deletion xarray/convert.py
@@ -235,7 +235,7 @@ def _iris_cell_methods_to_str(cell_methods_obj):


def _name(iris_obj, default="unknown"):
"""Mimicks `iris_obj.name()` but with different name resolution order.
"""Mimics `iris_obj.name()` but with different name resolution order.
Similar to iris_obj.name() method, but using iris_obj.var_name first to
enable roundtripping.
4 changes: 2 additions & 2 deletions xarray/core/accessor_str.py
@@ -456,7 +456,7 @@ def cat(
Strings or array-like of strings to concatenate elementwise with
the current DataArray.
sep : str or array-like of str, default: "".
Seperator to use between strings.
Separator to use between strings.
It is broadcast in the same way as the other input strings.
If array-like, its dimensions will be placed at the end of the output array dimensions.
@@ -539,7 +539,7 @@ def join(
Only one dimension is allowed at a time.
Optional for 0D or 1D DataArrays, required for multidimensional DataArrays.
sep : str or array-like, default: "".
Seperator to use between strings.
Separator to use between strings.
It is broadcast in the same way as the other input strings.
If array-like, its dimensions will be placed at the end of the output array dimensions.
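Both corrected docstrings describe the same ``sep`` parameter; a tiny usage sketch on toy arrays:

import xarray as xr

a = xr.DataArray(["a", "b"], dims="x")
b = xr.DataArray(["1", "2"], dims="x")
a.str.cat(b, sep="-")         # elementwise concatenation: ["a-1", "b-2"]
a.str.join(dim="x", sep=",")  # join along a dimension: "a,b"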
2 changes: 1 addition & 1 deletion xarray/core/combine.py
@@ -135,7 +135,7 @@ def _infer_concat_order_from_coords(datasets):
order = rank.astype(int).values - 1

# Append positions along extra dimension to structure which
# encodes the multi-dimensional concatentation order
# encodes the multi-dimensional concatenation order
tile_ids = [
tile_id + (position,) for tile_id, position in zip(tile_ids, order)
]
2 changes: 1 addition & 1 deletion xarray/core/computation.py
@@ -1941,7 +1941,7 @@ def unify_chunks(*objects: T_Xarray) -> tuple[T_Xarray, ...]:
for obj in objects
]

# Get argumets to pass into dask.array.core.unify_chunks
# Get arguments to pass into dask.array.core.unify_chunks
unify_chunks_args = []
sizes: dict[Hashable, int] = {}
for ds in datasets:
2 changes: 1 addition & 1 deletion xarray/core/concat.py
@@ -455,7 +455,7 @@ def _dataset_concat(
if (dim in coord_names or dim in data_names) and dim not in dim_names:
datasets = [ds.expand_dims(dim) for ds in datasets]

# determine which variables to concatentate
# determine which variables to concatenate
concat_over, equals, concat_dim_lengths = _calc_concat_over(
datasets, dim, dim_names, data_vars, coords, compat
)
4 changes: 2 additions & 2 deletions xarray/core/dataarray.py
@@ -2584,7 +2584,7 @@ def interpolate_na(
)

def ffill(self, dim: Hashable, limit: int = None) -> DataArray:
"""Fill NaN values by propogating values forward
"""Fill NaN values by propagating values forward
*Requires bottleneck.*
@@ -2609,7 +2609,7 @@ def ffill(self, dim: Hashable, limit: int = None) -> DataArray:
return ffill(self, dim, limit=limit)

def bfill(self, dim: Hashable, limit: int = None) -> DataArray:
"""Fill NaN values by propogating values backward
"""Fill NaN values by propagating values backward
*Requires bottleneck.*
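A short sketch of the forward/backward propagation these docstrings describe (assumption: bottleneck is installed, as the docstrings require):

import numpy as np
import xarray as xr

da = xr.DataArray([0.0, np.nan, np.nan, 3.0], dims="x")
da.ffill(dim="x")           # [0, 0, 0, 3]   -- carry last valid value forward
da.ffill(dim="x", limit=1)  # [0, 0, nan, 3] -- propagate at most one step
da.bfill(dim="x")           # [0, 3, 3, 3]   -- carry next valid value backward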
12 changes: 6 additions & 6 deletions xarray/core/dataset.py
@@ -379,7 +379,7 @@ def _check_chunks_compatibility(var, chunks, preferred_chunks):


def _get_chunk(var, chunks):
# chunks need to be explicity computed to take correctly into accout
# chunks need to be explicitly computed to take correctly into account
# backend preferred chunking
import dask.array as da

@@ -1529,7 +1529,7 @@ def __setitem__(self, key: Hashable | list[Hashable] | Mapping, value) -> None:
except Exception as e:
if processed:
raise RuntimeError(
"An error occured while setting values of the"
"An error occurred while setting values of the"
f" variable '{name}'. The following variables have"
f" been successfully updated:\n{processed}"
) from e
@@ -1976,7 +1976,7 @@ def to_zarr(
metadata for existing stores (falling back to non-consolidated).
append_dim : hashable, optional
If set, the dimension along which the data will be appended. All
other dimensions on overriden variables must remain the same size.
other dimensions on overridden variables must remain the same size.
region : dict, optional
Optional mapping from dimension names to integer slices along
dataset dimensions to indicate the region of existing zarr array(s)
@@ -2001,7 +2001,7 @@
Set False to override this restriction; however, data may become corrupted
if Zarr arrays are written in parallel. This option may be useful in combination
with ``compute=False`` to initialize a Zarr from an existing
Dataset with aribtrary chunk structure.
Dataset with arbitrary chunk structure.
storage_options : dict, optional
Any additional parameters for the storage backend (ignored for local
paths).
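A hedged sketch of the ``append_dim`` behaviour described above (assumptions: "store.zarr" already holds a compatible dataset and "more.nc" is a hypothetical file of extra timesteps):

import xarray as xr

extra = xr.open_dataset("more.nc")
# grow the store along "time"; every other dimension on the variables
# being appended to must keep its existing size
extra.to_zarr("store.zarr", append_dim="time")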
@@ -4930,7 +4930,7 @@ def interpolate_na(
return new

def ffill(self, dim: Hashable, limit: int = None) -> Dataset:
"""Fill NaN values by propogating values forward
"""Fill NaN values by propagating values forward
*Requires bottleneck.*
@@ -4956,7 +4956,7 @@ def ffill(self, dim: Hashable, limit: int = None) -> Dataset:
return new

def bfill(self, dim: Hashable, limit: int = None) -> Dataset:
"""Fill NaN values by propogating values backward
"""Fill NaN values by propagating values backward
*Requires bottleneck.*
2 changes: 1 addition & 1 deletion xarray/core/merge.py
@@ -587,7 +587,7 @@ def merge_core(
Parameters
----------
objects : list of mapping
All values must be convertable to labeled arrays.
All values must be convertible to labeled arrays.
compat : {"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, optional
Compatibility checks to use when merging variables.
join : {"outer", "inner", "left", "right"}, optional
8 changes: 4 additions & 4 deletions xarray/core/missing.py
@@ -573,7 +573,7 @@ def _localize(var, indexes_coords):

def _floatize_x(x, new_x):
"""Make x and new_x float.
This is particulary useful for datetime dtype.
This is particularly useful for datetime dtype.
x, new_x: tuple of np.ndarray
"""
x = list(x)
@@ -624,7 +624,7 @@ def interp(var, indexes_coords, method, **kwargs):
kwargs["bounds_error"] = kwargs.get("bounds_error", False)

result = var
# decompose the interpolation into a succession of independant interpolation
# decompose the interpolation into a succession of independent interpolation
for indexes_coords in decompose_interp(indexes_coords):
var = result

@@ -731,7 +731,7 @@ def interp_func(var, x, new_x, method, kwargs):
for i in range(new_x[0].ndim)
}

# if usefull, re-use localize for each chunk of new_x
# if useful, re-use localize for each chunk of new_x
localize = (method in ["linear", "nearest"]) and (new_x[0].chunks is not None)

# scipy.interpolate.interp1d always forces to float.
@@ -825,7 +825,7 @@ def _dask_aware_interpnd(var, *coords, interp_func, interp_kwargs, localize=True


def decompose_interp(indexes_coords):
"""Decompose the interpolation into a succession of independant interpolation keeping the order"""
"""Decompose the interpolation into a succession of independent interpolation keeping the order"""

dest_dims = [
dest[1].dims if dest[1].ndim > 0 else [dim]
4 changes: 2 additions & 2 deletions xarray/core/rolling.py
@@ -33,7 +33,7 @@
Returns
-------
reduced : same type as caller
New object with `{name}` applied along its rolling dimnension.
New object with `{name}` applied along its rolling dimension.
"""


@@ -767,7 +767,7 @@ def __init__(self, obj, windows, boundary, side, coord_func):
exponential window along (e.g. `time`) to the size of the moving window.
boundary : 'exact' | 'trim' | 'pad'
If 'exact', a ValueError will be raised if dimension size is not a
multiple of window size. If 'trim', the excess indexes are trimed.
multiple of window size. If 'trim', the excess indexes are trimmed.
If 'pad', NA will be padded.
side : 'left' or 'right' or mapping from dimension to 'left' or 'right'
coord_func : mapping from coordinate name to func.
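The three ``boundary`` modes in this docstring, sketched on a length-10 array coarsened by 3 so the sizes deliberately do not divide evenly:

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="x")
da.coarsen(x=3, boundary="trim").mean()  # drops the one excess element
da.coarsen(x=3, boundary="pad").mean()   # pads with NaN to a multiple of 3
# boundary="exact" (the default) would raise a ValueError for this shape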
2 changes: 1 addition & 1 deletion xarray/core/variable.py
@@ -700,7 +700,7 @@ def _broadcast_indexes_outer(self, key):
return dims, OuterIndexer(tuple(new_key)), None

def _nonzero(self):
"""Equivalent numpy's nonzero but returns a tuple of Varibles."""
"""Equivalent numpy's nonzero but returns a tuple of Variables."""
# TODO we should replace dask's native nonzero
# after https://github.com/dask/dask/issues/1076 is implemented.
nonzeros = np.nonzero(self.data)
6 changes: 3 additions & 3 deletions xarray/tests/test_backends.py
@@ -1937,7 +1937,7 @@ def test_chunk_encoding_with_dask(self):
with self.roundtrip(ds_chunk4) as actual:
assert (4,) == actual["var1"].encoding["chunks"]

# TODO: remove this failure once syncronized overlapping writes are
# TODO: remove this failure once synchronized overlapping writes are
# supported by xarray
ds_chunk4["var1"].encoding.update({"chunks": 5})
with pytest.raises(NotImplementedError, match=r"named 'var1' would overlap"):
@@ -2255,7 +2255,7 @@ def test_write_region_mode(self, mode):

@requires_dask
def test_write_preexisting_override_metadata(self):
"""Metadata should be overriden if mode="a" but not in mode="r+"."""
"""Metadata should be overridden if mode="a" but not in mode="r+"."""
original = Dataset(
{"u": (("x",), np.zeros(10), {"variable": "original"})},
attrs={"global": "original"},
@@ -2967,7 +2967,7 @@ def test_open_fileobj(self):
with pytest.raises(TypeError, match="not a valid NetCDF 3"):
open_dataset(f, engine="scipy")

# TOOD: this additional open is required since scipy seems to close the file
# TODO: this additional open is required since scipy seems to close the file
# when it fails on the TypeError (though didn't when we used
# `raises_regex`?). Ref https://github.com/pydata/xarray/pull/5191
with open(tmp_file, "rb") as f:
2 changes: 1 addition & 1 deletion xarray/tests/test_calendar_ops.py
@@ -161,7 +161,7 @@ def test_convert_calendar_errors():
with pytest.raises(ValueError, match="Argument `align_on` must be specified"):
convert_calendar(src_nl, "360_day")

# Standard doesn't suuport year 0
# Standard doesn't support year 0
with pytest.raises(
ValueError, match="Source time coordinate contains dates with year 0"
):