Merge remote-tracking branch 'upstream/master' into fix/plot-broadcast
* upstream/master:
  format indexing.rst code with black (pydata#3511)
  add missing pint integration tests (pydata#3508)
  DOC: update bottleneck repo url (pydata#3507)
  add drop_sel, drop_vars, map to api.rst (pydata#3506)
  remove syntax warning (pydata#3505)
  Dataset.map, GroupBy.map, Resample.map (pydata#3459)
  tests for datasets with units (pydata#3447)
  fix pandas-dev tests (pydata#3491)
  unpin pseudonetcdf (pydata#3496)
  whatsnew corrections (pydata#3494)
  drop_vars; deprecate drop for variables (pydata#3475)
  uamiv test using only raw uamiv variables (pydata#3485)
  Optimize dask array equality checks. (pydata#3453)
  Propagate indexes in DataArray binary operations. (pydata#3481)
  python 3.8 tests (pydata#3477)
dcherian committed Nov 13, 2019
2 parents 279ff1d + b74f80c commit 4489394
Showing 37 changed files with 2,703 additions and 417 deletions.
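The user-facing changes in this merge are mostly API renames (``drop`` → ``drop_vars``/``drop_sel``, ``apply`` → ``map``). A minimal sketch of the renamed calls, assuming an installed xarray that already includes these renames:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"foo": (("x", "y"), np.random.rand(4, 3))},
    coords={"x": [10, 20, 30, 40]},
)

# Dataset.map supersedes Dataset.apply (pydata#3459)
mapped = ds.map(np.sin)

# drop_vars supersedes drop for variables (pydata#3475)
no_foo = ds.drop_vars("foo")

# drop_sel drops by coordinate label rather than by name (pydata#3475)
subset = ds.drop_sel(x=[10, 40])
```

The old spellings continued to work with deprecation warnings at the time of this merge.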
2 changes: 2 additions & 0 deletions azure-pipelines.yml
@@ -18,6 +18,8 @@ jobs:
conda_env: py36
py37:
conda_env: py37
+  py38:
+    conda_env: py38
py37-upstream-dev:
conda_env: py37
upstream_dev: true
2 changes: 1 addition & 1 deletion ci/azure/install.yml
@@ -16,7 +16,7 @@ steps:
--pre \
--upgrade \
matplotlib \
- pandas=0.26.0.dev0+628.g03c1a3db2 \ # FIXME https://github.com/pydata/xarray/issues/3440
+ pandas \
scipy
# numpy \ # FIXME https://github.com/pydata/xarray/issues/3409
pip install \
2 changes: 1 addition & 1 deletion ci/requirements/py36.yml
@@ -29,7 +29,7 @@ dependencies:
- pandas
- pint
- pip
- - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+ - pseudonetcdf
- pydap
- pynio
- pytest
2 changes: 1 addition & 1 deletion ci/requirements/py37-windows.yml
@@ -29,7 +29,7 @@ dependencies:
- pandas
- pint
- pip
- - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+ - pseudonetcdf
- pydap
# - pynio # Not available on Windows
- pytest
2 changes: 1 addition & 1 deletion ci/requirements/py37.yml
@@ -29,7 +29,7 @@ dependencies:
- pandas
- pint
- pip
- - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+ - pseudonetcdf
- pydap
- pynio
- pytest
15 changes: 15 additions & 0 deletions ci/requirements/py38.yml
@@ -0,0 +1,15 @@
+name: xarray-tests
+channels:
+  - conda-forge
+dependencies:
+  - python=3.8
+  - pip
+  - pip:
+    - coveralls
+    - dask
+    - distributed
+    - numpy
+    - pandas
+    - pytest
+    - pytest-cov
+    - pytest-env
14 changes: 8 additions & 6 deletions doc/api.rst
@@ -94,7 +94,7 @@ Dataset contents
Dataset.rename_dims
Dataset.swap_dims
Dataset.expand_dims
-   Dataset.drop
+   Dataset.drop_vars
Dataset.drop_dims
Dataset.set_coords
Dataset.reset_coords
@@ -118,6 +118,7 @@ Indexing
Dataset.loc
Dataset.isel
Dataset.sel
+   Dataset.drop_sel
Dataset.head
Dataset.tail
Dataset.thin
@@ -154,7 +155,7 @@ Computation
.. autosummary::
:toctree: generated/

-   Dataset.apply
+   Dataset.map
Dataset.reduce
Dataset.groupby
Dataset.groupby_bins
@@ -263,7 +264,7 @@ DataArray contents
DataArray.rename
DataArray.swap_dims
DataArray.expand_dims
-   DataArray.drop
+   DataArray.drop_vars
DataArray.reset_coords
DataArray.copy

@@ -283,6 +284,7 @@ Indexing
DataArray.loc
DataArray.isel
DataArray.sel
+   DataArray.drop_sel
DataArray.head
DataArray.tail
DataArray.thin
@@ -542,10 +544,10 @@ GroupBy objects
:toctree: generated/

core.groupby.DataArrayGroupBy
-   core.groupby.DataArrayGroupBy.apply
+   core.groupby.DataArrayGroupBy.map
core.groupby.DataArrayGroupBy.reduce
core.groupby.DatasetGroupBy
-   core.groupby.DatasetGroupBy.apply
+   core.groupby.DatasetGroupBy.map
core.groupby.DatasetGroupBy.reduce

Rolling objects
@@ -566,7 +568,7 @@ Resample objects
================

Resample objects also implement the GroupBy interface
-(methods like ``apply()``, ``reduce()``, ``mean()``, ``sum()``, etc.).
+(methods like ``map()``, ``reduce()``, ``mean()``, ``sum()``, etc.).

.. autosummary::
:toctree: generated/
6 changes: 3 additions & 3 deletions doc/computation.rst
@@ -183,7 +183,7 @@ a value when aggregating:

Note that rolling window aggregations are faster and use less memory when bottleneck_ is installed. This only applies to numpy-backed xarray objects.

-.. _bottleneck: https://github.com/kwgoodman/bottleneck/
+.. _bottleneck: https://github.com/pydata/bottleneck/

We can also manually iterate through ``Rolling`` objects:

@@ -462,13 +462,13 @@ Datasets support most of the same methods found on data arrays:
abs(ds)
Datasets also support NumPy ufuncs (requires NumPy v1.13 or newer), or
-alternatively you can use :py:meth:`~xarray.Dataset.apply` to apply a function
+alternatively you can use :py:meth:`~xarray.Dataset.map` to map a function
to each variable in a dataset:

.. ipython:: python
np.sin(ds)
-    ds.apply(np.sin)
+    ds.map(np.sin)
Datasets also use looping over variables for *broadcasting* in binary
arithmetic. You can do arithmetic between any ``DataArray`` and a dataset:
2 changes: 1 addition & 1 deletion doc/dask.rst
@@ -292,7 +292,7 @@ For the best performance when using Dask's multi-threaded scheduler, wrap a
function that already releases the global interpreter lock, which fortunately
already includes most NumPy and Scipy functions. Here we show an example
using NumPy operations and a fast function from
-`bottleneck <https://github.com/kwgoodman/bottleneck>`__, which
+`bottleneck <https://github.com/pydata/bottleneck>`__, which
we use to calculate `Spearman's rank-correlation coefficient <https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient>`__:

.. code-block:: python
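The dask.rst passage above describes wrapping GIL-releasing NumPy code for Dask's multi-threaded scheduler. A minimal, self-contained sketch of that pattern (illustrative only; this code is not part of the diff):

```python
import dask.array as da

# NumPy reductions release the GIL, so Dask's threaded scheduler
# can reduce the (250, 250) chunks in parallel.
x = da.random.random((1_000, 1_000), chunks=(250, 250))

# Explicitly request the multi-threaded scheduler; the result is a
# plain NumPy array of per-column means, shape (1000,).
col_means = x.mean(axis=0).compute(scheduler="threads")
```

For functions that hold the GIL (most pure-Python code), the docs instead recommend Dask's process- or distributed-based schedulers.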
4 changes: 2 additions & 2 deletions doc/data-structures.rst
@@ -393,14 +393,14 @@ methods (like pandas) for transforming datasets into new objects.

For removing variables, you can select and drop an explicit list of
variables by indexing with a list of names or using the
-:py:meth:`~xarray.Dataset.drop` methods to return a new ``Dataset``. These
+:py:meth:`~xarray.Dataset.drop_vars` methods to return a new ``Dataset``. These
operations keep around coordinates:

.. ipython:: python
ds[['temperature']]
ds[['temperature', 'temperature_double']]
-    ds.drop('temperature')
+    ds.drop_vars('temperature')
To remove a dimension, you can use :py:meth:`~xarray.Dataset.drop_dims` method.
Any variables using that dimension are dropped:
15 changes: 8 additions & 7 deletions doc/groupby.rst
@@ -35,10 +35,11 @@ Let's create a simple example dataset:
.. ipython:: python
-    ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(4, 3))},
-                    coords={'x': [10, 20, 30, 40],
-                            'letters': ('x', list('abba'))})
-    arr = ds['foo']
+    ds = xr.Dataset(
+        {"foo": (("x", "y"), np.random.rand(4, 3))},
+        coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
+    )
+    arr = ds["foo"]
ds
If we groupby the name of a variable or coordinate in a dataset (we can also
@@ -93,15 +94,15 @@ Apply
~~~~~

To apply a function to each group, you can use the flexible
-:py:meth:`~xarray.DatasetGroupBy.apply` method. The resulting objects are automatically
+:py:meth:`~xarray.DatasetGroupBy.map` method. The resulting objects are automatically
concatenated back together along the group axis:

.. ipython:: python
def standardize(x):
return (x - x.mean()) / x.std()
-    arr.groupby('letters').apply(standardize)
+    arr.groupby('letters').map(standardize)
GroupBy objects also have a :py:meth:`~xarray.DatasetGroupBy.reduce` method and
methods like :py:meth:`~xarray.DatasetGroupBy.mean` as shortcuts for applying an
@@ -202,7 +203,7 @@ __ http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#_two_dimen
dims=['ny','nx'])
da
da.groupby('lon').sum(...)
-    da.groupby('lon').apply(lambda x: x - x.mean(), shortcut=False)
+    da.groupby('lon').map(lambda x: x - x.mean(), shortcut=False)
Because multidimensional groups have the ability to generate a very large
number of bins, coarse-binning via :py:meth:`~xarray.Dataset.groupby_bins`
2 changes: 1 addition & 1 deletion doc/howdoi.rst
@@ -44,7 +44,7 @@ How do I ...
* - convert a possibly irregularly sampled timeseries to a regularly sampled timeseries
- :py:meth:`DataArray.resample`, :py:meth:`Dataset.resample` (see :ref:`resampling` for more)
* - apply a function on all data variables in a Dataset
-      - :py:meth:`Dataset.apply`
+      - :py:meth:`Dataset.map`
* - write xarray objects with complex values to a netCDF file
- :py:func:`Dataset.to_netcdf`, :py:func:`DataArray.to_netcdf` specifying ``engine="h5netcdf", invalid_netcdf=True``
* - make xarray objects look like other xarray objects