forked from pydata/xarray

Commit

Merge branch 'master' into fix/bounds_encode_2
* master: (31 commits)
  Add quantile method to GroupBy (pydata#2828)
  rolling_exp (nee ewm) (pydata#2650)
  Ensure explicitly indexed arrays are preserved (pydata#3027)
  add back dask-dev tests (pydata#3025)
  ENH: keepdims=True for xarray reductions (pydata#3033)
  Revert cmap fix (pydata#3038)
  Add "errors" keyword argument to drop() and drop_dims() (pydata#2994) (pydata#3028)
  More consistency checks (pydata#2859)
  Check types in travis (pydata#3024)
  Update issue templates (pydata#3019)
  Add pytest markers to avoid warnings (pydata#3023)
  Feature/merge errormsg (pydata#2971)
  More support for missing_value. (pydata#2973)
  Use flake8 rather than pycodestyle (pydata#3010)
  Pandas labels deprecation (pydata#3016)
  Pytest capture uses match, not message (pydata#3011)
  dask-dev tests to allowed failures in travis (pydata#3014)
  Fix 'to_masked_array' computing dask arrays twice (pydata#3006)
  str accessor (pydata#2991)
  fix safe_cast_to_index (pydata#3001)
  ...
dcherian committed Jun 24, 2019
2 parents 34d0e60 + b054c31 commit f187ca1
Showing 74 changed files with 3,255 additions and 386 deletions.
30 changes: 30 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,30 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

#### MCVE Code Sample

In order for the maintainers to efficiently understand and prioritize issues, we ask that you post a "Minimal, Complete and Verifiable Example" (MCVE): http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

```python
# Your code here

```

#### Problem Description

[This should explain **why** the current behavior is a problem and why the expected output is a better solution.]

#### Expected Output

#### Output of ``xr.show_versions()``

<details>
# Paste the output of xr.show_versions() here

</details>
18 changes: 4 additions & 14 deletions .pep8speaks.yml
@@ -1,16 +1,6 @@
# File : .pep8speaks.yml

# This should be kept in sync with the duplicate config in the [pycodestyle]
# block of setup.cfg.
# https://github.com/OrkoHunter/pep8speaks for more info
# pep8speaks will use the flake8 configs in `setup.cfg`

scanner:
diff_only: False # If True, errors caused by only the patch are shown

pycodestyle:
max-line-length: 79
ignore: # Errors and warnings to ignore
- E402 # module level import not at top of file
- E731 # do not assign a lambda expression, use a def
- E741 # ambiguous variable name
- W503 # line break before binary operator
- W504 # line break after binary operator
diff_only: False
linter: flake8
14 changes: 10 additions & 4 deletions .travis.yml
@@ -22,6 +22,7 @@ matrix:
- env: CONDA_ENV=py36-zarr-dev
- env: CONDA_ENV=docs
- env: CONDA_ENV=lint
- env: CONDA_ENV=typing
- env: CONDA_ENV=py36-hypothesis

allow_failures:
@@ -30,6 +31,7 @@ matrix:
- EXTRA_FLAGS="--run-flaky --run-network-tests"
- env: CONDA_ENV=py36-pandas-dev
- env: CONDA_ENV=py36-zarr-dev
- env: CONDA_ENV=typing

before_install:
- wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
@@ -40,9 +42,10 @@ before_install:
- conda info -a

install:
- if [[ "$CONDA_ENV" == "docs" ]]; then
- |
if [[ "$CONDA_ENV" == "docs" ]]; then
conda env create -n test_env --file doc/environment.yml;
elif [[ "$CONDA_ENV" == "lint" ]]; then
elif [[ "$CONDA_ENV" == "lint" ]] || [[ "$CONDA_ENV" == "typing" ]] ; then
conda env create -n test_env --file ci/requirements-py37.yml;
else
conda env create -n test_env --file ci/requirements-$CONDA_ENV.yml;
@@ -56,11 +59,14 @@ script:
- which python
- python --version
- python -OO -c "import xarray"
- if [[ "$CONDA_ENV" == "docs" ]]; then
- |
if [[ "$CONDA_ENV" == "docs" ]]; then
cd doc;
sphinx-build -n -j auto -b html -d _build/doctrees . _build/html;
elif [[ "$CONDA_ENV" == "lint" ]]; then
pycodestyle xarray ;
flake8 ;
elif [[ "$CONDA_ENV" == "typing" ]]; then
mypy . ;
elif [[ "$CONDA_ENV" == "py36-hypothesis" ]]; then
pytest properties ;
else
5 changes: 2 additions & 3 deletions asv_bench/benchmarks/__init__.py
@@ -1,6 +1,5 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import, division, print_function

import itertools

import numpy as np
5 changes: 2 additions & 3 deletions ci/requirements-py36.yml
@@ -14,7 +14,7 @@ dependencies:
- pytest-cov
- pytest-env
- coveralls
- pycodestyle
- flake8
- numpy>=1.12
- pandas>=0.19
- scipy
@@ -24,12 +24,11 @@ dependencies:
- bottleneck
- zarr
- pseudonetcdf>=3.0.1
- eccodes
- cfgrib>=0.9.2
- cdms2
- pynio
- iris>=1.10
- pydap
- lxml
- pip:
- cfgrib>=0.9.2
- mypy==0.660
6 changes: 3 additions & 3 deletions ci/requirements-py37.yml
@@ -15,7 +15,7 @@ dependencies:
- pytest-cov
- pytest-env
- coveralls
- pycodestyle
- flake8
- numpy>=1.12
- pandas>=0.19
- scipy
@@ -25,9 +25,9 @@ dependencies:
- bottleneck
- zarr
- pseudonetcdf>=3.0.1
- cfgrib>=0.9.2
- lxml
- eccodes
- pydap
- pip:
- cfgrib>=0.9.2
- mypy==0.650
- numbagg
20 changes: 20 additions & 0 deletions conftest.py
@@ -1,9 +1,29 @@
"""Configuration for pytest."""

import pytest


def pytest_addoption(parser):
    """Add command-line flags for pytest."""
    parser.addoption("--run-flaky", action="store_true",
                     help="runs flaky tests")
    parser.addoption("--run-network-tests", action="store_true",
                     help="runs tests requiring a network connection")


def pytest_collection_modifyitems(config, items):

    if not config.getoption("--run-flaky"):
        skip_flaky = pytest.mark.skip(
            reason="set --run-flaky option to run flaky tests")
        for item in items:
            if "flaky" in item.keywords:
                item.add_marker(skip_flaky)

    if not config.getoption("--run-network-tests"):
        skip_network = pytest.mark.skip(
            reason="set --run-network-tests option to run tests requiring an "
                   "internet connection")
        for item in items:
            if "network" in item.keywords:
                item.add_marker(skip_network)
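The collection hook added above can be sketched outside of pytest. ``FakeConfig`` and ``FakeItem`` below are hypothetical stand-ins for pytest's ``Config`` and ``Item`` objects, used only to illustrate the skip logic:

```python
# Minimal sketch of the collection hook's behavior. FakeConfig and
# FakeItem are hypothetical stand-ins for pytest internals.

class FakeConfig:
    def __init__(self, flags=()):
        self._flags = set(flags)

    def getoption(self, name):
        return name in self._flags


class FakeItem:
    def __init__(self, keywords=()):
        self.keywords = set(keywords)
        self.markers = []

    def add_marker(self, marker):
        self.markers.append(marker)


def modifyitems(config, items):
    # Mirrors pytest_collection_modifyitems: without --run-flaky, every
    # item carrying the "flaky" keyword gets a skip marker.
    if not config.getoption("--run-flaky"):
        for item in items:
            if "flaky" in item.keywords:
                item.add_marker("skip: set --run-flaky option to run flaky tests")


flaky, stable = FakeItem({"flaky"}), FakeItem()
modifyitems(FakeConfig(), [flaky, stable])
print(len(flaky.markers), len(stable.markers))  # the flaky item is marked, the stable one is not
```

Passing ``--run-flaky`` (i.e. ``FakeConfig({"--run-flaky"})``) leaves all items unmarked, matching how the real hook only skips tests when the flag is absent.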
1 change: 1 addition & 0 deletions doc/api-hidden.rst
@@ -153,3 +153,4 @@

CFTimeIndex.shift
CFTimeIndex.to_datetimeindex
CFTimeIndex.strftime
18 changes: 17 additions & 1 deletion doc/api.rst
@@ -148,6 +148,7 @@ Computation
Dataset.groupby
Dataset.groupby_bins
Dataset.rolling
Dataset.rolling_exp
Dataset.coarsen
Dataset.resample
Dataset.diff
@@ -189,6 +190,7 @@ Computation
:py:attr:`~core.groupby.DatasetGroupBy.last`
:py:attr:`~core.groupby.DatasetGroupBy.fillna`
:py:attr:`~core.groupby.DatasetGroupBy.where`
:py:attr:`~core.groupby.DatasetGroupBy.quantile`

Reshaping and reorganizing
--------------------------
@@ -315,6 +317,7 @@ Computation
DataArray.groupby
DataArray.groupby_bins
DataArray.rolling
DataArray.rolling_exp
DataArray.coarsen
DataArray.dt
DataArray.resample
@@ -324,6 +327,7 @@ Computation
DataArray.quantile
DataArray.differentiate
DataArray.integrate
DataArray.str

**Aggregation**:
:py:attr:`~DataArray.all`
@@ -359,7 +363,7 @@ Computation
:py:attr:`~core.groupby.DataArrayGroupBy.last`
:py:attr:`~core.groupby.DataArrayGroupBy.fillna`
:py:attr:`~core.groupby.DataArrayGroupBy.where`

:py:attr:`~core.groupby.DataArrayGroupBy.quantile`

Reshaping and reorganizing
--------------------------
@@ -460,6 +464,7 @@ Dataset methods
:toctree: generated/

open_dataset
load_dataset
open_mfdataset
open_rasterio
open_zarr
@@ -487,6 +492,7 @@ DataArray methods
:toctree: generated/

open_dataarray
load_dataarray
DataArray.to_dataset
DataArray.to_netcdf
DataArray.to_pandas
@@ -532,6 +538,7 @@ Rolling objects
core.rolling.DatasetRolling
core.rolling.DatasetRolling.construct
core.rolling.DatasetRolling.reduce
core.rolling_exp.RollingExp

Resample objects
================
@@ -555,6 +562,15 @@ Resample objects also implement the GroupBy interface
core.resample.DatasetResample.nearest
core.resample.DatasetResample.pad

Accessors
=========

.. autosummary::
:toctree: generated/

core.accessor_dt.DatetimeAccessor
core.accessor_str.StringAccessor

Custom Indexes
==============
.. autosummary::
49 changes: 43 additions & 6 deletions doc/computation.rst
@@ -45,6 +45,12 @@ Use :py:func:`~xarray.where` to conditionally switch between values:
xr.where(arr > 0, 'positive', 'negative')
Use `@` to perform matrix multiplication:

.. ipython:: python
arr @ arr
Data arrays also implement many :py:class:`numpy.ndarray` methods:

.. ipython:: python
@@ -143,20 +149,35 @@ name of the dimension as a key (e.g. ``y``) and the window size as the value
arr.rolling(y=3)
The label position and minimum number of periods in the rolling window are
controlled by the ``center`` and ``min_periods`` arguments:
Aggregation and summary methods can be applied directly to the ``Rolling``
object:

.. ipython:: python
arr.rolling(y=3, min_periods=2, center=True)
r = arr.rolling(y=3)
r.reduce(np.std)
r.mean()
Aggregation and summary methods can be applied directly to the ``Rolling`` object:
Aggregation results are assigned the coordinate at the end of each window by
default, but can be centered by passing ``center=True`` when constructing the
``Rolling`` object:

.. ipython:: python
r = arr.rolling(y=3)
r = arr.rolling(y=3, center=True)
r.mean()
As can be seen above, aggregations of windows which overlap the border of the
array produce ``nan``s. Setting ``min_periods`` in the call to ``rolling``
changes the minimum number of observations within the window required to have
a value when aggregating:

.. ipython:: python
r = arr.rolling(y=3, min_periods=2)
r.mean()
r = arr.rolling(y=3, center=True, min_periods=2)
r.mean()
r.reduce(np.std)
Note that rolling window aggregations are faster when bottleneck_ is installed.
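The ``center`` and ``min_periods`` semantics described above can be illustrated with a plain-Python sketch. ``rolling_mean`` here is a hypothetical helper for exposition, not xarray's implementation:

```python
import math

def rolling_mean(values, window, min_periods=None, center=False):
    # Each output position aggregates the `window` values ending at (or,
    # with center=True, surrounding) that position; windows holding fewer
    # than min_periods observations yield nan, as in the examples above.
    if min_periods is None:
        min_periods = window
    n = len(values)
    offset = window // 2 if center else 0
    out = []
    for i in range(n):
        end = i + offset + 1
        start = end - window
        win = values[max(start, 0):min(end, n)]
        out.append(sum(win) / len(win) if len(win) >= min_periods else math.nan)
    return out

print(rolling_mean([1, 2, 3, 4], window=3))  # [nan, nan, 2.0, 3.0]
```

With ``center=True, min_periods=2`` the same input yields ``[1.5, 2.0, 3.0, 3.5]``: the labels move to the window centers and the truncated edge windows still produce values because they contain at least two observations.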

@@ -169,6 +190,22 @@ We can also manually iterate through ``Rolling`` objects:
for label, arr_window in r:
# arr_window is a view of x
.. _comput.rolling_exp:

While ``rolling`` provides a simple moving average, ``DataArray`` also supports
an exponential moving average with :py:meth:`~xarray.DataArray.rolling_exp`.
This is similar to pandas' ``ewm`` method; numbagg_ is required.

.. _numbagg: https://github.com/shoyer/numbagg

.. code:: python
arr.rolling_exp(y=3).mean()
The ``rolling_exp`` method takes a ``window_type`` kwarg, which can be ``'alpha'``,
``'com'`` (for ``center-of-mass``), ``'span'``, or ``'halflife'``. The default is
``'span'``.
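As a rough sketch of what span weighting means, the recursive exponential moving average below corresponds to pandas' ``adjust=False`` form; ``ewm_mean`` is an illustrative helper, not xarray's or numbagg's implementation:

```python
def ewm_mean(values, span):
    # Recursive exponential moving average: with window_type='span',
    # the smoothing factor is alpha = 2 / (span + 1), so span=3 gives
    # alpha = 0.5 -- each step keeps half the previous average.
    alpha = 2.0 / (span + 1.0)
    out = [float(values[0])]
    for v in values[1:]:
        out.append(alpha * v + (1.0 - alpha) * out[-1])
    return out

print(ewm_mean([0.0, 1.0, 1.0], span=3))  # [0.0, 0.5, 0.75]
```

Larger spans give smaller ``alpha`` and therefore smoother, slower-reacting averages, which is the trade-off ``window_type``/``span`` exposes.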

Finally, the rolling object has a ``construct`` method which returns a
view of the original ``DataArray`` with the windowed dimension in
the last position.
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -13,11 +13,11 @@
# serve to show the default.
from __future__ import absolute_import, division, print_function

from contextlib import suppress
import datetime
import os
import subprocess
import sys
from contextlib import suppress

import xarray

8 changes: 3 additions & 5 deletions doc/contributing.rst
@@ -351,20 +351,18 @@ the more common ``PEP8`` issues:
- passing arguments should have spaces after commas, e.g. ``foo(arg1, arg2, kw1='bar')``

:ref:`Continuous Integration <contributing.ci>` will run
the `pycodestyle <http://pypi.python.org/pypi/pycodestyle>`_ tool
the `flake8 <http://flake8.pycqa.org/en/latest/>`_ tool
and report any stylistic errors in your code. Therefore, it is helpful before
submitting code to run the check yourself::
submitting code to run the check yourself:

pycodestyle xarray
flake8

Other recommended but optional tools for checking code quality (not currently
enforced in CI):

- `mypy <http://mypy-lang.org/>`_ performs static type checking, which can
make it easier to catch bugs. Please run ``mypy xarray`` if you annotate any
code with `type hints <https://docs.python.org/3/library/typing.html>`_.
- `flake8 <http://pypi.python.org/pypi/flake8>`_ includes a few more automated
checks than those enforced by pycodestyle.
- `isort <https://github.com/timothycrosley/isort>`_ will highlight
incorrectly sorted imports. ``isort -y`` will automatically fix them. See
also `flake8-isort <https://github.com/gforcada/flake8-isort>`_.
2 changes: 1 addition & 1 deletion doc/examples/_code/weather_data_setup.py
@@ -1,6 +1,6 @@
import numpy as np
import pandas as pd
import seaborn as sns # pandas aware plotting library
import seaborn as sns # noqa, pandas aware plotting library

import xarray as xr

2 changes: 2 additions & 0 deletions doc/installing.rst
@@ -45,6 +45,8 @@ For accelerating xarray
- `bottleneck <https://github.com/kwgoodman/bottleneck>`__: speeds up
NaN-skipping and rolling window aggregations by a large factor
(1.1 or later)
- `numbagg <https://github.com/shoyer/numbagg>`_: for exponential rolling
window operations

For parallel computing
~~~~~~~~~~~~~~~~~~~~~~
