Use more descriptive link texts #7625

Merged · 5 commits · Mar 16, 2023
17 changes: 8 additions & 9 deletions doc/contributing.rst
@@ -35,8 +35,8 @@ Bug reports and enhancement requests

Bug reports are an important part of making *xarray* more stable. Having a complete bug
report will allow others to reproduce the bug and provide insight into fixing. See
`this stackoverflow article <https://stackoverflow.com/help/mcve>`_ for tips on
writing a good bug report.
this `stackoverflow article for tips on
writing a good bug report <https://stackoverflow.com/help/mcve>`_.

Trying out the bug-producing code on the *main* branch is often a worthwhile exercise
to confirm that the bug still exists. It is also worth searching existing bug reports and
@@ -102,7 +102,7 @@ Some great resources for learning Git:
Getting started with Git
------------------------

`GitHub has instructions <https://help.github.com/set-up-git-redirect>`__ for installing git,
`GitHub has instructions for setting up Git <https://help.github.com/set-up-git-redirect>`__, including installing git,
setting up your SSH key, and configuring git. All these steps need to be completed before
you can work seamlessly between your local repository and GitHub.

@@ -238,7 +238,7 @@ To return to your root environment::

conda deactivate

See the full conda docs `here <http://conda.pydata.org/docs>`__.
See the full `conda docs here <http://conda.pydata.org/docs>`__.

.. _contributing.documentation:

@@ -277,7 +277,7 @@ Some other important things to know about the docs:

- The docstrings follow the **NumPy Docstring Standard**, which is used widely
in the Scientific Python community. This standard specifies the format of
the different sections of the docstring. See `this document
the different sections of the docstring. Refer to the `documentation for the NumPy docstring format
<https://numpydoc.readthedocs.io/en/latest/format.html#docstring-standard>`_
for a detailed explanation, or look at some of the existing functions to
extend it in a similar manner.
@@ -732,8 +732,8 @@ or, to use a specific Python interpreter,::
This will display stderr from the benchmarks, and use your local
``python`` that comes from your ``$PATH``.

Information on how to write a benchmark and how to use asv can be found in the
`asv documentation <https://asv.readthedocs.io/en/latest/writing_benchmarks.html>`_.
Learn `how to write a benchmark and how to use asv from the documentation <https://asv.readthedocs.io/en/latest/writing_benchmarks.html>`_.
Collaborator comment: "documentation"
..
TODO: uncomment once we have a working setup
@@ -752,8 +752,7 @@ GitHub issue number when adding your entry (using ``:issue:`1234```, where ``123
issue/pull request number).

If your code is an enhancement, it is most likely necessary to add usage
examples to the existing documentation. This can be done following the section
regarding documentation :ref:`above <contributing.documentation>`.
examples to the existing documentation. This can be done by following the :ref:`guidelines for contributing to the documentation <contributing.documentation>`.

.. _contributing.changes:

4 changes: 2 additions & 2 deletions doc/developers-meeting.rst
@@ -5,9 +5,9 @@ Xarray developers meet bi-weekly every other Wednesday.

The meeting occurs on `Zoom <https://us02web.zoom.us/j/88251613296?pwd=azZsSkU1UWJZTVFKNnhIUVdZcENUZz09>`__.

Notes for the meeting are kept `here <https://hackmd.io/@U4W-olO3TX-hc-cvbjNe4A/xarray-dev-meeting/edit>`__.
Find the `notes for the meeting here <https://hackmd.io/@U4W-olO3TX-hc-cvbjNe4A/xarray-dev-meeting/edit>`__.

There is a :issue:`GitHub issue <4001>` for changes to the meeting.
There is a :issue:`GitHub issue for changes to the meeting <4001>`.

You can subscribe to this calendar to be notified of changes:

2 changes: 1 addition & 1 deletion doc/user-guide/computation.rst
@@ -804,7 +804,7 @@ to set ``axis=-1``. As an example, here is how we would wrap
Because ``apply_ufunc`` follows a standard convention for ufuncs, it plays
nicely with tools for building vectorized functions, like
:py:func:`numpy.broadcast_arrays` and :py:class:`numpy.vectorize`. For high performance
needs, consider using Numba's :doc:`vectorize and guvectorize <numba:user/vectorize>`.
needs, consider using :doc:`Numba's vectorize and guvectorize <numba:user/vectorize>`.
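As a minimal sketch of the convention this paragraph describes (the function and data here are illustrative, not from the xarray docs), :py:func:`numpy.vectorize` turns a scalar function into a callable that broadcasts over arrays, which is exactly the kind of function ``apply_ufunc`` can wrap:

```python
import numpy as np

def scalar_ratio(a, b):
    # plain scalar function with no array handling of its own
    return a / b if b else 0.0

ratio = np.vectorize(scalar_ratio)
out = ratio(np.array([1.0, 2.0, 3.0]), np.array([2.0, 0.0, 4.0]))
# element-wise result: 0.5, 0.0, 0.75
```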

In addition to wrapping functions, ``apply_ufunc`` can automatically parallelize
many functions when using dask by setting ``dask='parallelized'``. See
12 changes: 6 additions & 6 deletions doc/user-guide/dask.rst
@@ -39,7 +39,7 @@ The actual computation is controlled by a multi-processing or thread pool,
which allows Dask to take full advantage of multiple processors available on
most modern computers.

For more details on Dask, read `its documentation <https://docs.dask.org/>`__.
For more details, read the `Dask documentation <https://docs.dask.org/>`__.
Note that xarray only makes use of ``dask.array`` and ``dask.delayed``.

.. _dask.io:
@@ -234,7 +234,7 @@ disk.
.. note::

For more on the differences between :py:meth:`~xarray.Dataset.persist` and
:py:meth:`~xarray.Dataset.compute` see this `Stack Overflow answer <https://stackoverflow.com/questions/41806850/dask-difference-between-client-persist-and-client-compute>`_ and the `Dask documentation <https://distributed.dask.org/en/latest/manage-computation.html#dask-collections-to-futures>`_.
:py:meth:`~xarray.Dataset.compute`, see this `Stack Overflow answer on the differences between client persist and client compute <https://stackoverflow.com/questions/41806850/dask-difference-between-client-persist-and-client-compute>`_ and the `Dask documentation <https://distributed.dask.org/en/latest/manage-computation.html#dask-collections-to-futures>`_.

For performance you may wish to consider chunk sizes. The correct choice of
chunk size depends both on your data and on the operations you want to perform.
@@ -549,7 +549,7 @@ larger chunksizes.

.. tip::

Check out the dask documentation on `chunks <https://docs.dask.org/en/latest/array-chunks.html>`_.
Check out the `dask documentation on chunks <https://docs.dask.org/en/latest/array-chunks.html>`_.
Collaborator comment: "Not a native speaker but 'on' sounds better to me?"
Optimization Tips
@@ -562,7 +562,7 @@ through experience:
1. Do your spatial and temporal indexing (e.g. ``.sel()`` or ``.isel()``) early in the pipeline, especially before calling ``resample()`` or ``groupby()``. Grouping and resampling triggers some computation on all the blocks, which in theory should commute with indexing, but this optimization hasn't been implemented in Dask yet. (See `Dask issue #746 <https://github.com/dask/dask/issues/746>`_).

2. More generally, ``groupby()`` is a costly operation and will perform a lot better if the ``flox`` package is installed.
See the `flox documentation <flox.readthedocs.io/>`_ for more. By default Xarray will use ``flox`` if installed.
See the `flox documentation <https://flox.readthedocs.io>`_ for more. By default Xarray will use ``flox`` if installed.

3. Save intermediate results to disk as a netCDF files (using ``to_netcdf()``) and then load them again with ``open_dataset()`` for further computations. For example, if subtracting temporal mean from a dataset, save the temporal mean to disk before subtracting. Again, in theory, Dask should be able to do the computation in a streaming fashion, but in practice this is a fail case for the Dask scheduler, because it tries to keep every chunk of an array that it computes in memory. (See `Dask issue #874 <https://github.com/dask/dask/issues/874>`_)

@@ -572,6 +572,6 @@ through experience:

6. Using the h5netcdf package by passing ``engine='h5netcdf'`` to :py:meth:`~xarray.open_mfdataset` can be quicker than the default ``engine='netcdf4'`` that uses the netCDF4 package.

7. Some dask-specific tips may be found `here <https://docs.dask.org/en/latest/array-best-practices.html>`_.
7. Find `best practices specific to Dask arrays in the documentation <https://docs.dask.org/en/latest/array-best-practices.html>`_.

8. The dask `diagnostics <https://docs.dask.org/en/latest/understanding-performance.html>`_ can be useful in identifying performance bottlenecks.
8. The `dask diagnostics <https://docs.dask.org/en/latest/understanding-performance.html>`_ can be useful in identifying performance bottlenecks.
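Tip 1 above can be sketched without dask (the variable names and data are illustrative; with dask-backed arrays the same pattern avoids computing blocks that would later be discarded):

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=4, freq="D")
da = xr.DataArray(np.arange(4.0), dims="time", coords={"time": times})

# select the region of interest first ...
sub = da.sel(time=slice("2000-01-01", "2000-01-02"))
# ... then group or resample only the subset
daily_mean = sub.groupby("time.day").mean()
```

The same selection placed *after* the ``groupby()`` would, with a dask backend, trigger computation on every block first.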
9 changes: 3 additions & 6 deletions doc/user-guide/groupby.rst
@@ -25,12 +25,9 @@ the same pipeline.
.. tip::

To substantially improve the performance of GroupBy operations, particularly
with dask install the `flox <https://flox.readthedocs.io>`_ package. flox also
`extends <https://flox.readthedocs.io/en/latest/xarray.html>`_
Xarray's in-built GroupBy capabilities by allowing grouping by multiple variables,
and lazy grouping by dask arrays. Xarray will automatically use flox by default
if it is installed.

with dask, `install the flox package <https://flox.readthedocs.io>`_. flox
`extends Xarray's in-built GroupBy capabilities <https://flox.readthedocs.io/en/latest/xarray.html>`_
by allowing grouping by multiple variables, and lazy grouping by dask arrays. If installed, Xarray will automatically use flox by default.
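A minimal GroupBy sketch (data invented for illustration); when flox is installed, the same call is dispatched to it automatically with no code change:

```python
import xarray as xr

da = xr.DataArray(
    [1.0, 2.0, 3.0, 4.0],
    dims="x",
    coords={"letters": ("x", ["a", "a", "b", "b"])},
)
# one mean per group: a -> 1.5, b -> 3.5
means = da.groupby("letters").mean()
```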

Split
~~~~~
12 changes: 6 additions & 6 deletions doc/user-guide/indexing.rst
@@ -95,7 +95,7 @@ In this example, the selected is a subpart of the array
in the range '2000-01-01':'2000-01-02' along the first coordinate `time`
and with 'IA' value from the second coordinate `space`.

You can perform any of the label indexing operations `supported by pandas`__,
You can perform any of the `label indexing operations supported by pandas`__,
including indexing with individual, slices and lists/arrays of labels, as well as
indexing with boolean arrays. Like pandas, label based indexing in xarray is
*inclusive* of both the start and stop bounds.
@@ -140,14 +140,14 @@ use them explicitly to slice data. There are two ways to do this:

The arguments to these methods can be any objects that could index the array
along the dimension given by the keyword, e.g., labels for an individual value,
Python :py:class:`slice` objects or 1-dimensional arrays.
Python :py:class:`slice` objects or 1-dimensional arrays.
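A short sketch using the same ``time``/``space`` coordinates as the earlier example (the values are invented):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=("time", "space"),
    coords={"time": ["2000-01-01", "2000-01-02"], "space": ["IA", "IL", "IN"]},
)

by_label = da.sel(space="IA")          # label lookup
by_position = da.isel(time=0)          # integer position
by_slice = da.isel(space=slice(0, 2))  # a Python slice object
```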


.. note::

We would love to be able to do indexing with labeled dimension names inside
brackets, but unfortunately, Python `does yet not support`__ indexing with
keyword arguments like ``da[space=0]``
brackets, but unfortunately, `Python does not yet support indexing with
keyword arguments`__ like ``da[space=0]``

__ https://legacy.python.org/dev/peps/pep-0472/

@@ -372,12 +372,12 @@ indexers' dimension:
ind = xr.DataArray([[0, 1], [0, 1]], dims=["a", "b"])
da[ind]

Similar to how NumPy's `advanced indexing`_ works, vectorized
Similar to how `NumPy's advanced indexing`_ works, vectorized
indexing for xarray is based on our
:ref:`broadcasting rules <compute.broadcasting>`.
See :ref:`indexing.rules` for the complete specification.

.. _advanced indexing: https://numpy.org/doc/stable/reference/arrays.indexing.html
.. _NumPy's advanced indexing: https://numpy.org/doc/stable/reference/arrays.indexing.html
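The NumPy behavior the broadcasting analogy builds on can be sketched as (array values illustrative):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
rows = np.array([[0, 1], [0, 1]])
cols = np.array([[0, 0], [2, 2]])

# integer-array (advanced) indexing: the index arrays are broadcast
# against each other, and the result takes their common shape
picked = x[rows, cols]
```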

Vectorized indexing also works with ``isel``, ``loc``, and ``sel``:

16 changes: 8 additions & 8 deletions doc/user-guide/io.rst
@@ -684,9 +684,9 @@ instance and pass this, as follows:
Zarr Compressors and Filters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are many different options for compression and filtering possible with
zarr. These are described in the
`zarr documentation <https://zarr.readthedocs.io/en/stable/tutorial.html#compressors>`_.
There are many different `options for compression and filtering available with
zarr <https://zarr.readthedocs.io/en/stable/tutorial.html#compressors>`_.

These options can be passed to the ``to_zarr`` method as variable encoding.
For example:

@@ -720,7 +720,7 @@ As of xarray version 0.18, xarray by default uses a feature called
*consolidated metadata*, storing all metadata for the entire dataset with a
single key (by default called ``.zmetadata``). This typically drastically speeds
up opening the store. (For more information on this feature, consult the
`zarr docs <https://zarr.readthedocs.io/en/latest/tutorial.html#consolidating-metadata>`_.)
`zarr docs on consolidating metadata <https://zarr.readthedocs.io/en/latest/tutorial.html#consolidating-metadata>`_.)

By default, xarray writes consolidated metadata and attempts to read stores
with consolidated metadata, falling back to use non-consolidated metadata for
@@ -1165,7 +1165,7 @@ rasterio is installed. Here is an example of how to use

Deprecated in favor of rioxarray.
For information about transitioning, see:
https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html
`rioxarray getting started docs <https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html>`_

.. ipython::
:verbatim:
@@ -1277,7 +1277,7 @@ Formats supported by PyNIO

.. warning::

The PyNIO backend is deprecated_. PyNIO is no longer maintained_. See
The `PyNIO backend is deprecated`_. `PyNIO is no longer maintained`_.

Xarray can also read GRIB, HDF4 and other file formats supported by PyNIO_,
if PyNIO is installed. To use PyNIO to read such files, supply
@@ -1288,8 +1288,8 @@ We recommend installing PyNIO via conda::
conda install -c conda-forge pynio

.. _PyNIO: https://www.pyngl.ucar.edu/Nio.shtml
.. _deprecated: https://github.com/pydata/xarray/issues/4491
.. _maintained: https://github.com/NCAR/pynio/issues/53
.. _PyNIO backend is deprecated: https://github.com/pydata/xarray/issues/4491
.. _PyNIO is no longer maintained: https://github.com/NCAR/pynio/issues/53

.. _io.PseudoNetCDF:

6 changes: 3 additions & 3 deletions doc/user-guide/pandas.rst
@@ -8,7 +8,7 @@ Working with pandas
One of the most important features of xarray is the ability to convert to and
from :py:mod:`pandas` objects to interact with the rest of the PyData
ecosystem. For example, for plotting labeled data, we highly recommend
using the visualization `built in to pandas itself`__ or provided by the pandas
using the `visualization built in to pandas itself`__ or provided by the pandas
aware libraries such as `Seaborn`__.

__ https://pandas.pydata.org/pandas-docs/stable/visualization.html
@@ -168,7 +168,7 @@ multi-dimensional arrays to xarray.
Xarray has most of ``Panel``'s features, a more explicit API (particularly around
indexing), and the ability to scale to >3 dimensions with the same interface.

As discussed :ref:`elsewhere <data structures>` in the docs, there are two primary data structures in
As discussed in the :ref:`data structures section of the docs <data structures>`, there are two primary data structures in
xarray: ``DataArray`` and ``Dataset``. You can imagine a ``DataArray`` as a
n-dimensional pandas ``Series`` (i.e. a single typed array), and a ``Dataset``
as the ``DataFrame`` equivalent (i.e. a dict of aligned ``DataArray`` objects).
@@ -240,6 +240,6 @@ While the xarray docs are relatively complete, a few items stand out for Panel u
the Person dimension of a Dataset of Person x Score x Time.

While xarray may take some getting used to, it's worth it! If anything is unclear,
please post an issue on `GitHub <https://github.com/pydata/xarray>`__ or
please `post an issue on GitHub <https://github.com/pydata/xarray>`__ or
`StackOverflow <https://stackoverflow.com/questions/tagged/python-xarray>`__,
and we'll endeavor to respond to the specific case or improve the general docs.
4 changes: 2 additions & 2 deletions doc/user-guide/time-series.rst
@@ -108,10 +108,10 @@ For more details, read the pandas documentation and the section on :ref:`datetim
Datetime components
-------------------

Similar `to pandas`_, the components of datetime objects contained in a
Similar to `pandas accessors`_, the components of datetime objects contained in a
given ``DataArray`` can be quickly computed using a special ``.dt`` accessor.

.. _to pandas: https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dt-accessors
.. _pandas accessors: https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dt-accessors
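The pandas accessor linked above works like this; xarray mirrors the same ``.dt`` pattern on ``DataArray`` objects (dates here are illustrative):

```python
import pandas as pd

s = pd.Series(pd.date_range("2000-01-01", periods=3, freq="D"))
years = s.dt.year   # 2000 for every element
days = s.dt.day     # 1, 2, 3
```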

.. ipython:: python

6 changes: 3 additions & 3 deletions doc/user-guide/weather-climate.rst
@@ -10,7 +10,7 @@ Weather and climate data

import xarray as xr

Xarray can leverage metadata that follows the `Climate and Forecast (CF) conventions`_ if present. Examples include automatic labelling of plots with descriptive names and units if proper metadata is present (see :ref:`plotting`) and support for non-standard calendars used in climate science through the ``cftime`` module (see :ref:`CFTimeIndex`). There are also a number of geosciences-focused projects that build on xarray (see :ref:`ecosystem`).
Xarray can leverage metadata that follows the `Climate and Forecast (CF) conventions`_ if present. Examples include :ref:`automatic labelling of plots <plotting>` with descriptive names and units if proper metadata is present, and support for non-standard calendars used in climate science through the ``cftime`` module (explained in the :ref:`CFTimeIndex` section). There are also a number of :ref:`geosciences-focused projects that build on xarray <ecosystem>`.

.. _Climate and Forecast (CF) conventions: https://cfconventions.org

@@ -49,10 +49,10 @@ variable with the attribute, rather than with the dimensions.
CF-compliant coordinate variables
---------------------------------

`MetPy`_ adds a ``metpy`` accessor that allows accessing coordinates with appropriate CF metadata using generic names ``x``, ``y``, ``vertical`` and ``time``. There is also a `cartopy_crs` attribute that provides projection information, parsed from the appropriate CF metadata, as a `Cartopy`_ projection object. See `their documentation`_ for more information.
`MetPy`_ adds a ``metpy`` accessor that allows accessing coordinates with appropriate CF metadata using generic names ``x``, ``y``, ``vertical`` and ``time``. There is also a `cartopy_crs` attribute that provides projection information, parsed from the appropriate CF metadata, as a `Cartopy`_ projection object. See the `metpy documentation`_ for more information.

.. _`MetPy`: https://unidata.github.io/MetPy/dev/index.html
.. _`their documentation`: https://unidata.github.io/MetPy/dev/tutorials/xarray_tutorial.html#coordinates
.. _`metpy documentation`: https://unidata.github.io/MetPy/dev/tutorials/xarray_tutorial.html#coordinates
.. _`Cartopy`: https://scitools.org.uk/cartopy/docs/latest/crs/projections.html

.. _CFTimeIndex: