diff --git a/doc/contributing.rst b/doc/contributing.rst
index a69a8017110..07938f23c9f 100644
--- a/doc/contributing.rst
+++ b/doc/contributing.rst
@@ -35,8 +35,8 @@ Bug reports and enhancement requests
 Bug reports are an important part of making *xarray* more stable. Having a complete
 bug report will allow others to reproduce the bug and provide insight into fixing. See
-`this stackoverflow article `_ for tips on
-writing a good bug report.
+this `stackoverflow article for tips on
+writing a good bug report `_.
 
 Trying out the bug-producing code on the *main* branch is often a worthwhile exercise
 to confirm that the bug still exists. It is also worth searching existing bug reports and
@@ -102,7 +102,7 @@ Some great resources for learning Git:
 Getting started with Git
 ------------------------
 
-`GitHub has instructions `__ for installing git,
+`GitHub has instructions for setting up Git `__, including installing git,
 setting up your SSH key, and configuring git. All these steps need to be completed before
 you can work seamlessly between your local repository and GitHub.
 
@@ -238,7 +238,7 @@ To return to your root environment::
 
     conda deactivate
 
-See the full conda docs `here `__.
+See the full `conda docs here `__.
 
 .. _contributing.documentation:
 
@@ -277,7 +277,7 @@ Some other important things to know about the docs:
 - The docstrings follow the **NumPy Docstring Standard**, which is used widely in
   the Scientific Python community. This standard specifies the format of
-  the different sections of the docstring. See `this document
+  the different sections of the docstring. Refer to the `documentation for the
   Numpy docstring format `_ for a detailed explanation, or look at some of the
   existing functions to extend it in a similar manner.
@@ -732,8 +732,8 @@ or, to use a specific Python interpreter,::
 
 This will display stderr from the benchmarks, and use your local ``python``
 that comes from your ``$PATH``.
-Information on how to write a benchmark and how to use asv can be found in the
-`asv documentation `_.
+Learn `how to write a benchmark and how to use asv from the documentation `_.
+
 .. TODO: uncomment once we have a working setup
@@ -752,8 +752,7 @@
 GitHub issue number when adding your entry (using ``:issue:`1234```, where ``1234`` is the
 issue/pull request number).
 
 If your code is an enhancement, it is most likely necessary to add usage
-examples to the existing documentation. This can be done following the section
-regarding documentation :ref:`above `.
+examples to the existing documentation. This can be done by following the :ref:`guidelines for contributing to the documentation `.
 
 .. _contributing.changes:
 
diff --git a/doc/developers-meeting.rst b/doc/developers-meeting.rst
index ff706b17af8..1c49a900f66 100644
--- a/doc/developers-meeting.rst
+++ b/doc/developers-meeting.rst
@@ -5,9 +5,9 @@
 Xarray developers meet bi-weekly every other Wednesday.
 
 The meeting occurs on `Zoom `__.
 
-Notes for the meeting are kept `here `__.
+Find the `notes for the meeting here `__.
 
-There is a :issue:`GitHub issue <4001>` for changes to the meeting.
+There is a :issue:`GitHub issue for changes to the meeting <4001>`.
 
 You can subscribe to this calendar to be notified of changes:
 
diff --git a/doc/user-guide/computation.rst b/doc/user-guide/computation.rst
index b9a54dce1bf..f913ea41a91 100644
--- a/doc/user-guide/computation.rst
+++ b/doc/user-guide/computation.rst
@@ -804,7 +804,7 @@ to set ``axis=-1``. As an example, here is how we would wrap
 Because ``apply_ufunc`` follows a standard convention for ufuncs, it plays
 nicely with tools for building vectorized functions, like
 :py:func:`numpy.broadcast_arrays` and :py:class:`numpy.vectorize`. For high performance
-needs, consider using Numba's :doc:`vectorize and guvectorize `.
+needs, consider using :doc:`Numba's vectorize and guvectorize `.
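Since ``numpy.vectorize`` is named in the hunk above as a tool that plays nicely with ``apply_ufunc``, here is a minimal sketch of what it does (plain NumPy; the function name and data are illustrative, not taken from the xarray docs):

```python
import numpy as np

# A scalar-only helper: valid for single values, not arrays.
# (Name and example are hypothetical, for illustration only.)
def relative_diff(a, b):
    return abs(a - b) / max(abs(a), abs(b), 1e-12)

# numpy.vectorize wraps the scalar function so it broadcasts over
# array inputs like a ufunc (a convenience wrapper, not a speedup).
vec_relative_diff = np.vectorize(relative_diff)

result = vec_relative_diff(np.array([1.0, 2.0, 4.0]), 2.0)  # 0.5, 0.0, 0.5
```

For genuinely performance-critical inner loops the docs' pointer to Numba's ``vectorize``/``guvectorize`` is the better fit, since ``numpy.vectorize`` is essentially a Python-level loop.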
 In addition to wrapping functions, ``apply_ufunc`` can automatically
 parallelize many functions when using dask by setting ``dask='parallelized'``. See
 
diff --git a/doc/user-guide/dask.rst b/doc/user-guide/dask.rst
index de8fcd84c8b..27e7449b7c3 100644
--- a/doc/user-guide/dask.rst
+++ b/doc/user-guide/dask.rst
@@ -39,7 +39,7 @@
 The actual computation is controlled by a multi-processing or thread pool,
 which allows Dask to take full advantage of multiple processors available on
 most modern computers.
 
-For more details on Dask, read `its documentation `__.
+For more details, read the `Dask documentation `__.
 Note that xarray only makes use of ``dask.array`` and ``dask.delayed``.
 
 .. _dask.io:
 
@@ -234,7 +234,7 @@ disk.
 .. note::
 
     For more on the differences between :py:meth:`~xarray.Dataset.persist` and
-    :py:meth:`~xarray.Dataset.compute` see this `Stack Overflow answer `_ and the `Dask documentation `_.
+    :py:meth:`~xarray.Dataset.compute` see this `Stack Overflow answer on the differences between client persist and client compute `_ and the `Dask documentation `_.
 
 For performance you may wish to consider chunk sizes. The correct choice of
 chunk size depends both on your data and on the operations you want to perform.
@@ -549,7 +549,7 @@ larger chunksizes.
 .. tip::
 
-   Check out the dask documentation on `chunks `_.
+   Check out the `dask documentation on chunks `_.
 
 
 Optimization Tips
 
@@ -562,7 +562,7 @@ through experience:
 
 1. Do your spatial and temporal indexing (e.g. ``.sel()`` or ``.isel()``) early in the pipeline, especially before calling ``resample()`` or ``groupby()``. Grouping and resampling triggers some computation on all the blocks, which in theory should commute with indexing, but this optimization hasn't been implemented in Dask yet. (See `Dask issue #746 `_).
 
 2. More generally, ``groupby()`` is a costly operation and will perform a lot better if the ``flox`` package is installed.
-   See the `flox documentation `_ for more.
-   By default Xarray will use ``flox`` if installed.
+   See the `flox documentation `_ for more. By default Xarray will use ``flox`` if installed.
 
 3. Save intermediate results to disk as a netCDF files (using ``to_netcdf()``) and then load them again with ``open_dataset()`` for further computations. For example, if subtracting temporal mean from a dataset, save the temporal mean to disk before subtracting. Again, in theory, Dask should be able to do the computation in a streaming fashion, but in practice this is a fail case for the Dask scheduler, because it tries to keep every chunk of an array that it computes in memory. (See `Dask issue #874 `_)
 
@@ -572,6 +572,6 @@ through experience:
 
 6. Using the h5netcdf package by passing ``engine='h5netcdf'`` to :py:meth:`~xarray.open_mfdataset` can be quicker than the default ``engine='netcdf4'`` that uses the netCDF4 package.
 
-7. Some dask-specific tips may be found `here `_.
+7. Find `best practices specific to Dask arrays in the documentation `_.
 
-8. The dask `diagnostics `_ can be useful in identifying performance bottlenecks.
+8. The `dask diagnostics `_ can be useful in identifying performance bottlenecks.
 
diff --git a/doc/user-guide/groupby.rst b/doc/user-guide/groupby.rst
index c5a071171a6..dce20dce228 100644
--- a/doc/user-guide/groupby.rst
+++ b/doc/user-guide/groupby.rst
@@ -25,12 +25,9 @@ the same pipeline.
 
 .. tip::
 
    To substantially improve the performance of GroupBy operations, particularly
-   with dask install the `flox `_ package. flox also
-   `extends `_
-   Xarray's in-built GroupBy capabilities by allowing grouping by multiple variables,
-   and lazy grouping by dask arrays. Xarray will automatically use flox by default
-   if it is installed.
-
+   with dask, `install the flox package `_. flox
+   `extends Xarray's in-built GroupBy capabilities `_
+   by allowing grouping by multiple variables, and lazy grouping by dask arrays.
+   If installed, Xarray will automatically use flox by default.
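For readers of the groupby hunk above: the split-apply-combine pattern that GroupBy (and flox) accelerates can be sketched in plain pandas (a minimal illustration with made-up data, not code from the xarray docs):

```python
import pandas as pd

# Split-apply-combine: split rows by "letters", apply a mean to
# each group, combine the per-group results into one output.
df = pd.DataFrame({
    "letters": ["a", "b", "a", "b"],
    "values": [1.0, 2.0, 3.0, 4.0],
})
means = df.groupby("letters")["values"].mean()
```

xarray's ``GroupBy`` applies the same pattern to labelled dimensions, and, per the tip above, flox speeds that pattern up, particularly for dask-backed arrays.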
 Split
 ~~~~~
 
diff --git a/doc/user-guide/indexing.rst b/doc/user-guide/indexing.rst
index e0337f24b9b..492316f898f 100644
--- a/doc/user-guide/indexing.rst
+++ b/doc/user-guide/indexing.rst
@@ -95,7 +95,7 @@
 In this example, the selected is a subpart of the array
 in the range '2000-01-01':'2000-01-02' along the first coordinate `time`
 and with 'IA' value from the second coordinate `space`.
 
-You can perform any of the label indexing operations `supported by pandas`__,
+You can perform any of the `label indexing operations supported by pandas`__,
 including indexing with individual, slices and lists/arrays of labels, as well as
 indexing with boolean arrays. Like pandas, label based indexing in xarray is
 *inclusive* of both the start and stop bounds.
 
@@ -140,14 +140,14 @@ use them explicitly to slice data. There are two ways to do this:
 
 The arguments to these methods can be any objects that could index the array
 along the dimension given by the keyword, e.g., labels for an individual value,
 Python :py:class:`slice` objects or 1-dimensional arrays.
 
 .. note::
 
   We would love to be able to do indexing with labeled dimension names inside
-  brackets, but unfortunately, Python `does yet not support`__ indexing with
-  keyword arguments like ``da[space=0]``
+  brackets, but unfortunately, `Python does not yet support indexing with
+  keyword arguments`__ like ``da[space=0]``
 
 __ https://legacy.python.org/dev/peps/pep-0472/
 
@@ -372,12 +372,12 @@ indexers' dimension:
 
     ind = xr.DataArray([[0, 1], [0, 1]], dims=["a", "b"])
     da[ind]
 
-Similar to how NumPy's `advanced indexing`_ works, vectorized
+Similar to how `NumPy's advanced indexing`_ works, vectorized
 indexing for xarray is based on our
 :ref:`broadcasting rules `.
 See :ref:`indexing.rules` for the complete specification.
 
-.. _advanced indexing: https://numpy.org/doc/stable/reference/arrays.indexing.html
+.. _NumPy's advanced indexing: https://numpy.org/doc/stable/reference/arrays.indexing.html
 
 Vectorized indexing also works with ``isel``, ``loc``, and ``sel``:
 
diff --git a/doc/user-guide/io.rst b/doc/user-guide/io.rst
index 65405a85db3..5610e7829f2 100644
--- a/doc/user-guide/io.rst
+++ b/doc/user-guide/io.rst
@@ -684,9 +684,9 @@ instance and pass this, as follows:
 
 Zarr Compressors and Filters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-There are many different options for compression and filtering possible with
-zarr. These are described in the
-`zarr documentation `_.
+There are many different `options for compression and filtering possible with
+zarr `_.
+
 These options can be passed to the ``to_zarr`` method as variable encoding.
 For example:
 
@@ -720,7 +720,7 @@ As of xarray version 0.18, xarray by default uses a feature called
 *consolidated metadata*, storing all metadata for the entire dataset with a
 single key (by default called ``.zmetadata``). This typically drastically speeds
 up opening the store. (For more information on this feature, consult the
-`zarr docs `_.)
+`zarr docs on consolidating metadata `_.)
 
 By default, xarray writes consolidated metadata and attempts to read stores
 with consolidated metadata, falling back to use non-consolidated metadata for
 
@@ -1165,7 +1165,7 @@ rasterio is installed. Here is an example of how to use
 .. warning::
 
     Deprecated in favor of rioxarray.
     For information about transitioning, see:
-    https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html
+    `rioxarray getting started docs <https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html>`_
 
 .. ipython::
     :verbatim:
 
@@ -1277,7 +1277,7 @@ Formats supported by PyNIO
 
 .. warning::
 
-    The PyNIO backend is deprecated_. PyNIO is no longer maintained_.
+    The `PyNIO backend is deprecated`_. `PyNIO is no longer maintained`_.
 
 Xarray can also read GRIB, HDF4 and other file formats supported by PyNIO_, if
 PyNIO is installed.
 To use PyNIO to read such files, supply
 
@@ -1288,8 +1288,8 @@ We recommend installing PyNIO via conda::
 
     conda install -c conda-forge pynio
 
 .. _PyNIO: https://www.pyngl.ucar.edu/Nio.shtml
-.. _deprecated: https://github.com/pydata/xarray/issues/4491
-.. _maintained: https://github.com/NCAR/pynio/issues/53
+.. _PyNIO backend is deprecated: https://github.com/pydata/xarray/issues/4491
+.. _PyNIO is no longer maintained: https://github.com/NCAR/pynio/issues/53
 
 .. _io.PseudoNetCDF:
 
diff --git a/doc/user-guide/pandas.rst b/doc/user-guide/pandas.rst
index a376b0a5cb8..76349fcd371 100644
--- a/doc/user-guide/pandas.rst
+++ b/doc/user-guide/pandas.rst
@@ -8,7 +8,7 @@ Working with pandas
 One of the most important features of xarray is the ability to convert to and
 from :py:mod:`pandas` objects to interact with the rest of the PyData
 ecosystem. For example, for plotting labeled data, we highly recommend
-using the visualization `built in to pandas itself`__ or provided by the pandas
+using the `visualization built in to pandas itself`__ or provided by the pandas
 aware libraries such as `Seaborn`__.
 
 __ https://pandas.pydata.org/pandas-docs/stable/visualization.html
 
@@ -168,7 +168,7 @@ multi-dimensional arrays to xarray. Xarray has most of ``Panel``'s features, a
 more explicit API (particularly around indexing), and the ability to scale to
 >3 dimensions with the same interface.
 
-As discussed :ref:`elsewhere ` in the docs, there are two primary data structures in
+As discussed in the :ref:`data structures section of the docs `, there are two primary data structures in
 xarray: ``DataArray`` and ``Dataset``. You can imagine a ``DataArray`` as a
 n-dimensional pandas ``Series`` (i.e. a single typed array), and a ``Dataset``
 as the ``DataFrame`` equivalent (i.e. a dict of aligned ``DataArray`` objects).
 
@@ -240,6 +240,6 @@ While the xarray docs are relatively complete, a few items stand out for Panel users:
 
    the Person dimension of a Dataset of Person x Score x Time.
 While xarray may take some getting used to, it's worth it! If anything is unclear,
-please post an issue on `GitHub `__ or
+please `post an issue on GitHub `__ or
 `StackOverflow `__,
 and we'll endeavor to respond to the specific case or improve the general docs.
 
diff --git a/doc/user-guide/time-series.rst b/doc/user-guide/time-series.rst
index 90ec4cb32be..d2e15adeba7 100644
--- a/doc/user-guide/time-series.rst
+++ b/doc/user-guide/time-series.rst
@@ -108,10 +108,10 @@ For more details, read the pandas documentation and the section on :ref:`datetim
 
 Datetime components
 -------------------
 
-Similar `to pandas`_, the components of datetime objects contained in a
+Similar to `pandas accessors`_, the components of datetime objects contained in a
 given ``DataArray`` can be quickly computed using a special ``.dt`` accessor.
 
-.. _to pandas: https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dt-accessors
+.. _pandas accessors: https://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dt-accessors
 
 .. ipython:: python
 
diff --git a/doc/user-guide/weather-climate.rst b/doc/user-guide/weather-climate.rst
index 793da9d1bdd..30876eb36bc 100644
--- a/doc/user-guide/weather-climate.rst
+++ b/doc/user-guide/weather-climate.rst
@@ -10,7 +10,7 @@ Weather and climate data
 
     import xarray as xr
 
-Xarray can leverage metadata that follows the `Climate and Forecast (CF) conventions`_ if present. Examples include automatic labelling of plots with descriptive names and units if proper metadata is present (see :ref:`plotting`) and support for non-standard calendars used in climate science through the ``cftime`` module (see :ref:`CFTimeIndex`). There are also a number of geosciences-focused projects that build on xarray (see :ref:`ecosystem`).
+Xarray can leverage metadata that follows the `Climate and Forecast (CF) conventions`_ if present. Examples include :ref:`automatic labelling of plots` with descriptive names and units if proper metadata is present, and support for non-standard calendars used in climate science through the ``cftime`` module (explained in the :ref:`CFTimeIndex` section). There are also a number of :ref:`geosciences-focused projects that build on xarray`.
 
 .. _Climate and Forecast (CF) conventions: https://cfconventions.org
 
@@ -49,10 +49,10 @@ variable with the attribute, rather than with the dimensions.
 
 CF-compliant coordinate variables
 ---------------------------------
 
-`MetPy`_ adds a ``metpy`` accessor that allows accessing coordinates with appropriate CF metadata using generic names ``x``, ``y``, ``vertical`` and ``time``. There is also a `cartopy_crs` attribute that provides projection information, parsed from the appropriate CF metadata, as a `Cartopy`_ projection object. See `their documentation`_ for more information.
+`MetPy`_ adds a ``metpy`` accessor that allows accessing coordinates with appropriate CF metadata using generic names ``x``, ``y``, ``vertical`` and ``time``. There is also a `cartopy_crs` attribute that provides projection information, parsed from the appropriate CF metadata, as a `Cartopy`_ projection object. See the `metpy documentation`_ for more information.
 
 .. _`MetPy`: https://unidata.github.io/MetPy/dev/index.html
-.. _`their documentation`: https://unidata.github.io/MetPy/dev/tutorials/xarray_tutorial.html#coordinates
+.. _`metpy documentation`: https://unidata.github.io/MetPy/dev/tutorials/xarray_tutorial.html#coordinates
 .. _`Cartopy`: https://scitools.org.uk/cartopy/docs/latest/crs/projections.html
 
 .. _CFTimeIndex:
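As a quick illustration of the ``.dt`` accessor named in the time-series hunk above, here is the equivalent pandas behaviour that xarray mirrors on datetime ``DataArray`` objects (a minimal sketch with made-up dates, not code from the docs):

```python
import pandas as pd

# A small datetime series; .dt exposes datetime components
# (year, dayofweek, ...), just like xarray's .dt on a DataArray.
times = pd.Series(pd.date_range("2000-01-01", periods=3, freq="D"))
years = times.dt.year        # 2000 for every entry
days = times.dt.dayofweek    # Monday == 0; 2000-01-01 was a Saturday
```

On an xarray ``DataArray`` with a datetime coordinate, the same components are reached the same way, e.g. ``da.time.dt.year``.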