diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
index 949b6bd475319..62e582dffae47 100644
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -1197,8 +1197,6 @@ submitting a pull request.
For more, see the `pytest `_ documentation.
- .. versionadded:: 0.20.0
-
Furthermore, one can run
.. code-block:: python
diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index 36a7166f350e5..9b97aa25a9240 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -172,8 +172,6 @@ You are highly encouraged to install both libraries. See the section
Both are enabled by default; you can control this by setting the options:
-.. versionadded:: 0.20.0
-
.. code-block:: python
pd.set_option('compute.use_bottleneck', False)
@@ -891,8 +889,6 @@ functionality.
Aggregation API
~~~~~~~~~~~~~~~
-.. versionadded:: 0.20.0
-
The aggregation API allows one to express possibly multiple aggregation operations in a single concise way.
This API is similar across pandas objects, see :ref:`groupby API `, the
:ref:`window functions API `, and the :ref:`resample API `.
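As an illustrative aside (not part of this patch), the aggregation API described in the surrounding context can be sketched with a minimal example:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

# Several aggregations expressed in one concise call; the result is a
# DataFrame whose rows are labelled by the aggregating function names.
result = df.agg(['sum', 'mean'])
```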
@@ -1030,8 +1026,6 @@ to the built in :ref:`describe function `.
Transform API
~~~~~~~~~~~~~
-.. versionadded:: 0.20.0
-
The :meth:`~DataFrame.transform` method returns an object that is indexed the same (same size)
as the original. This API allows you to provide *multiple* operations at the same
time rather than one-by-one. Its API is quite similar to the ``.agg`` API.
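A small sketch of the transform API discussed above (illustrative only, not patch content):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': [-1, 2, -3]})

# Multiple transformations at once; the output keeps the original length,
# with one sub-column per supplied function.
out = df.transform([np.abs, lambda x: x + 1])
```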
diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index 4949dd580414f..c6eadd2adadce 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -206,8 +206,6 @@ highly performant. If you want to see only the used levels, you can use the
To reconstruct the ``MultiIndex`` with only the used levels, the
:meth:`~MultiIndex.remove_unused_levels` method may be used.
-.. versionadded:: 0.20.0
-
.. ipython:: python
new_mi = df[['foo', 'qux']].columns.remove_unused_levels()
@@ -928,8 +926,6 @@ If you need integer based selection, you should use ``iloc``:
IntervalIndex
~~~~~~~~~~~~~
-.. versionadded:: 0.20.0
-
:class:`IntervalIndex` together with its own dtype, :class:`~pandas.api.types.IntervalDtype`
as well as the :class:`Interval` scalar type, allow first-class support in pandas
for interval notation.
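For context, a minimal sketch of ``IntervalIndex`` with interval-containment lookup (illustrative, not part of the patch):

```python
import pandas as pd

# Three consecutive unit intervals: (0, 1], (1, 2], (2, 3]
idx = pd.interval_range(start=0, end=3)
s = pd.Series([10, 20, 30], index=idx)

# Label-based lookup selects the interval containing the scalar
val = s[1.5]
```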
diff --git a/doc/source/user_guide/categorical.rst b/doc/source/user_guide/categorical.rst
index 8ca96ba0daa5e..6651f656ae45d 100644
--- a/doc/source/user_guide/categorical.rst
+++ b/doc/source/user_guide/categorical.rst
@@ -874,8 +874,6 @@ The below raises ``TypeError`` because the categories are ordered and not identi
Out[3]:
TypeError: to union ordered Categoricals, all categories must be the same
-.. versionadded:: 0.20.0
-
Ordered categoricals with different categories or orderings can be combined by
using the ``ignore_order=True`` argument.
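A minimal sketch of combining ordered categoricals while ignoring their ordered attribute (illustrative, not part of the patch; the keyword is ``ignore_order``, matching the ``union_categoricals`` signature shown later in this diff):

```python
import pandas as pd
from pandas.api.types import union_categoricals

a = pd.Categorical(['a', 'b'], ordered=True)
b = pd.Categorical(['a', 'b', 'c'], ordered=True)

# Combining ordered categoricals with different categories raises a
# TypeError unless ordering is ignored; the result is unordered.
combined = union_categoricals([a, b], ignore_order=True)
```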
diff --git a/doc/source/user_guide/computation.rst b/doc/source/user_guide/computation.rst
index 4beac5e035efc..bc00cd7f13e13 100644
--- a/doc/source/user_guide/computation.rst
+++ b/doc/source/user_guide/computation.rst
@@ -471,8 +471,6 @@ default of the index) in a DataFrame.
Rolling window endpoints
~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.20.0
-
The inclusion of the interval endpoints in rolling window calculations can be specified with the ``closed``
parameter:
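As a quick illustration of the ``closed`` parameter (not part of the patch):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5],
              index=pd.date_range('2020-01-01', periods=5))

# closed='left' makes each 2-day window include its left endpoint and
# exclude its right endpoint (the current row), so the first window is empty.
left_closed = s.rolling('2D', closed='left').sum()
```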
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 141d1708d882d..8cd229070e365 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -311,8 +311,6 @@ Grouping with multiple levels is supported.
s
s.groupby(level=['first', 'second']).sum()
-.. versionadded:: 0.20
-
Index level names may be supplied as keys.
.. ipython:: python
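A minimal sketch of supplying an index level name as a grouping key (illustrative, not part of the patch):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([('x', 1), ('x', 2), ('y', 1)],
                                names=['first', 'second'])
s = pd.Series([10, 20, 30], index=idx)

# An index level name can be supplied directly as the grouping key
grouped = s.groupby('first').sum()
```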
@@ -353,8 +351,6 @@ Index levels may also be specified by name.
df.groupby([pd.Grouper(level='second'), 'A']).sum()
-.. versionadded:: 0.20
-
Index level names may be specified as keys directly to ``groupby``.
.. ipython:: python
@@ -1274,8 +1270,6 @@ To see the order in which each row appears within its group, use the
Enumerate groups
~~~~~~~~~~~~~~~~
-.. versionadded:: 0.20.2
-
To see the ordering of the groups (as opposed to the order of rows
within a group given by ``cumcount``) you can use
:meth:`~pandas.core.groupby.DataFrameGroupBy.ngroup`.
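A small sketch of ``ngroup`` (illustrative, not part of the patch):

```python
import pandas as pd

df = pd.DataFrame({'key': ['b', 'a', 'b'], 'val': [1, 2, 3]})

# ngroup labels each row with its group's enumeration order
# (groups are sorted by key by default, so 'a' -> 0, 'b' -> 1).
group_ids = df.groupby('key').ngroup()
```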
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 6b23c814843e1..173bcf7537154 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -163,9 +163,6 @@ dtype : Type name or dict of column -> type, default ``None``
Use `str` or `object` together with suitable ``na_values`` settings
to preserve and not interpret dtype.
-
- .. versionadded:: 0.20.0 support for the Python parser.
-
engine : {``'c'``, ``'python'``}
Parser engine to use. The C engine is faster while the Python engine is
currently more feature-complete.
@@ -417,10 +414,6 @@ However, if you wanted for all the data to be coerced, no matter the type, then
using the ``converters`` argument of :func:`~pandas.read_csv` would certainly be
worth trying.
- .. versionadded:: 0.20.0 support for the Python parser.
-
- The ``dtype`` option is supported by the 'python' engine.
-
.. note::
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
@@ -616,8 +609,6 @@ Filtering columns (``usecols``)
The ``usecols`` argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
-.. versionadded:: 0.20.0 support for callable `usecols` arguments
-
.. ipython:: python
data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
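A runnable sketch of a callable ``usecols`` (illustrative, not part of the patch):

```python
import pandas as pd
from io import StringIO

data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'

# A callable usecols keeps exactly the columns for which it returns True
df = pd.read_csv(StringIO(data),
                 usecols=lambda col: col.upper() in ['A', 'C'])
```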
@@ -1447,8 +1438,6 @@ is whitespace).
df = pd.read_fwf('bar.csv', header=None, index_col=0)
df
-.. versionadded:: 0.20.0
-
``read_fwf`` supports the ``dtype`` parameter for specifying the types of
parsed columns to be different from the inferred type.
@@ -2221,8 +2210,6 @@ For line-delimited json files, pandas can also return an iterator which reads in
Table schema
''''''''''''
-.. versionadded:: 0.20.0
-
`Table Schema`_ is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient ``table`` to build
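A small round-trip sketch of the ``table`` orient (illustrative, not part of the patch):

```python
import pandas as pd
from io import StringIO

df = pd.DataFrame({'a': [1, 2]}, index=pd.Index(['x', 'y'], name='idx'))

# orient='table' embeds a Table Schema description alongside the data,
# which allows a faithful round trip through read_json.
js = df.to_json(orient='table')
round_tripped = pd.read_json(StringIO(js), orient='table')
```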
@@ -3071,8 +3058,6 @@ missing data to recover integer dtype:
Dtype specifications
++++++++++++++++++++
-.. versionadded:: 0.20
-
As an alternative to converters, the type for an entire column can
be specified using the `dtype` keyword, which takes a dictionary
mapping column names to types. To interpret data with
@@ -3345,8 +3330,6 @@ any pickled pandas object (or any other pickled object) from file:
Compressed pickle files
'''''''''''''''''''''''
-.. versionadded:: 0.20.0
-
:func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
The ``zip`` file format only supports reading and must contain only one data file
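A minimal sketch of compressed pickle round-tripping, with compression inferred from the extension (illustrative, not part of the patch):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})

# Compression is inferred from the file extension; '.gz' selects gzip
path = os.path.join(tempfile.mkdtemp(), 'frame.pkl.gz')
df.to_pickle(path)
restored = pd.read_pickle(path)
```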
@@ -4323,8 +4306,6 @@ control compression: ``complevel`` and ``complib``.
- `bzip2 `_: Good compression rates.
- `blosc `_: Fast compression and decompression.
- .. versionadded:: 0.20.2
-
Support for alternative blosc compressors:
- `blosc:blosclz `_ This is the
@@ -4651,8 +4632,6 @@ Performance
Feather
-------
-.. versionadded:: 0.20.0
-
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst
index 4c0d3b75a4f79..7bedc9515abb2 100644
--- a/doc/source/user_guide/merging.rst
+++ b/doc/source/user_guide/merging.rst
@@ -843,8 +843,6 @@ resulting dtype will be upcast.
pd.merge(left, right, how='outer', on='key')
pd.merge(left, right, how='outer', on='key').dtypes
-.. versionadded:: 0.20.0
-
Merging will preserve ``category`` dtypes of the merge inputs. See also the section on :ref:`categoricals `.
The left frame.
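A minimal sketch of category-dtype preservation in a merge (illustrative, not part of the patch):

```python
import pandas as pd

left = pd.DataFrame({'key': pd.Categorical(['a', 'b']), 'lv': [1, 2]})
right = pd.DataFrame({'key': pd.Categorical(['a', 'b']), 'rv': [3, 4]})

# With identical categories on both sides, the merged key keeps its
# category dtype instead of being upcast to object.
merged = pd.merge(left, right, how='outer', on='key')
```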
diff --git a/doc/source/user_guide/options.rst b/doc/source/user_guide/options.rst
index a6491c6645613..5817efb31814e 100644
--- a/doc/source/user_guide/options.rst
+++ b/doc/source/user_guide/options.rst
@@ -561,8 +561,6 @@ However, setting this option incorrectly for your terminal will cause these char
Table schema display
--------------------
-.. versionadded:: 0.20.0
-
``DataFrame`` and ``Series`` can publish a Table Schema representation.
This is disabled by default, and can be enabled globally with the
``display.html.table_schema`` option:
diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst
index b2ee252495f23..8583a9312b690 100644
--- a/doc/source/user_guide/reshaping.rst
+++ b/doc/source/user_guide/reshaping.rst
@@ -539,8 +539,6 @@ Alternatively we can specify custom bin-edges:
c = pd.cut(ages, bins=[0, 18, 35, 70])
c
-.. versionadded:: 0.20.0
-
If the ``bins`` keyword is an ``IntervalIndex``, then these will be
used to bin the passed data.::
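A small sketch of passing an ``IntervalIndex`` as the bins (illustrative, not part of the patch):

```python
import pandas as pd

bins = pd.IntervalIndex.from_breaks([0, 18, 35, 70])

# The passed IntervalIndex is used directly as the bins
cats = pd.cut([10, 40, 20], bins=bins)
```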
diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
index 789ff2a65355b..d521c745ccfe5 100644
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -228,8 +228,6 @@ and ``repl`` must be strings:
dollars.str.replace(r'-\$', '-')
dollars.str.replace('-$', '-', regex=False)
-.. versionadded:: 0.20.0
-
The ``replace`` method can also take a callable as replacement. It is called
for every match of ``pat``, using :func:`re.sub`. The callable should expect one
positional argument (a regex match object) and return a string.
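A runnable sketch of a callable replacement (illustrative, not part of the patch):

```python
import pandas as pd

s = pd.Series(['foo 123', 'bar 456'])

# The callable receives each match object, as with re.sub; here it
# reverses the matched digits.
out = s.str.replace(r'\d+', lambda m: m.group(0)[::-1], regex=True)
```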
@@ -254,8 +252,6 @@ positional argument (a regex object) and return a string.
pd.Series(['Foo Bar Baz', np.nan],
dtype="string").str.replace(pat, repl)
-.. versionadded:: 0.20.0
-
The ``replace`` method also accepts a compiled regular expression object
from :func:`re.compile` as a pattern. All flags should be included in the
compiled regular expression object.
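And a sketch of passing a compiled pattern, with the flags carried by the pattern itself (illustrative, not part of the patch):

```python
import re
import pandas as pd

# Flags must live in the compiled pattern; they cannot be passed separately
pat = re.compile(r'ba[rz]', flags=re.IGNORECASE)
s = pd.Series(['BAR', 'baz', 'foo'])
out = s.str.replace(pat, 'qux', regex=True)
```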
diff --git a/doc/source/user_guide/timedeltas.rst b/doc/source/user_guide/timedeltas.rst
index 3e46140d79b8e..3439a0a4c13c7 100644
--- a/doc/source/user_guide/timedeltas.rst
+++ b/doc/source/user_guide/timedeltas.rst
@@ -327,8 +327,6 @@ similarly to the ``Series``. These are the *displayed* values of the ``Timedelta
You can convert a ``Timedelta`` to an `ISO 8601 Duration`_ string with the
``.isoformat`` method
-.. versionadded:: 0.20.0
-
.. ipython:: python
pd.Timedelta(days=6, minutes=50, seconds=3,
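A minimal sketch of ``Timedelta.isoformat`` (illustrative, not part of the patch):

```python
import pandas as pd

td = pd.Timedelta(days=6, minutes=50, seconds=3)

# ISO 8601 duration string of the form P[n]DT[n]H[n]M[n]S
iso = td.isoformat()
```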
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index 0894edd69c2ae..17b02374050d2 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -376,8 +376,6 @@ We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by
Using the ``origin`` Parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.20.0
-
Using the ``origin`` parameter, one can specify an alternative starting point for creation
of a ``DatetimeIndex``. For example, to use 1960-01-01 as the starting date:
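A small sketch of the ``origin`` parameter (illustrative, not part of the patch):

```python
import pandas as pd

# Offsets are counted in days from the supplied origin rather than
# from the Unix epoch.
dates = pd.to_datetime([1, 2, 3], unit='D', origin='1960-01-01')
```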
diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index fa16b2f216610..609969b666726 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -1247,8 +1247,6 @@ in ``pandas.plotting.plot_params`` can be used in a `with statement`:
Automatic date tick adjustment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.20.0
-
``TimedeltaIndex`` now uses the native matplotlib
tick locator methods, so it is useful to call the automatic
date tick adjustment from matplotlib for figures whose ticklabels overlap.
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 3c7ec70fb1f88..6a3f20928f64b 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -191,8 +191,6 @@ cdef class Interval(IntervalMixin):
"""
Immutable object implementing an Interval, a bounded slice-like interval.
- .. versionadded:: 0.20.0
-
Parameters
----------
left : orderable scalar
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 3d267b0114695..8435f1cd7d732 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1157,8 +1157,6 @@ cdef class _Timedelta(timedelta):
``P[n]Y[n]M[n]DT[n]H[n]M[n]S``, where the ``[n]`` s are replaced by the
values. See https://en.wikipedia.org/wiki/ISO_8601#Durations.
- .. versionadded:: 0.20.0
-
Returns
-------
formatted : str
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index bd1ed0bb7d318..f2176f573207c 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -199,8 +199,6 @@ def union_categoricals(to_union, sort_categories=False, ignore_order=False):
If true, the ordered attribute of the Categoricals will be ignored.
Results in an unordered categorical.
- .. versionadded:: 0.20.0
-
Returns
-------
result : Categorical
diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index 461b5cc6232cd..e69e703f3a96c 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -162,8 +162,6 @@ def is_file_like(obj):
Note: file-like objects must be iterable, but
iterable objects need not be file-like.
- .. versionadded:: 0.20.0
-
Parameters
----------
obj : The object to check
@@ -281,8 +279,6 @@ def is_nested_list_like(obj):
Check if the object is list-like, and that all of its elements
are also list-like.
- .. versionadded:: 0.20.0
-
Parameters
----------
obj : The object to check
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c90bf4ba7151f..92f2528788565 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2082,8 +2082,6 @@ def to_feather(self, fname):
"""
Write out the binary feather-format for DataFrames.
- .. versionadded:: 0.20.0
-
Parameters
----------
fname : str
@@ -7870,8 +7868,6 @@ def nunique(self, axis=0, dropna=True):
Return Series with number of distinct observations. Can ignore NaN
values.
- .. versionadded:: 0.20.0
-
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
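A minimal sketch of ``DataFrame.nunique`` (illustrative, not part of the patch):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2], 'B': [1, 2, 3]})

# Distinct observations per column (NaN values ignored by default)
counts = df.nunique()
```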
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a300748ee5bc8..61af22c6e92b3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -897,8 +897,6 @@ def squeeze(self, axis=None):
A specific axis to squeeze. By default, all length-1 axes are
squeezed.
- .. versionadded:: 0.20.0
-
Returns
-------
DataFrame, Series, or scalar
@@ -2163,8 +2161,6 @@ def _repr_data_resource_(self):
Specifies the one-based bottommost row and rightmost column that
is to be frozen.
- .. versionadded:: 0.20.0.
-
See Also
--------
to_csv : Write DataFrame to a comma-separated values (csv) file.
@@ -2756,8 +2752,6 @@ def to_pickle(self, path, compression="infer", protocol=pickle.HIGHEST_PROTOCOL)
default 'infer'
A string representing the compression to use in the output file. By
default, infers from the file extension in specified path.
-
- .. versionadded:: 0.20.0
protocol : int
Int which indicates which protocol should be used by the pickler,
default HIGHEST_PROTOCOL (see [1]_ paragraph 12.1.2). The possible
@@ -3032,22 +3026,15 @@ def to_latex(
multicolumn : bool, default True
Use \multicolumn to enhance MultiIndex columns.
The default will be read from the config module.
-
- .. versionadded:: 0.20.0
multicolumn_format : str, default 'l'
The alignment for multicolumns, similar to `column_format`
The default will be read from the config module.
-
- .. versionadded:: 0.20.0
multirow : bool, default False
Use \multirow to enhance MultiIndex rows. Requires adding a
\usepackage{multirow} to your LaTeX preamble. Will print
centered labels (instead of top-aligned) across the contained
rows, separating groups via clines. The default will be read
from the pandas config module.
-
- .. versionadded:: 0.20.0
-
caption : str, optional
The LaTeX caption to be placed inside ``\caption{}`` in the output.
@@ -5133,8 +5120,6 @@ def pipe(self, func, *args, **kwargs):
Call ``func`` on self producing a %(klass)s with transformed values
and that has the same axis length as self.
- .. versionadded:: 0.20.0
-
Parameters
----------
func : function, str, list or dict
@@ -5805,8 +5790,6 @@ def astype(self, dtype, copy=True, errors="raise"):
- ``raise`` : allow exceptions to be raised
- ``ignore`` : suppress exceptions. On error return original object.
- .. versionadded:: 0.20.0
-
Returns
-------
casted : same type as caller
@@ -7946,8 +7929,6 @@ def asfreq(self, freq, method=None, how=None, normalize=False, fill_value=None):
Value to use for missing values, applied during upsampling (note
this does not fill NaNs that already were present).
- .. versionadded:: 0.20.0
-
Returns
-------
converted : same type as caller
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index a78857423e7e0..5c7c56e2a31df 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1692,8 +1692,6 @@ def nunique(self, dropna=True):
Return DataFrame with number of distinct observations per group for
each column.
- .. versionadded:: 0.20.0
-
Parameters
----------
dropna : bool, default True
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index f622480cfe4b7..f88f2e21bd595 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1942,8 +1942,6 @@ def ngroup(self, ascending=True):
would be seen when iterating over the groupby object, not the
order they are first observed.
- .. versionadded:: 0.20.2
-
Parameters
----------
ascending : bool, default True
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9d6487f7a8ae4..4c15e4b26ed46 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1897,8 +1897,6 @@ def isna(self):
empty strings `''` or :attr:`numpy.inf` are not considered NA values
(unless you set ``pandas.options.mode.use_inf_as_na = True``).
- .. versionadded:: 0.20.0
-
Returns
-------
numpy.ndarray
@@ -1956,8 +1954,6 @@ def notna(self):
NA values, such as None or :attr:`numpy.NaN`, get mapped to ``False``
values.
- .. versionadded:: 0.20.0
-
Returns
-------
numpy.ndarray
@@ -3420,8 +3416,6 @@ def _reindex_non_unique(self, target):
Sort the join keys lexicographically in the result Index. If False,
the order of the join keys depends on the join type (how keyword)
- .. versionadded:: 0.20.0
-
Returns
-------
join_index, (left_indexer, right_indexer)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index fda5c78a61e53..74dbcd4067ec0 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1823,8 +1823,6 @@ def _lexsort_depth(self) -> int:
def _sort_levels_monotonic(self):
"""
- .. versionadded:: 0.20.0
-
This is an *internal* function.
Create a new MultiIndex from the current to monotonically sorted
@@ -1901,8 +1899,6 @@ def remove_unused_levels(self):
appearance, meaning the same .values and ordering. It will also
be .equals() to the original.
- .. versionadded:: 0.20.0
-
Returns
-------
MultiIndex
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index d4ae3767f6157..13cb0f9aed303 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -822,8 +822,6 @@ def asfreq(self, fill_value=None):
Value to use for missing values, applied during upsampling (note
this does not fill NaNs that already were present).
- .. versionadded:: 0.20.0
-
Returns
-------
DataFrame or Series
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 6f2e264f1a4d0..c85050bc4232b 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -220,9 +220,6 @@ def wide_to_long(df, stubnames, i, j, sep="", suffix=r"\d+"):
in the wide format, to be stripped from the names in the long format.
For example, if your column names are A-suffix1, A-suffix2, you
can strip the hyphen by specifying `sep='-'`
-
- .. versionadded:: 0.20.0
-
suffix : str, default '\\d+'
A regular expression capturing the wanted suffixes. '\\d+' captures
numeric suffixes. Suffixes with no numbers could be specified with the
@@ -231,8 +228,6 @@ def wide_to_long(df, stubnames, i, j, sep="", suffix=r"\d+"):
A-one, B-two,.., and you have an unrelated column A-rating, you can
ignore the last one by specifying `suffix='(one|two)'`
- .. versionadded:: 0.20.0
-
.. versionchanged:: 0.23.0
When all suffixes are numeric, they are cast to int64/float64.
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 7bfc8153da568..7e593ddb91d3a 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -364,8 +364,6 @@ def merge_asof(
direction : 'backward' (default), 'forward', or 'nearest'
Whether to search for prior, subsequent, or closest matches.
- .. versionadded:: 0.20.0
-
Returns
-------
merged : DataFrame
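A small sketch of the ``direction`` parameter of ``merge_asof`` (illustrative, not part of the patch):

```python
import pandas as pd

left = pd.DataFrame({'t': [1, 5, 10], 'lv': ['a', 'b', 'c']})
right = pd.DataFrame({'t': [2, 6], 'rv': [1, 2]})

# direction='forward' matches each left row to the first right row
# whose key is greater than or equal to it; unmatched rows get NaN.
out = pd.merge_asof(left, right, on='t', direction='forward')
```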
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index 6942a5797a7f0..2cc9f8927effb 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -300,8 +300,6 @@ def qcut(x, q, labels=None, retbins=False, precision=3, duplicates="raise"):
duplicates : {default 'raise', 'drop'}, optional
If bin edges are not unique, raise ValueError or drop non-uniques.
- .. versionadded:: 0.20.0
-
Returns
-------
out : Categorical or Series or array of integers if labels is False
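A minimal sketch of ``duplicates='drop'`` in ``qcut`` (illustrative, not part of the patch):

```python
import pandas as pd

s = pd.Series([0, 0, 0, 1, 2, 3])

# The repeated zeros produce duplicate quantile edges; duplicates='drop'
# removes them instead of raising ValueError, leaving fewer bins.
binned = pd.qcut(s, 4, duplicates='drop')
```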
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index e50da168af4d2..fcbb000acc256 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -491,18 +491,10 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
----------
pat : str or compiled regex
String can be a character sequence or regular expression.
-
- .. versionadded:: 0.20.0
- `pat` also accepts a compiled regex.
-
repl : str or callable
Replacement string or a callable. The callable is passed the regex
match object and must return a replacement string to be used.
See :func:`re.sub`.
-
- .. versionadded:: 0.20.0
- `repl` also accepts a callable.
-
n : int, default -1 (all)
Number of replacements to make from start.
case : bool, default None
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 7b136fa29ecea..17022a4455e82 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -647,8 +647,6 @@ def to_datetime(
at noon on January 1, 4713 BC.
- If Timestamp convertible, origin is set to Timestamp identified by
origin.
-
- .. versionadded:: 0.20.0
cache : bool, default True
If True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index ca5279e93f678..e3617d53b000a 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -76,8 +76,6 @@ def hash_pandas_object(
Whether to first categorize object arrays before hashing. This is more
efficient when the array contains duplicate values.
- .. versionadded:: 0.20.0
-
Returns
-------
Series of uint64, same length as the object
@@ -146,8 +144,6 @@ def hash_tuples(vals, encoding="utf8", hash_key=None):
"""
Hash a MultiIndex / list-of-tuples efficiently
- .. versionadded:: 0.20.0
-
Parameters
----------
vals : MultiIndex, list-of-tuples, or single tuple
@@ -262,8 +258,6 @@ def hash_array(vals, encoding: str = "utf8", hash_key=None, categorize: bool = T
Whether to first categorize object arrays before hashing. This is more
efficient when the array contains duplicate values.
- .. versionadded:: 0.20.0
-
Returns
-------
1d uint64 numpy array of hash values, same length as the vals
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 29ef2e917ae57..b33995275c02a 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -649,8 +649,6 @@ class Window(_Window):
For fixed windows, defaults to 'both'. Remaining cases not implemented
for fixed windows.
- .. versionadded:: 0.20.0
-
Returns
-------
a Window or Rolling sub-classed for the particular operation
diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index a85fc8bfb1414..883af5c2e62f0 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -25,8 +25,6 @@ class UnsortedIndexError(KeyError):
"""
Error raised when attempting to get a slice of a MultiIndex,
and the index has not been lexsorted. Subclass of `KeyError`.
-
- .. versionadded:: 0.20.0
"""
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 039a0560af627..6eb1b9e950dfd 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -107,9 +107,6 @@
Use `object` to preserve data as stored in Excel and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
-
- .. versionadded:: 0.20.0
-
engine : str, default None
If io is not a buffer or path, this must be set to identify io.
Acceptable values are None, "xlrd", "openpyxl" or "odf".
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 25a6db675265d..dd6519275ad15 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -71,8 +71,6 @@ def read_feather(path, columns=None, use_threads=True):
"""
Load a feather-format object from the file path.
- .. versionadded:: 0.20.0
-
Parameters
----------
path : str, path object or file-like object
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 6b98eaca9dacc..bdd1e4bc6ff97 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -490,8 +490,6 @@ def render(self, **kwargs):
This is useful when you need to provide
additional variables for a custom template.
- .. versionadded:: 0.20
-
Returns
-------
rendered : str
@@ -1199,9 +1197,6 @@ def bar(
- 'mid' : the center of the cell is at (max-min)/2, or
if values are all negative (positive) the zero is aligned
at the right (left) of the cell.
-
- .. versionadded:: 0.20.0
-
vmin : float, optional
Minimum bar value, defining the left hand limit
of the bar drawing range, lower values are clipped to `vmin`.
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 24a255c78f3c0..cf8b9d901eda2 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -46,9 +46,6 @@ def nested_to_record(
sep : str, default '.'
Nested records will generate names separated by sep,
e.g., for sep='.', { 'foo' : { 'bar' : 0 } } -> foo.bar
-
- .. versionadded:: 0.20.0
-
level: int, optional, default: 0
The number of levels in the json string.
@@ -146,15 +143,9 @@ def json_normalize(
always present.
* 'raise' : will raise KeyError if keys listed in meta are not
always present.
-
- .. versionadded:: 0.20.0
-
sep : str, default '.'
Nested records will generate names separated by sep.
e.g., for sep='.', {'foo': {'bar': 0}} -> foo.bar.
-
- .. versionadded:: 0.20.0
-
max_level : int, default None
Max number of levels(depth of dict) to normalize.
if None, normalizes all levels.
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 4b9a52a1fb8f3..621e8e09230b7 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -23,8 +23,6 @@ def to_pickle(obj, path, compression="infer", protocol=pickle.HIGHEST_PROTOCOL):
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
A string representing the compression to use in the output file. By
default, infers from the file extension in specified path.
-
- .. versionadded:: 0.20.0
protocol : int
Int which indicates which protocol should be used by the pickler,
default HIGHEST_PROTOCOL (see [1], paragraph 12.1.2). The possible
@@ -99,8 +97,6 @@ def read_pickle(path, compression="infer"):
or '.zip' respectively, and no decompression otherwise.
Set to None for no decompression.
- .. versionadded:: 0.20.0
-
Returns
-------
unpickled : same type as object stored in file
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index 426ca9632af29..815c69bc27d7a 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -353,8 +353,6 @@ def parallel_coordinates(
Options to be passed to axvline method for vertical lines.
sort_labels : bool, default False
Sort class_column labels, useful when assigning colors.
-
- .. versionadded:: 0.20.0
**kwargs
Options to pass to matplotlib plotting method.