DOC improve instance hardness threshold user guide #1029


Merged 3 commits on Jul 11, 2023
38 changes: 29 additions & 9 deletions doc/under_sampling.rst
@@ -365,12 +365,32 @@ and the output a 3 nearest neighbors classifier. The class can be used as::

.. _instance_hardness_threshold:

Additional undersampling techniques
-----------------------------------

Contributor Author: The user guide is divided into controlled undersampling (RUS and NearMiss) and cleaning (all the others). I felt we needed another headline for IHT.

Instance hardness threshold
^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Instance Hardness** is a measure of how difficult it is to classify an instance or
observation correctly: the harder the instance, the more likely a classifier is to
mislabel it.

Fundamentally, instances that are hard to classify correctly are those for which the
learning algorithm or classifier produces a low probability of predicting the correct
class label.

If we removed these hard instances from the dataset, the logic goes, we would help the
classifier better identify the different classes :cite:`smith2014instance`.
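
The probability-based notion of hardness above can be sketched with plain
scikit-learn. This is an illustration of the concept only, not
imbalanced-learn's actual implementation; the variable names and the cutoff of
10 samples are hypothetical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy dataset and out-of-fold class probabilities
X, y = make_classification(n_samples=200, random_state=0)
proba = cross_val_predict(
    LogisticRegression(), X, y, cv=5, method="predict_proba"
)

# Probability the model assigns to each sample's *true* class;
# hardness is its complement (low true-class probability = hard instance)
p_true = proba[np.arange(len(y)), y]
hardness = 1.0 - p_true

# Indices of the 10 hardest samples (candidates for removal)
hard_idx = np.argsort(hardness)[-10:]
```

Removing the samples indexed by ``hard_idx`` is the cleaning step that
:class:`InstanceHardnessThreshold` automates.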

:class:`InstanceHardnessThreshold` trains a classifier on the data and then removes the
samples with the lowest probabilities :cite:`smith2014instance`. In other words, it
retains the observations with the highest probabilities for their class.

In our implementation, :class:`InstanceHardnessThreshold` is (almost) a controlled
under-sampling method: it will retain a specific number of observations of the target
class(es), which is specified by the user (see caveat below).

The class can be used as::

>>> from sklearn.linear_model import LogisticRegression
>>> from imblearn.under_sampling import InstanceHardnessThreshold
@@ -381,18 +401,18 @@ removed :cite:`smith2014instance`. The class can be used as::
>>> print(sorted(Counter(y_resampled).items()))
[(0, 64), (1, 64), (2, 64)]
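
The controlled-undersampling behaviour described above — retaining a
user-specified number of the "easiest" samples of a class — can be mimicked
with plain scikit-learn. Everything below (the ``undersample_class`` helper,
the dataset, and the count of 60) is a hypothetical sketch, not
imbalanced-learn's implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Imbalanced toy dataset: class 0 is the majority class
X, y = make_classification(
    n_samples=300, weights=[0.8, 0.2], random_state=0
)

# Out-of-fold probability of each sample's true class
proba = cross_val_predict(
    LogisticRegression(), X, y, cv=5, method="predict_proba"
)
p_true = proba[np.arange(len(y)), y]

def undersample_class(y, p_true, target_class, n_keep):
    """Indices keeping only the `n_keep` easiest samples of `target_class`,
    and all samples of the other classes."""
    cls = np.flatnonzero(y == target_class)
    keep = cls[np.argsort(p_true[cls])[-n_keep:]]  # highest true-class proba
    others = np.flatnonzero(y != target_class)
    return np.sort(np.concatenate([keep, others]))

idx = undersample_class(y, p_true, target_class=0, n_keep=60)
X_res, y_res = X[idx], y[idx]
```

The minority class is left untouched while the majority class is cut down to
exactly the requested count, which is the "almost controlled" behaviour the
caveat below qualifies.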

:class:`InstanceHardnessThreshold` has 2 important parameters. The parameter
``estimator`` accepts any scikit-learn classifier with a ``predict_proba`` method.
This classifier is used to identify the hard instances. Training is performed with
cross-validation, which can be controlled through the parameter ``cv``.

.. note::

:class:`InstanceHardnessThreshold` could almost be considered as a
controlled under-sampling method. However, due to the probability outputs, it
is not always possible to get the specified number of samples.

The figure below shows examples of instance hardness undersampling on a toy dataset.

Contributor Author: What is ``FunctionSampler`` in the image below? Random undersampling? It would be good to make that clear so the user knows what they are looking at in the plot.

.. image:: ./auto_examples/under-sampling/images/sphx_glr_plot_comparison_under_sampling_006.png
:target: ./auto_examples/under-sampling/plot_comparison_under_sampling.html
@@ -51,7 +51,7 @@ class InstanceHardnessThreshold(BaseUnderSampler):
----------
sampling_strategy_ : dict
Dictionary containing the information to sample the dataset. The keys
correspond to the class labels from which to sample and the values
are the number of samples to sample.

estimator_ : estimator object