fix docstrings #177

Merged · 8 commits · Dec 8, 2020
14 changes: 7 additions & 7 deletions README.rst
@@ -9,19 +9,19 @@ explanations of your supervised machine learning models.
FACET is composed of the following key components:

+----------------+---------------------------------------------------------------------+
-| |inspect| | **Model Inspection** |
+| |spacer| | **Model Inspection** |
| | |
-| | FACET introduces a new algorithm to quantify dependencies and |
+| |inspect| | FACET introduces a new algorithm to quantify dependencies and |
| | interactions between features in ML models. |
| | This new tool for human-explainable AI adds a new, global |
| | perspective to the observation-level explanations provided by the |
| | popular `SHAP <https://shap.readthedocs.io/en/latest/>`__ approach. |
| | To learn more about FACET’s model inspection capabilities, see the |
| | getting started example below. |
+----------------+---------------------------------------------------------------------+
-| |sim| | **Model Simulation** |
+| | | **Model Simulation** |
| | |
-| | FACET’s model simulation algorithms use ML models for |
+| |sim| | FACET’s model simulation algorithms use ML models for |
| | *virtual experiments* to help identify scenarios that optimise |
| | predicted outcomes. |
| | To quantify the uncertainty in simulations, FACET utilises a range |
Expand All @@ -30,9 +30,9 @@ FACET is composed of the following key components:
| | For an example of FACET’s bootstrap simulations, see the getting |
| | started example below. |
+----------------+---------------------------------------------------------------------+
-| |pipe| | **Enhanced Machine Learning Workflow** |
-| |spacer| | |
-| | FACET offers an efficient and transparent machine learning |
+| | | **Enhanced Machine Learning Workflow** |
+| | |
+| |pipe| | FACET offers an efficient and transparent machine learning |
| | workflow, enhancing |
| | `scikit-learn <https://scikit-learn.org/stable/index.html>`__'s |
| | tried and tested pipelining paradigm with new capabilities for model|
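In each of the three README row blocks above, the change follows the same pattern: the icon substitution (``|inspect|``, ``|sim|``, ``|pipe|``) moves from the bold heading row down to the first text row, presumably so the icon lines up with the descriptive text rather than the heading. A schematic sketch of one corrected block — column widths are illustrative, not the file's actual widths, and the ``|spacer|``/``|inspect|`` substitutions are assumed to be defined elsewhere in the README:

```rst
+------------+------------------------------------+
| |spacer|   | **Model Inspection**               |
|            |                                    |
| |inspect|  | FACET introduces a new algorithm   |
|            | to quantify feature dependencies.  |
+------------+------------------------------------+
```

Note that reST grid tables require the ``|`` and ``+`` characters of every row to align exactly with the border lines, which is why each row block must keep a cell for the icon column even where it is empty.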
30 changes: 15 additions & 15 deletions src/facet/crossfit/_crossfit.py
@@ -141,9 +141,9 @@ def __init__(
"""
:param pipeline: learner pipeline to be fitted
:param cv: the cross-validator generating the train splits
-:param shuffle_features: if ``True``, shuffle column order of features for \
+:param shuffle_features: if ``True``, shuffle column order of features for
every crossfit (default: ``False``)
-:param random_state: optional random seed or random state for shuffling the \
+:param random_state: optional random seed or random state for shuffling the
feature column order
"""
super().__init__(
@@ -199,10 +199,10 @@ def fit(self: T, sample: Sample, **fit_params) -> T:
Fit the underlying pipeline to the full sample, and fit clones of the pipeline
to each of the train splits generated by the cross-validator.

-:param sample: the sample to fit the estimators to; if the sample specifies \
-weights these are passed on to the learner as keyword argument \
+:param sample: the sample to fit the estimators to; if the sample specifies
+weights these are passed on to the learner as keyword argument
``sample_weight``
-:param fit_params: optional fit parameters, to be passed on to the fit method \
+:param fit_params: optional fit parameters, to be passed on to the fit method
of the base estimator
:return: ``self``
"""
Expand All @@ -226,11 +226,11 @@ def score(

The crossfit must already be fitted, see :meth:`.fit`

-:param scoring: scoring to use to score the models (see \
-:meth:`sklearn.metrics.check_scoring` for details); if the crossfit \
-was fitted using sample weights, these are passed on to the scoring \
+:param scoring: scoring to use to score the models (see
+:meth:`sklearn.metrics.check_scoring` for details); if the crossfit
+was fitted using sample weights, these are passed on to the scoring
function as keyword argument ``sample_weight``
-:param train_scores: if ``True``, calculate train scores instead of test \
+:param train_scores: if ``True``, calculate train scores instead of test
scores (default: ``False``)
:return: the resulting scores as a 1d numpy array
"""
Expand All @@ -249,14 +249,14 @@ def fit_score(

See :meth:`.fit` and :meth:`.score` for details.

-:param sample: the sample to fit the estimators to; if the sample specifies \
-weights these are passed on to the learner and scoring function as keyword \
+:param sample: the sample to fit the estimators to; if the sample specifies
+weights these are passed on to the learner and scoring function as keyword
argument ``sample_weight``
-:param fit_params: optional fit parameters, to be passed on to the fit method \
+:param fit_params: optional fit parameters, to be passed on to the fit method
of the learner
-:param scoring: scoring function to use to score the models \
+:param scoring: scoring function to use to score the models
(see :meth:`~sklearn.metrics.check_scoring` for details)
-:param train_scores: if ``True``, calculate train scores instead of test \
+:param train_scores: if ``True``, calculate train scores instead of test
scores (default: ``False``)

:return: the resulting scores
Expand All @@ -272,7 +272,7 @@ def fit_score(
def resize(self: T, n_splits: int) -> T:
"""
Reduce the size of this crossfit by removing a subset of the fits.
-:param n_splits: the number of fits to keep. Must be lower, or equal to, the \
+:param n_splits: the number of fits to keep. Must be lower, or equal to, the
current number of fits
:return:
"""
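Every Python-side change in this file follows one pattern: dropping the trailing backslash from continuation lines in ``:param:`` descriptions. The backslash is not harmless style — inside a (non-raw) triple-quoted string, backslash-newline is a line-continuation escape, so it silently joins two source lines into one docstring line. Sphinx does not need it, because an indented continuation line already belongs to the same field. A minimal sketch (the function names ``before_fix``/``after_fix`` are hypothetical, shortened from the ``fit`` docstring above):

```python
import inspect


def before_fix(sample):
    """
    :param sample: the sample to fit the estimators to; if the sample specifies \
        weights these are passed on to the learner as keyword argument
        ``sample_weight``
    """


def after_fix(sample):
    """
    :param sample: the sample to fit the estimators to; if the sample specifies
        weights these are passed on to the learner as keyword argument
        ``sample_weight``
    """


# The backslash-newline escape in before_fix collapses its first two source
# lines into a single docstring line, so the cleaned docstrings differ:
print(len(inspect.getdoc(before_fix).splitlines()))  # 2
print(len(inspect.getdoc(after_fix).splitlines()))   # 3
```

Both forms render the same in Sphinx, but the fixed version keeps the docstring value identical to what appears in the source, which is what tools like ``help()`` display.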
10 changes: 5 additions & 5 deletions src/facet/data/_sample.py
@@ -70,14 +70,14 @@ def __init__(
weight_name: Optional[str] = None,
) -> None:
"""
-:param observations: a table of observational data; \
+:param observations: a table of observational data;
each row represents one observation
-:param target_name: the name of the column representing the target \
+:param target_name: the name of the column representing the target
variable; or an iterable of names representing multiple targets
-:param feature_names: optional iterable of strings naming the columns that \
+:param feature_names: optional iterable of strings naming the columns that
represent features; if omitted, all non-target and non-weight columns are
considered features
-:param weight_name: optional name of a column representing the weight of each \
+:param weight_name: optional name of a column representing the weight of each
observation
"""

@@ -256,7 +256,7 @@ def subsample(

:param loc: indices of observations to select
:param iloc: integer indices of observations to select
-:return: copy of this sample, comprising only the observations in the given \
+:return: copy of this sample, comprising only the observations in the given
rows
"""
subsample = copy(self)
18 changes: 9 additions & 9 deletions src/facet/inspection/_explainer.py
@@ -110,10 +110,10 @@
) -> None:
"""
:param model_output: (optional) override the default model output parameter
-:param feature_perturbation: (optional) override the default \
+:param feature_perturbation: (optional) override the default
feature_perturbation parameter
-:param use_background_dataset: if ``False``, don't pass the background \
-dataset on to the tree explainer even if a background dataset is passed \
+:param use_background_dataset: if ``False``, don't pass the background
+dataset on to the tree explainer even if a background dataset is passed
to :meth:`.make_explainer`
"""
super().__init__()
@@ -174,7 +174,7 @@ def make_explainer(
@inheritdoc(match="[see superclass]")
class KernelExplainerFactory(ExplainerFactory):
"""
-A factory constructing class:`~shap.KernelExplainer` objects.
+A factory constructing :class:`~shap.KernelExplainer` objects.
"""

def __init__(
Expand All @@ -185,12 +185,12 @@ def __init__(
) -> None:
"""
:param link: (optional) override the default link parameter
-:param l1_reg: (optional) override the default l1_reg parameter of method \
-:meth:`~shap.KernelExplainer.shap_values`; pass ``None`` to use the \
+:param l1_reg: (optional) override the default l1_reg parameter of method
+:meth:`~shap.KernelExplainer.shap_values`; pass ``None`` to use the
default value used by :meth:`~shap.KernelExplainer.shap_values`
-:param data_size_limit: (optional) maximum number of observations to use as \
-the background data set; larger data sets will be down-sampled using \
-method :meth:`~shap.kmeans`. \
+:param data_size_limit: (optional) maximum number of observations to use as
+the background data set; larger data sets will be down-sampled using
+method :meth:`~shap.kmeans`.
Pass ``None`` to prevent down-sampling the background data set.
"""
super().__init__()
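The one change in this file that is not a backslash removal is the missing leading colon on the ``class`` role in the ``KernelExplainerFactory`` docstring. In reST, a role needs colons on both sides of its name: without the leading colon, ``class:`` is just literal text followed by default-role interpreted text, while the corrected form cross-links to the shap API documentation. A sketch of the two renderings:

```rst
.. Broken: "class:" renders literally; the backticked text falls back to
   the default role instead of becoming a cross-reference.

A factory constructing class:`~shap.KernelExplainer` objects.

.. Fixed: the leading colon makes ``:class:`` a Sphinx cross-reference role.

A factory constructing :class:`~shap.KernelExplainer` objects.
```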