Docs cleanup #127 (Draft)
Wants to merge 3 commits into base: master
10 changes: 0 additions & 10 deletions docs/doc_yamls/run_pipeline_architecture.py
@@ -9,16 +9,6 @@ def example_pipeline(architecture, optimizer, learning_rate):

# E.g., in shape = (N, 3, 32, 32) => out shape = (N, 10)
model = architecture.to_pytorch()
- model = nn.Sequential(
-     nn.Conv2d(in_channels, base_channels, 3, padding=1, bias=False),
-     nn.BatchNorm2d(base_channels),
-     model,
-     nn.BatchNorm2d(base_channels * out_channels_factor),
-     nn.ReLU(inplace=True),
-     nn.AdaptiveAvgPool2d(1),
-     nn.Flatten(),
-     nn.Linear(base_channels * out_channels_factor, n_classes),
- )
training_loss = train_model(model, optimizer, learning_rate)
evaluation_loss = evaluate_model(model)
return {"loss": evaluation_loss, "training_loss": training_loss}
2 changes: 1 addition & 1 deletion docs/reference/analyse.md
@@ -84,7 +84,7 @@ The `run_status.csv` provides general run details, such as the number of sampled
## TensorBoard Integration
[TensorBoard](https://www.tensorflow.org/tensorboard) serves as a valuable tool for visualizing machine learning experiments,
offering the ability to observe losses and metrics throughout the model training process.
- In NePS, we use this to show metrics of configurations during training in addition to comparisons to different hyperparameters used in the search for better diagnosis of the model.
+ In NePS, we use this to show metrics of configurations during training in addition to comparisons of different hyperparameters used in the search for better diagnosis of the model.

### Logging Things

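As a generic illustration of the logging described above (plain `torch.utils.tensorboard`, not necessarily the NePS-specific helpers this page goes on to document), tracking a per-epoch training loss in TensorBoard looks roughly like this; the log directory is hypothetical.

```python
# Generic TensorBoard logging sketch; not tied to the NePS logging API.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="results/tensorboard/config_1")  # hypothetical path
for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)  # placeholder for a real training loss
    writer.add_scalar("Loss/train", train_loss, global_step=epoch)
writer.close()
```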
46 changes: 1 addition & 45 deletions docs/reference/declarative_usage.md
@@ -44,12 +44,7 @@ and its location, enabling more flexible project structures.
```python
--8<-- "docs/doc_yamls/run_pipeline_extended.py"
```
=== "run_neps.py"
```python
import neps
# No need to define run_pipeline here. NePS loads it directly from the specified path.
neps.run(run_args="path/to/your/config.yaml")
```


#### Comprehensive YAML Configuration Template
This example showcases a more comprehensive YAML configuration, which includes not only the essential parameters
@@ -62,12 +57,6 @@ but also advanced settings for more complex setups.
```python
--8<-- "docs/doc_yamls/run_pipeline_extended.py"
```
=== "run_neps.py"
```python
import neps
# Executes the configuration specified in your YAML file
neps.run(run_args="path/to/your/config.yaml")
```

The `searcher` key used in the YAML configuration corresponds to the same keys used for selecting an optimizer directly
through `neps.run`. For a detailed list of integrated optimizers, see [here](optimizers.md#list-available-searchers)
@@ -93,11 +82,6 @@ Customize an internal NePS optimizer by specifying its parameters directly under
```python
--8<-- "docs/doc_yamls/run_pipeline.py"
```
=== "run_neps.py"
```python
import neps
neps.run(run_args="path/to/your/config.yaml")
```

For detailed information about the available optimizers and their parameters, please visit the [optimizer page](optimizers.md#list-available-searching-algorithms)

@@ -116,11 +100,6 @@ Simplify experiments with multiple optimizer settings by outsourcing the optimiz
```python
--8<-- "docs/doc_yamls/run_pipeline.py"
```
=== "run_neps.py"
```python
import neps
neps.run(run_args="path/to/your/config.yaml")
```

### Handling Large Search Spaces
Manage large search spaces by outsourcing the pipeline space configuration in a separate YAML file or for keeping track
@@ -138,12 +117,6 @@ of your experiments.
```python
--8<-- "docs/doc_yamls/run_pipeline_big_search_space.py"
```
=== "run_neps.py"
```python
import neps
neps.run(run_args="path/to/your/config.yaml")
```


### Using Architecture Search Spaces
Since the option for defining the search space via YAML is limited to HPO, grammar-based search spaces or architecture
@@ -161,11 +134,6 @@ search spaces must be loaded via a dictionary, which is then referenced in the `
```python
--8<-- "docs/doc_yamls/run_pipeline_architecture.py"
```
=== "run_neps.py"
```python
import neps
neps.run(run_args="path/to/your/config.yaml")
```


### Integrating Custom Optimizers
@@ -181,13 +149,6 @@ Note: You can still overwrite arguments via searcher_kwargs of `neps.run` like f
```python
--8<-- "docs/doc_yamls/run_pipeline.py"
```
=== "run_neps.py"
```python
import neps
neps.run(run_args="path/to/your/config.yaml")
```



### Adding Custom Hooks to Your Configuration
Define hooks in your YAML configuration to extend the functionality of your experiment.
@@ -199,8 +160,3 @@ Define hooks in your YAML configuration to extend the functionality of your expe
```python
--8<-- "docs/doc_yamls/run_pipeline_extended.py"
```
=== "run_neps.py"
```python
import neps
neps.run(run_args="path/to/your/config.yaml")
```
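Every "run_neps.py" tab removed above repeated the same driver script; for reference, the YAML-driven entry point they showed was:

```python
import neps

# NePS reads run_pipeline, pipeline_space, searcher, etc. from the referenced YAML file.
neps.run(run_args="path/to/your/config.yaml")
```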
14 changes: 7 additions & 7 deletions docs/reference/pipeline_space.md
@@ -54,11 +54,11 @@ neps.run(.., pipeline_space=pipeline_space)


## Using your knowledge, providing a Prior
- When optimizing, you can provide your own knowledge using the parameters `default=`.
- By indicating a `default=` we take this to be your user prior,
+ When optimizing, you can provide your own knowledge using the parameter `default`.
+ By indicating a `default` we take this to be your user prior,
**your knowledge about where a good value for this parameter lies**.

- You can also specify a `default_confidence=` to indicate how strongly you want NePS,
+ You can also specify a `default_confidence` to indicate how strongly you want NePS,
to focus on these, one of either `"low"`, `"medium"`, or `"high"`.

Currently the two major algorithms that exploit this in NePS are `PriorBand`
Expand All @@ -77,14 +77,14 @@ neps.run(
}
)
```
!!! warning "Must set `default=` for all parameters, if any"
!!! warning "Must set `default` for all parameters, if any"

- If you specify `default=` for one parameter, you must do so for all your variables.
+ If you specify `default` for one parameter, you must do so for all your variables.
This will be improved in future versions.

!!! warning "Interaction with `is_fidelity`"

- If you specify `is_fidelity=True` for one parameter, the `default=` and `default_confidence=` are ignored.
+ If you specify `is_fidelity=True` for one parameter, the `default` and `default_confidence` are ignored.
This will be dissallowed in future versions.

## Defining a pipeline space using YAML
@@ -131,7 +131,7 @@ If none of these hold, an error will be raised.

## Using ConfigSpace

- For users familiar with the [`ConfigSpace`](https://automl.github.io/ConfigSpace/main/) library,
+ For users familiar with the [`ConfigSpace`](https://automl.github.io/ConfigSpace/latest/) library,
can also define the `pipeline_space` through `ConfigurationSpace()`

```python
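To tie together the prior-related arguments touched in this file (`default`, `default_confidence`, `is_fidelity`), here is a hedged sketch of a search space that uses them; the parameter class names (`neps.FloatParameter`, `neps.CategoricalParameter`, `neps.IntegerParameter`) and exact signatures are assumptions about this version of NePS, not taken from the diff.

```python
import neps

# Hypothetical search space; class names and signatures are assumed.
pipeline_space = {
    "learning_rate": neps.FloatParameter(
        lower=1e-5,
        upper=1e-1,
        log=True,
        default=1e-3,               # user prior: where a good value likely lies
        default_confidence="high",  # one of "low", "medium", "high"
    ),
    "optimizer": neps.CategoricalParameter(
        choices=["adam", "sgd"],
        default="adam",
        default_confidence="medium",
    ),
    # For a fidelity parameter, default/default_confidence are ignored (see the warning above).
    "epochs": neps.IntegerParameter(lower=1, upper=50, is_fidelity=True),
}

# neps.run(run_pipeline=my_pipeline, pipeline_space=pipeline_space, ...)  # my_pipeline is hypothetical
```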
2 changes: 1 addition & 1 deletion neps/search_spaces/hyperparameters/categorical.py
@@ -72,7 +72,7 @@ def __init__(
default: default value for the hyperparameter, must be in `choices=`
if provided.
default_confidence: confidence score for the default value, used when
- condsider prior based optimization.
+ considering prior based optimization.
"""
choices = list(choices)
if len(choices) <= 1:
4 changes: 2 additions & 2 deletions neps/search_spaces/search_space.py
@@ -272,7 +272,7 @@ def has_fidelity(self) -> bool:
def compute_prior(self, *, log: bool = False, ignore_fidelity: bool = False) -> float:
"""Compute the prior probability of the search space.

- This is better know as the `pdf` of the configuraiton in the search space, or a
+ This is better know as the `pdf` of the configuration in the search space, or a
relative measure of how likely this configuration is under the search space.

Args:
Expand Down Expand Up @@ -480,7 +480,7 @@ def get_normalized_hp_categories(
*,
ignore_fidelity: bool = False,
) -> dict[Literal["continuous", "categorical", "graphs"], list[Any]]:
"""Get the normalized values for each hyperparameter in the configuraiton.
"""Get the normalized values for each hyperparameter in the configuration.

Args:
ignore_fidelity: Whether to ignore the fidelity parameter when getting the