Enable context for SMAC Optimizer #741 (Closed)

Changes from all commits (52 commits):
- 4b325b4 progress on multi fidelity SMAC interface (jsfreischuetz)
- a59a499 more ideas (jsfreischuetz)
- 14c80d9 SMAC updated (jsfreischuetz)
- d2b4c9b merge (jsfreischuetz)
- 1d8b455 merge (jsfreischuetz)
- 266d30c Merge branch 'microsoft-main' (jsfreischuetz)
- 7d57cf5 merge (jsfreischuetz)
- 65e2170 fixes (jsfreischuetz)
- c330be9 fix some of the tests (jsfreischuetz)
- 891c755 fix more tests (jsfreischuetz)
- 2693d2d fix docstrings (jsfreischuetz)
- ba7b975 finish mlos_core tests (jsfreischuetz)
- 708e9a3 merge (jsfreischuetz)
- 78fc91e fix linter and loging error (jsfreischuetz)
- 79b9ed0 Merge branch 'microsoft:main' into main (jsfreischuetz)
- d2bc22c add support for python3 3.8 (jsfreischuetz)
- 65dc1b7 Merge branch 'main' of github.com:jsfreischuetz/MLOS (jsfreischuetz)
- c92924c merge (jsfreischuetz)
- f48f65f mergE (jsfreischuetz)
- 5469503 typing (jsfreischuetz)
- e3c238b fix tests (jsfreischuetz)
- 4bc55ee fix tests (jsfreischuetz)
- 311175a fix typing for 3.8 (jsfreischuetz)
- 9899089 fix another typing issue (jsfreischuetz)
- ea391c7 another typing issue (jsfreischuetz)
- 17a5965 another type issue (jsfreischuetz)
- d8fe76e import order (jsfreischuetz)
- cd3913f lint checker (jsfreischuetz)
- a5f3ce7 reorder imports (jsfreischuetz)
- 546cb35 fix more linting messages (jsfreischuetz)
- 17ae308 linter errors (jsfreischuetz)
- 12d725c linter errors (jsfreischuetz)
- ee99fd5 more linting (jsfreischuetz)
- 57103f5 oops typo (jsfreischuetz)
- 3d57a10 merge (jsfreischuetz)
- 02b8b9d more linting (jsfreischuetz)
- 465dc69 auto format changes (jsfreischuetz)
- 0d48d5e merge (jsfreischuetz)
- c19d4c7 fix some tests (jsfreischuetz)
- 3841b42 tests working (jsfreischuetz)
- 23b1a83 fix for multi fidelity (jsfreischuetz)
- d047836 hopefully fix all linting issues (jsfreischuetz)
- 96198d9 Merge branch 'microsoft:main' into main (jsfreischuetz)
- 77b46ee some more changes (jsfreischuetz)
- f3773ea Merge branch 'main' of github.com:jsfreischuetz/MLOS (jsfreischuetz)
- f45279b fix linter (jsfreischuetz)
- 4f3101d new typing (jsfreischuetz)
- 6725d74 mypy fixups (bpkroth)
- 9d5f973 Merge pull request #4 from bpkroth/jsfreischuetz-main (jsfreischuetz)
- dab5c67 revert some changes (jsfreischuetz)
- f70eacf Merge branch 'main' of github.com:jsfreischuetz/MLOS (jsfreischuetz)
- 8faf321 fix launch.json being ignored (jsfreischuetz)
@@ -9,25 +9,23 @@
 import logging
 from abc import ABCMeta, abstractmethod
-from distutils.util import strtobool  # pylint: disable=deprecated-module
-
+from distutils.util import strtobool  # pylint: disable=deprecated-module
 from types import TracebackType
 from typing import Dict, Optional, Sequence, Tuple, Type, Union
-from typing_extensions import Literal
 
 from ConfigSpace import ConfigurationSpace
+from typing_extensions import Literal
 
 from mlos_bench.config.schemas import ConfigSchema
-from mlos_bench.services.base_service import Service
 from mlos_bench.environments.status import Status
+from mlos_bench.optimizers.convert_configspace import tunable_groups_to_configspace
+from mlos_bench.services.base_service import Service
 from mlos_bench.tunables.tunable import TunableValue
 from mlos_bench.tunables.tunable_groups import TunableGroups
-from mlos_bench.optimizers.convert_configspace import tunable_groups_to_configspace
 
 _LOG = logging.getLogger(__name__)
 
 
-class Optimizer(metaclass=ABCMeta):     # pylint: disable=too-many-instance-attributes
+class Optimizer(metaclass=ABCMeta):  # pylint: disable=too-many-instance-attributes
     """
     An abstract interface between the benchmarking framework and mlos_core optimizers.
     """
@@ -40,11 +38,13 @@ class Optimizer(metaclass=ABCMeta):  # pylint: disable=too-many-instance-attr
         "start_with_defaults",
     }
 
-    def __init__(self,
-                 tunables: TunableGroups,
-                 config: dict,
-                 global_config: Optional[dict] = None,
-                 service: Optional[Service] = None):
+    def __init__(
+        self,
+        tunables: TunableGroups,
+        config: dict,
+        global_config: Optional[dict] = None,
+        service: Optional[Service] = None,
+    ):
         """
         Create a new optimizer for the given configuration space defined by the tunables.
@@ -68,25 +68,30 @@ def __init__(self,
         self._seed = int(config.get("seed", 42))
         self._in_context = False
 
-        experiment_id = self._global_config.get('experiment_id')
+        experiment_id = self._global_config.get("experiment_id")
         self.experiment_id = str(experiment_id).strip() if experiment_id else None
 
         self._iter = 0
         # If False, use the optimizer to suggest the initial configuration;
         # if True (default), use the already initialized values for the first iteration.
         self._start_with_defaults: bool = bool(
-            strtobool(str(self._config.pop('start_with_defaults', True))))
-        self._max_iter = int(self._config.pop('max_suggestions', 100))
+            strtobool(str(self._config.pop("start_with_defaults", True)))
+        )
+        self._max_iter = int(self._config.pop("max_suggestions", 100))
 
-        opt_targets: Dict[str, str] = self._config.pop('optimization_targets', {'score': 'min'})
+        opt_targets: Dict[str, str] = self._config.pop(
+            "optimization_targets", {"score": "min"}
+        )
         self._opt_targets: Dict[str, Literal[1, -1]] = {}
-        for (opt_target, opt_dir) in opt_targets.items():
+        for opt_target, opt_dir in opt_targets.items():
             if opt_dir == "min":
                 self._opt_targets[opt_target] = 1
             elif opt_dir == "max":
                 self._opt_targets[opt_target] = -1
             else:
-                raise ValueError(f"Invalid optimization direction: {opt_dir} for {opt_target}")
+                raise ValueError(
+                    f"Invalid optimization direction: {opt_dir} for {opt_target}"
+                )
 
     def _validate_json_config(self, config: dict) -> None:
         """
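The direction-parsing loop in the hunk above maps each optimization target to a sign multiplier so that every target can be minimized internally. A minimal standalone sketch of that logic (the function name `parse_opt_targets` is illustrative, not part of mlos_bench):

```python
from typing import Dict


def parse_opt_targets(opt_targets: Dict[str, str]) -> Dict[str, int]:
    """Map {target: "min"/"max"} to {target: +1/-1} sign multipliers.

    All targets are minimized internally, so "max" targets get a -1
    multiplier that flips their sign before scores reach the optimizer.
    """
    signs: Dict[str, int] = {}
    for opt_target, opt_dir in opt_targets.items():
        if opt_dir == "min":
            signs[opt_target] = 1
        elif opt_dir == "max":
            signs[opt_target] = -1
        else:
            raise ValueError(f"Invalid optimization direction: {opt_dir} for {opt_target}")
    return signs


print(parse_opt_targets({"latency": "min", "throughput": "max"}))
# → {'latency': 1, 'throughput': -1}
```

Rejecting anything other than `"min"`/`"max"` up front keeps the sign convention from silently drifting later in the pipeline.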
@@ -108,7 +113,7 @@ def __repr__(self) -> str:
         )
         return f"{self.name}({opt_targets},config={self._config})"
 
-    def __enter__(self) -> 'Optimizer':
+    def __enter__(self) -> "Optimizer":
         """
         Enter the optimizer's context.
         """

Review comment: There's a whole bunch of style-only changes in here that make reviewing a little difficult. Can you please revert those, or separate them out to a different PR?
@@ -117,9 +122,12 @@ def __enter__(self) -> 'Optimizer':
         self._in_context = True
         return self
 
-    def __exit__(self, ex_type: Optional[Type[BaseException]],
-                 ex_val: Optional[BaseException],
-                 ex_tb: Optional[TracebackType]) -> Literal[False]:
+    def __exit__(
+        self,
+        ex_type: Optional[Type[BaseException]],
+        ex_val: Optional[BaseException],
+        ex_tb: Optional[TracebackType],
+    ) -> Literal[False]:
         """
         Exit the context of the optimizer.
         """
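The `__enter__`/`__exit__` pair above is what lets an optimizer be driven from a `with` block, which matters for this PR since SMAC needs setup and teardown around a run. A toy sketch of the same protocol, where `ToyOptimizer` is a purely illustrative stand-in for the real abstract class:

```python
from types import TracebackType
from typing import Optional, Type


class ToyOptimizer:
    """Illustrative stand-in: tracks whether we are inside the context."""

    def __init__(self) -> None:
        self._in_context = False

    def __enter__(self) -> "ToyOptimizer":
        # The real class similarly guards against re-entering the context.
        assert not self._in_context, "Optimizer context is not reentrant"
        self._in_context = True
        return self

    def __exit__(self,
                 ex_type: Optional[Type[BaseException]],
                 ex_val: Optional[BaseException],
                 ex_tb: Optional[TracebackType]) -> bool:
        self._in_context = False
        return False  # False: do not suppress exceptions raised in the with-block


opt = ToyOptimizer()
with opt:
    print(opt._in_context)  # → True
print(opt._in_context)      # → False
```

Returning `False` from `__exit__` (typed `Literal[False]` in the diff) tells Python to re-raise any exception from the block instead of swallowing it.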
@@ -191,7 +199,9 @@ def config_space(self) -> ConfigurationSpace:
             The ConfigSpace representation of the tunable parameters.
         """
         if self._config_space is None:
-            self._config_space = tunable_groups_to_configspace(self._tunables, self._seed)
+            self._config_space = tunable_groups_to_configspace(
+                self._tunables, self._seed
+            )
             _LOG.debug("ConfigSpace: %s", self._config_space)
         return self._config_space
@@ -204,7 +214,7 @@ def name(self) -> str:
         return self.__class__.__name__
 
     @property
-    def targets(self) -> Dict[str, Literal['min', 'max']]:
+    def targets(self) -> Dict[str, Literal["min", "max"]]:
         """
         A dictionary of {target: direction} of optimization targets.
         """
@@ -221,10 +231,12 @@ def supports_preload(self) -> bool:
         return True
 
     @abstractmethod
-    def bulk_register(self,
-                      configs: Sequence[dict],
-                      scores: Sequence[Optional[Dict[str, TunableValue]]],
-                      status: Optional[Sequence[Status]] = None) -> bool:
+    def bulk_register(
+        self,
+        configs: Sequence[dict],
+        scores: Sequence[Optional[Dict[str, TunableValue]]],
+        status: Optional[Sequence[Status]] = None,
+    ) -> bool:
         """
         Pre-load the optimizer with the bulk data from previous experiments.
@@ -242,8 +254,12 @@ def bulk_register(self,
         is_not_empty : bool
             True if there is data to register, false otherwise.
         """
-        _LOG.info("Update the optimizer with: %d configs, %d scores, %d status values",
-                  len(configs or []), len(scores or []), len(status or []))
+        _LOG.info(
+            "Update the optimizer with: %d configs, %d scores, %d status values",
+            len(configs or []),
+            len(scores or []),
+            len(status or []),
+        )
         if len(configs or []) != len(scores or []):
             raise ValueError("Numbers of configs and scores do not match.")
         if status is not None and len(configs or []) != len(status or []):
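The length checks in `bulk_register` guard against mismatched experiment histories: configs, scores, and (optionally) status values must be parallel sequences. A standalone sketch of just that validation step (`validate_bulk_data` is a made-up helper name, not an mlos_bench API):

```python
from typing import Optional, Sequence


def validate_bulk_data(configs: Sequence[dict],
                       scores: Sequence[Optional[dict]],
                       status: Optional[Sequence[str]] = None) -> bool:
    """Raise if the parallel sequences disagree; return True when there is data.

    `x or []` tolerates None being passed for any of the sequences.
    """
    if len(configs or []) != len(scores or []):
        raise ValueError("Numbers of configs and scores do not match.")
    if status is not None and len(configs or []) != len(status or []):
        raise ValueError("Numbers of configs and status values do not match.")
    return bool(configs)


print(validate_bulk_data([{"vm_size": "large"}], [{"score": 1.2}]))  # → True
print(validate_bulk_data([], []))                                    # → False
```

The boolean return mirrors the documented `is_not_empty` contract: callers can skip warm-starting the optimizer when there is nothing to register.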
@@ -272,8 +288,12 @@ def suggest(self) -> TunableGroups:
         return self._tunables.copy()
 
     @abstractmethod
-    def register(self, tunables: TunableGroups, status: Status,
-                 score: Optional[Dict[str, TunableValue]] = None) -> Optional[Dict[str, float]]:
+    def register(
+        self,
+        tunables: TunableGroups,
+        status: Status,
+        score: Optional[Dict[str, TunableValue]] = None,
+    ) -> Optional[Dict[str, float]]:
         """
         Register the observation for the given configuration.
@@ -294,15 +314,22 @@ def register(self, tunables: TunableGroups, status: Status,
             Benchmark scores extracted (and possibly transformed)
             from the dataframe that's being MINIMIZED.
         """
-        _LOG.info("Iteration %d :: Register: %s = %s score: %s",
-                  self._iter, tunables, status, score)
+        _LOG.info(
+            "Iteration %d :: Register: %s = %s score: %s",
+            self._iter,
+            tunables,
+            status,
+            score,
+        )
         if status.is_succeeded() == (score is None):  # XOR
             raise ValueError("Status and score must be consistent.")
         return self._get_scores(status, score)
 
-    def _get_scores(self, status: Status,
-                    scores: Optional[Union[Dict[str, TunableValue], Dict[str, float]]]
-                    ) -> Optional[Dict[str, float]]:
+    def _get_scores(
+        self,
+        status: Status,
+        scores: Optional[Union[Dict[str, TunableValue], Dict[str, float]]],
+    ) -> Optional[Dict[str, float]]:
         """
         Extract a scalar benchmark score from the dataframe.
         Change the sign if we are maximizing.
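The `status.is_succeeded() == (score is None)` comparison above is an XOR-style consistency check: exactly one of "trial succeeded" and "score is missing" must hold, so a succeeded trial without a score (or a failed trial with one) is rejected. A tiny sketch of that invariant (`check_register_args` is an illustrative name):

```python
from typing import Dict, Optional


def check_register_args(is_succeeded: bool, score: Optional[Dict[str, float]]) -> None:
    """A succeeded trial must carry a score; a failed one must not."""
    # Equality of the two booleans acts as XOR: raise when they agree.
    if is_succeeded == (score is None):
        raise ValueError("Status and score must be consistent.")


check_register_args(True, {"score": 0.9})  # ok: success with a score
check_register_args(False, None)           # ok: failure without a score
```

Either mismatch raises, which catches bookkeeping bugs in the benchmarking loop before bad data reaches the optimizer.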
@@ -329,7 +356,7 @@ def _get_scores(self, status: Status,
         assert scores is not None
         target_metrics: Dict[str, float] = {}
-        for (opt_target, opt_dir) in self._opt_targets.items():
+        for opt_target, opt_dir in self._opt_targets.items():
             val = scores[opt_target]
             assert val is not None
             target_metrics[opt_target] = float(val) * opt_dir
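`_get_scores` multiplies each raw score by its direction sign so that the optimizer always minimizes, regardless of whether the user asked for `"min"` or `"max"`. A minimal sketch of that transformation (names are illustrative):

```python
from typing import Dict


def apply_direction(signs: Dict[str, int], scores: Dict[str, float]) -> Dict[str, float]:
    """Flip the sign of 'max' targets (sign == -1) so everything is minimized."""
    return {target: float(scores[target]) * sign for target, sign in signs.items()}


print(apply_direction({"latency": 1, "throughput": -1},
                      {"latency": 3.5, "throughput": 120.0}))
# → {'latency': 3.5, 'throughput': -120.0}
```

Minimizing `-throughput` is equivalent to maximizing `throughput`, which is why a single sign multiplier per target suffices.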
@@ -344,7 +371,9 @@ def not_converged(self) -> bool:
         return self._iter < self._max_iter
 
     @abstractmethod
-    def get_best_observation(self) -> Union[Tuple[Dict[str, float], TunableGroups], Tuple[None, None]]:
+    def get_best_observation(
+        self,
+    ) -> Union[Tuple[Dict[str, float], TunableGroups], Tuple[None, None]]:
         """
         Get the best observation so far.
Review comment: here and everywhere: let's separate cosmetic fixes from functional changes. Mixing the two bloats up the diff and makes reviewing PRs much harder. Please keep the essential functionality in this PR, make the diff as small as possible, and move all code annotations, style, and docstring improvements to a separate PR. Having more PRs is good for your github karma! 😄
Reply: Sounds good! #744 should do just the style changes.
Reply: Yeah, @motus @jsfreischuetz and I already talked about that. #744 is the start of that, but I think it will also need some additional work. #746 unearthed some related topics around `pyproject.toml` settings vs. `setup.cfg` for using `black`, and how `setup` and `build` type dependencies are specified (right now we require `conda` for certain ones instead of `pip`, which, as we saw, has some issues). Am currently working on improvements for all of those and will send a split-out series of PRs soon for:
- `black` and `isort`
- `Makefile` rules, `vscode` settings, configs, etc.
- a `.gitrevisions` list to ignore it from most `git blame` types of analysis