
a bit cleaner definition of test config spaces #60

Merged

Conversation


@motus motus commented Aug 20, 2024

No description provided.

@@ -73,7 +73,7 @@ def test_mock_optimization_loop(mock_env_no_noise: MockEnv, mock_opt: MockOptimi
     "vmSize": "Standard_B2ms",
     "idle": "halt",
     "kernel_sched_migration_cost_ns": 117026,
-    "kernel_sched_latency_ns": 149827700,
+    "kernel_sched_latency_ns": 149827706,
Owner
These really should be getting returned on the quantization boundary, no?

Author

@motus motus Aug 21, 2024

The proper quantization boundary here should be something like 100000000.

The problem is that neither MockOptimizer nor FLAML respects quantization (or distributions, for that matter). MockOptimizer just samples uniformly from the Tunables' range and does not use ConfigSpace at all; for FLAML, we have a very limited conversion from ConfigSpace to FLAML hyperparameters that simply ignores all of that info (and I don't know whether FLAML has any internal support for quantization, etc.).
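For illustration, here is a minimal Python sketch (hypothetical helper names, not actual mlos_bench code) of what a quantization-aware version of that uniform sampling could look like: draw uniformly from the range as MockOptimizer does, then snap the result to the nearest multiple of the quantization step.

```python
import random


def quantize(value: float, lower: int, upper: int, q: int) -> int:
    """Snap a value to the nearest multiple of the quantization step q,
    clipped to the [lower, upper] range."""
    snapped = round(value / q) * q
    return int(min(max(snapped, lower), upper))


def sample_quantized(lower: int, upper: int, q: int) -> int:
    """Uniform draw from the range that, unlike MockOptimizer's raw
    sampling, lands on a quantization boundary."""
    return quantize(random.uniform(lower, upper), lower, upper, q)
```

With q = 100000000, the value 149827706 from the diff above would snap to 100000000, i.e., to the quantization boundary the comment expects.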

I still have some faint hope for SMAC3, but I cannot figure out why it does not call ConfigSpace.sample_configuration() as I think it should. Maybe it also taps directly into the hyperparameters' ranges, etc., and uses ConfigSpace only to validate the proposed configurations.

Owner

Yeah, I see now. We might need to implement quantization in the mlos_bench Tunables layer, but that might interact poorly with the underlying optimizer (e.g., registering a quantized version of the values it actually suggested).
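A rough sketch of that idea (hypothetical names, not the actual mlos_bench API): a thin wrapper that snaps the underlying optimizer's suggestions to the quantized grid and registers the quantized values back. The comment in register() marks the interaction risk mentioned above.

```python
def snap_to_boundary(value: float, lower: int, upper: int, q: int) -> int:
    """Round a suggested value to the nearest quantization boundary,
    clipped to the [lower, upper] range."""
    return int(min(max(round(value / q) * q, lower), upper))


class QuantizingOptimizerWrapper:
    """Wraps an optimizer that ignores quantization (e.g., FLAML or
    MockOptimizer) and quantizes its suggestions at the Tunables layer."""

    def __init__(self, optimizer, lower: int, upper: int, q: int):
        self._opt = optimizer
        self._bounds = (lower, upper, q)

    def suggest(self) -> int:
        raw = self._opt.suggest()
        return snap_to_boundary(raw, *self._bounds)

    def register(self, value: int, score: float) -> None:
        # Register the quantized value, not the raw suggestion: the
        # optimizer's internal model then sees configurations that differ
        # from what it proposed, which is the potential interaction problem.
        self._opt.register(snap_to_boundary(value, *self._bounds), score)
```

Whether the underlying optimizer tolerates having a different value registered than it suggested would need to be checked per backend.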

@bpkroth bpkroth merged commit c2d3128 into bpkroth:quantization-hack-cleanup Aug 21, 2024
@motus motus deleted the quantization-hack-cleanup branch August 21, 2024 23:03