
Integration of DeepCAVE in the AutoML Benchmark of Gijsbers et al. #201

Open
annawiewer opened this issue Dec 15, 2024 · 2 comments
Hello, :)

I am currently working on integrating DeepCAVE into the AutoML Benchmark developed by Gijsbers et al. (2024). From my understanding, I need to modify and extend the record class to achieve this. However, I would greatly appreciate any guidance or suggestions on where exactly to integrate these changes within the AutoML Benchmark repository (https://github.com/openml/automlbenchmark).

Thank you very much for your help!

Best regards,
Anna

mwever (Collaborator) commented Dec 16, 2024

Hello Anna,

The first question is why you would want to integrate DeepCAVE with the AutoML Benchmark in the first place. The AutoML Benchmark by Gijsbers et al. is a software framework for running different AutoML systems on a predefined benchmark suite. It has no common notion or data structure for representing a search space, nor for the run history observed over time.
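Since the AutoML Benchmark defines no such structure, any integration would first have to introduce one. The following is only a sketch of what a minimal, framework-agnostic run-history record might look like; none of these names (`TrialRecord`, `RunHistory`) exist in either codebase, they are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TrialRecord:
    """One evaluated configuration (hypothetical, framework-agnostic)."""
    config: dict        # hyperparameter name -> value
    cost: float         # validation loss, lower is better
    start_time: float   # seconds since the start of the optimization
    end_time: float     # seconds since the start of the optimization

@dataclass
class RunHistory:
    """Ordered log of all trials seen during one AutoML run."""
    trials: list = field(default_factory=list)

    def add(self, trial: TrialRecord) -> None:
        self.trials.append(trial)

    def incumbent(self) -> TrialRecord:
        # best (lowest-cost) trial observed so far
        return min(self.trials, key=lambda t: t.cost)
```

Each AutoML framework would then need its own adapter that fills such records from its internal optimizer state, which is exactly the part the benchmark does not standardize today.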

mwever self-assigned this Dec 16, 2024
annawiewer (Author) commented Dec 16, 2024

Thank you for your answer.

My aim is not to compare different AutoML frameworks but to analyze the behavior of a single framework under varying time constraints, since the time budget is a central setting in the AutoML Benchmark by Gijsbers et al.

I want to integrate DeepCAVE to study how the search space is explored over time and how the cost (or loss) evolves as the optimization progresses, as illustrated in the attached screenshot (Lindauer et al., 2019). My goals are:

  • Analyze the optimization trajectory for one framework.
  • Evaluate how much additional runtime impacts performance improvements.
  • Identify under-explored regions of the search space.

This analysis is particularly useful for tasks like:

  • Determining the ideal runtime budget for specific datasets (in my case, financial datasets rather than a predefined suite).
  • Understanding whether the optimization is stagnating or still yielding gains with more time.
[Attached screenshot: cost-over-time plot, from Lindauer et al., 2019]
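The stagnation question above can be made concrete once a run history with timestamps and costs is available. A small illustrative sketch (these helper names are my own, not part of DeepCAVE or the AutoML Benchmark): build the incumbent trajectory, then check how much the incumbent still improved during the last fraction of the time budget.

```python
def incumbent_trajectory(costs, times):
    """Running best (incumbent) cost at each evaluation time.

    costs, times: parallel lists, times sorted ascending.
    Returns a list of (time, best_cost_so_far) pairs.
    """
    best = float("inf")
    traj = []
    for t, c in zip(times, costs):
        best = min(best, c)
        traj.append((t, best))
    return traj

def is_stagnating(traj, total_budget, tail_frac=0.25, min_gain=1e-3):
    """True if the incumbent improved by less than min_gain
    during the last tail_frac of the time budget."""
    cutoff = total_budget * (1 - tail_frac)
    before = [c for t, c in traj if t <= cutoff]
    if not before:
        return False  # all evaluations fall in the tail; cannot judge
    return before[-1] - traj[-1][1] < min_gain
```

For example, a run whose incumbent drops from 0.5 to 0.39 in the final quarter of the budget is still yielding gains, while a flat trajectory in that window suggests the budget could be cut.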
