As a side note to your feature idea: I have no doubt you know Mihaela van der Schaar from the van der Schaar Lab. Very recently, she shared a post on LinkedIn about a new LLM developed in their lab. This LLM is designed for AutoML health projects, allowing users (in their case, health practitioners) to apply AutoML to their data without writing a main.py script. The model intelligently sets up all the necessary parameters based on the user's request (data, metric, etc.) and much more!
Given the extensive resources available @ OpenML, including a wide variety of datasets and metrics, a similar methodology is an intriguing proposition. Using an LLM in conjunction with GAMA as the primary AutoML engine could be a significant advancement. The crux of this approach would be seeing whether such a system could effectively generate a main.py script tailored to a user's specific needs. This would entail integrating user-supplied data, which could come from OpenML's own datasets or from elsewhere, along with preferred metrics and other critical parameters. Combining these elements with the capabilities of an LLM and GAMA might not only streamline the process but also significantly simplify project setup for anyone using OpenML resources. I believe this concept holds great promise for improving the utility and efficiency of OpenML-related tools!
This might not be directly relevant given that this issue dates back to 2019, but I thought I'd share it to show that this has already been done elsewhere and could be useful for your work @ OpenML. Hope this is helpful!
One of the main benefits of the issue as I intended it was also being able to upload information about the internal optimization GAMA performs to OpenML. As such, this discussion is largely unrelated to the topic.
That said, for the kind of system you propose I would prefer a separate package, I think. It would be much easier to manage and wouldn't necessarily need tight integration with GAMA's code base (it just needs to understand the public interface).
Automatically run GAMA on OpenML tasks, by adding an optional dependency on the openml API. Specifics need to be decided on, e.g.:
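As a rough illustration (not a decision on any of those specifics), here is a minimal sketch of what running GAMA on an OpenML task could look like, assuming the openml Python package and GAMA's scikit-learn-style GamaClassifier interface; the task ID and time budget below are purely illustrative, and uploading results back to OpenML is not shown.

```python
# Minimal sketch (not part of GAMA): fetch an OpenML task and run GAMA on it.
# Assumes the `openml` package and GAMA's scikit-learn-style GamaClassifier.
import openml
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from gama import GamaClassifier

# Task 31 (credit-g) is only an example; which tasks/types to support is one
# of the specifics this issue would need to decide.
task = openml.tasks.get_task(31)
dataset = task.get_dataset()
X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Hypothetical time budget (seconds); sensible defaults are another open specific.
automl = GamaClassifier(max_total_time=300)
automl.fit(X_train, y_train)

predictions = automl.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```

A separate package (as suggested above) could wrap exactly this kind of glue code, since it only relies on GAMA's public estimator interface and the openml API.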