Specify non-default artifact root for experiment #280
Comments
Hi, sorry I don't have much time right now, but I think this answer solves your problem. The key idea is that the configuration lives on the server side, so you only have to specify the tracking uri at the project level and it works out of the box. Feel free to ask if you need more help.
Hi @sergun, is your problem solved? Can I close the issue?
I'm closing this issue; feel free to reopen if needed.
I am facing a problem very similar to the one reported above, and it is not really clear whether the OP succeeded in solving it. After I start the mlflow server with […], any runs for the […]. However, if I manually set a different, non-existing experiment name in my […]. In summary: it looks like experiments created by […]. Perhaps this is intended behavior, or I am missing some step in my configuration. Any help is greatly appreciated!
Hi, sorry to hear that you are experiencing issues with the plugin. Can you confirm that the key […]?
Hi, I think everything else is set to the default value, except for […]. Also, some belated info: I am using version […].
So you must specify the tracking server instead of the null value in the line below 👇
Otherwise, kedro-mlflow will use the MLFLOW_TRACKING_URI environment variable if it finds one, or fall back to the mlruns folder if it is not specified.
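A minimal sketch of the `mlflow.yml` setting being described, assuming a local tracking server on port 5000 (the address is hypothetical):

```yaml
server:
  mlflow_tracking_uri: http://127.0.0.1:5000  # replace the default null with your server's uri
```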
This might just be a conceptual misunderstanding on my part about how […].
I don't know exactly how you run kedro, but if I understand correctly, you are running locally:

```
mlflow server --default-artifact-root /path/to/artifacts/anywhere
```

When I run it locally, I get the following:

```
<lots of warnings>
INFO:waitress: Serving on http://127.0.0.1:5000
```

Then you open a new shell and activate your virtual environment:

```
conda activate myenv
kedro run --pipeline my_pipeline
```

I don't understand how this shell can be aware of the server you started if you do not specify its uri. With this setup, everything will be logged in the […]. Try setting the tracking uri in your mlflow.yml:

```yaml
# mlflow.yml
server:
  mlflow_tracking_uri: http://127.0.0.1:5000 # or whatever it is
```

and now if you launch […]. Am I missing something here?
You're right, that was a huge misunderstanding on my part. Everything indeed works correctly after specifying the tracking server. Thank you very much for your fast and clarifying answers.
Description
I did not find any way to set the artifact root for the experiment used by kedro-mlflow, instead of the default one defined by --default-artifact-root on the mlflow server.
Context
I use mlflow server with --default-artifact-root pointing to the local file system, but I want to store the artifacts of the experiment used by kedro-mlflow in S3.
Possible Implementation
Add a corresponding parameter to the experiment key in mlflow.yml.
Possible Alternatives
Any quick workarounds are also appreciated, such as setting environment variables, but I did not find any myself.
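For the environment-variable route, a minimal sketch, assuming a reachable tracking server (the address below is hypothetical): when `mlflow.yml` leaves the tracking uri null, kedro-mlflow falls back to `MLFLOW_TRACKING_URI`.

```shell
# Point the kedro run's shell at the server via the environment
# instead of mlflow.yml (server address is hypothetical).
export MLFLOW_TRACKING_URI=http://127.0.0.1:5000
echo "$MLFLOW_TRACKING_URI"
```

Any `kedro run` launched from this shell would then pick up the server address from the environment.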