
Specify non-default artifact root for experiment #280

Closed

sergun opened this issue Feb 2, 2022 · 10 comments

@sergun commented Feb 2, 2022

Description

I did not find any way to set the artifact root for the experiment used by kedro-mlflow instead of the default one defined by --default-artifact-root on the mlflow server.

Context

I use mlflow server with a --default-artifact-root pointing to the file system, but I want to store the artifacts of the experiment used by kedro-mlflow in S3.

Possible Implementation

Add a corresponding parameter to the experiment key in mlflow.yml.

Possible Alternatives

Any quick workarounds are also appreciated, such as setting environment variables, but I could not find any myself.
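One possible workaround (a sketch, not confirmed by the plugin docs; the experiment name and bucket path are hypothetical) would be to pre-create the experiment with an explicit artifact location via the mlflow CLI, then point kedro-mlflow at it by name, since mlflow honors a per-experiment artifact location set at creation time:

```shell
# Requires mlflow installed, the tracking server reachable, and S3 credentials
# configured in the environment. Bucket path is a placeholder.
mlflow experiments create \
  --experiment-name my_kedro_experiment \
  --artifact-location s3://my-bucket/mlflow-artifacts
```

Then set tracking.experiment.name to the same name in mlflow.yml, so kedro-mlflow reuses the existing experiment instead of creating one with the server's default artifact root.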

@Galileo-Galilei (Owner)

Hi, sorry, I don't have much time right now, but I think this answer solves your problem. The key idea is that the configuration is on the server side, so you only have to specify the tracking URI at the project level and it works out of the box.

Feel free to ask if you need more help.

@Galileo-Galilei (Owner)

Hi @sergun, is your problem solved? Can I close the issue?

@Galileo-Galilei (Owner)

I am closing the issue; feel free to reopen if needed.

@yanMHG commented Jan 13, 2024

I am facing a problem very similar to the one reported above, and it is not really clear whether the OP was successful in solving it.

After I start the mlflow server with

mlflow server --default-artifact-root <path-to-azure-blob-storage>

any runs for the Default mlflow experiment (Experiment ID: 0) correctly use the Blob Storage to store their artifacts. This can be done, for example, by manually setting the experiment name to Default in the mlflow.yml file and running one of my pipelines.
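For reference, the setup described in the previous paragraph corresponds to an mlflow.yml excerpt like this (a sketch; only the relevant key is shown):

```yaml
# mlflow.yml (excerpt): pin runs to mlflow's built-in Default experiment (ID 0)
tracking:
  experiment:
    name: Default
```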

However, if I manually set a different, non-existent experiment name in my mlflow.yml file and run a pipeline, its artifacts are not stored in the Blob Storage but rather on my local filesystem (more specifically, in the ./mlruns folder). The same happens for runs under the kedro-mlflow default experiment name (which is the name of the Kedro project, at least in my case, and I believe in general).

In summary: It looks like experiments created by kedro-mlflow ignore the --default-artifact-root setting, while those created directly by mlflow respect it.

Perhaps this is intended behavior or I am missing some step in my configuration. Any help is greatly appreciated!

@Galileo-Galilei (Owner)

Hi, sorry to hear that you experience issues with the plugin. Can you confirm that the key tracking.server.mlflow_tracking_uri is properly set to your mlflow server in your mlflow.yml? Can you share your mlflow.yml so I can try to reproduce the error?

@yanMHG commented Jan 15, 2024

Hi, I think everything else is the default value, except for server.credentials, which is set to the entry containing the access info to the blob storage in my credentials file, and tracking.experiment.name, which is set to test_experiment just for testing purposes.

Also, a belated piece of info: I am using version 0.11.10 of the package.

# SERVER CONFIGURATION -------------------

# `mlflow_tracking_uri` is the path where the runs will be recorded.
# For more information, see https://www.mlflow.org/docs/latest/tracking.html#where-runs-are-recorded
# kedro-mlflow accepts relative path from the project root.
# For instance, default `mlruns` will create a mlruns folder
# at the root of the project

# All credentials needed for mlflow must be stored in credentials .yml as a dict
# they will be exported as environment variable
# If you want to set some credentials,  e.g. AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
# > in `credentials.yml`:
# your_mlflow_credentials:
#   AWS_ACCESS_KEY_ID: 132456
#   AWS_SECRET_ACCESS_KEY: 132456
# > in this file `mlflow.yml`:
# credentials: mlflow_credentials

server:
  mlflow_tracking_uri: null # if null, will use mlflow.get_tracking_uri() as a default
  mlflow_registry_uri: null # if null, mlflow_tracking_uri will be used as mlflow default
  credentials: blob_storage
  request_header_provider: # this is only useful to deal with expiring token, see https://github.com/Galileo-Galilei/kedro-mlflow/issues/357
    type: null # The path to a class : my_project.pipelines.module.MyClass. Should inherit from https://github.com/mlflow/mlflow/blob/master/mlflow/tracking/request_header/abstract_request_header_provider.py#L4
    pass_context: False # should the class be instantiated with "kedro_context" argument?
    init_kwargs: {} # any kwargs to pass to the class when it is instantiated

tracking:
  # You can specify a list of pipeline names for which tracking will be disabled
  # Running "kedro run --pipeline=<pipeline_name>" will not log parameters
  # in a new mlflow run

  disable_tracking:
    pipelines: []

  experiment:
    name: test_experiment
    restore_if_deleted: True  # if the experiment `name` was previously deleted, should we restore it?

  run:
    id: null # if `id` is None, a new run will be created
    name: null # if `name` is None, pipeline name will be used for the run name
    nested: True  # if `nested` is False, you won't be able to launch sub-runs inside your nodes

  params:
    dict_params:
      flatten: False  # if True, parameters which are dictionaries will be split into multiple parameters when logged in mlflow, one for each key.
      recursive: True  # Should the dictionary flattening be applied recursively (i.e. for nested dictionaries)? Not used if `flatten` is False.
      sep: "." # In case of recursive flattening, what separator should be used between the keys? E.g. {hyperparam1: {p1: 1, p2: 2}} will be logged as hyperparam1.p1 and hyperparam1.p2 in mlflow.
    long_params_strategy: fail # One of ["fail", "tag", "truncate"]. If a parameter is above the mlflow limit (currently 250 characters), what should kedro-mlflow do? -> fail, set it as a tag instead of a parameter, or truncate it to its first 250 characters?


# UI-RELATED PARAMETERS -----------------

ui:
  port: "5000" # the port to use for the ui. mlflow's default is 5000.
  host: "127.0.0.1"  # the host to use for the ui. mlflow's default is "127.0.0.1".

@Galileo-Galilei (Owner)

So you must specify the tracking server URI instead of the null value in the line below 👇

server:
  mlflow_tracking_uri: null

Otherwise, kedro-mlflow will use the MLFLOW_TRACKING_URI environment variable if it finds it, or fall back to the mlruns folder if nothing is specified.
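For instance, the environment-variable route would look like this (a sketch; the server address assumes the local default from earlier in the thread):

```shell
# Alternative to setting mlflow_tracking_uri in mlflow.yml:
# point the client at the running server via mlflow's standard env var.
export MLFLOW_TRACKING_URI="http://127.0.0.1:5000"
kedro run
```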

@yanMHG commented Jan 16, 2024

This might just be a conceptual misunderstanding about how mlflow works on my part (and I apologize in advance if this is the case), but shouldn't these settings work anyway? I am fine with server: mlflow_tracking_uri: null setting the tracking server uri to the mlruns folder, as long as the artifacts are stored in the blob storage. Other stuff (parameters, metrics, etc.) can be stored locally in the mlruns folder.

@Galileo-Galilei (Owner) commented Jan 16, 2024

I don't know exactly how you launch your kedro run, but if I understand correctly, you are running locally:

mlflow server --default-artifact-root /path/to/artifacts/anywhere

When I run it locally, I get the following:

<lots of warnings>
INFO:waitress: Serving on http://127.0.0.1:5000

Then you open a new shell and activate your virtual environment:

conda activate myenv
kedro run --pipeline my_pipeline

I don't understand how this shell could be aware of the server you started if you do not specify its URI. With this setup, everything will be logged in the mlruns folder, which is the default. You need to add:

#mlflow.yml
server:
  mlflow_tracking_uri: http://127.0.0.1:5000 # or whatever it is

and now if you launch kedro run, it will automatically track everything on the server you set up.

Am I missing something here?

@yanMHG commented Jan 17, 2024

You're right, that was a huge misunderstanding on my part. Everything indeed works correctly after specifying the tracking server. Thank you very much for your fast and clarifying answers.
