Commit

Update dataset name from htahir1 to zenml namespace in configuration files
strickvl committed Jan 27, 2025
1 parent aecfe12 commit 86f6276
Showing 3 changed files with 6 additions and 6 deletions.
6 changes: 3 additions & 3 deletions llm-finetuning/configs/finetune_aws.yaml
@@ -6,7 +6,7 @@ settings:
 model:
   name: "peft-lora-zencoder15B-personal-copilot"
   description: "Fine-tuned `starcoder15B-personal-copilot-A100-40GB-colab` for ZenML pipelines."
-  audience: "Data Scientists / ML Engineers"
+  audience: "Data Scientists / ML Engineers"
   use_cases: "Code Generation for ZenML MLOps pipelines."
   limitations: "There is no guarantee that this model will work for your use case. Please test it thoroughly before using it in production."
   trade_offs: "This model is optimized for ZenML pipelines. It is not optimized for other libraries."
@@ -23,13 +23,13 @@ steps:
     step_operator: sagemaker-eu
     settings:
       step_operator.sagemaker:
-        estimator_args:
+        estimator_args:
           instance_type: "ml.p4d.24xlarge"
 
     parameters:
       args:
         model_path: "bigcode/starcoder"
-        dataset_name: "htahir1/zenml-codegen-v1"
+        dataset_name: "zenml/zenml-codegen-v1"
         subset: "data"
         data_column: "content"
         split: "train"
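For context on how the `parameters.args` above are typically consumed, the sketch below loads the renamed dataset with the Hugging Face `datasets` library. It is a minimal illustration only: the helper name `load_training_data` and the mapping of `subset` to `data_dir` are assumptions, not the repository's actual fine-tuning step.

# Minimal sketch; helper name and the subset -> data_dir mapping are assumptions.
from datasets import load_dataset


def load_training_data(
    dataset_name: str = "zenml/zenml-codegen-v1",
    subset: str = "data",
    data_column: str = "content",
    split: str = "train",
):
    """Fetch the configured split from the Hugging Face Hub."""
    dataset = load_dataset(dataset_name, data_dir=subset, split=split)
    # Keep only the column holding the raw code used for fine-tuning.
    return dataset.select_columns([data_column])


if __name__ == "__main__":
    print(load_training_data())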
4 changes: 2 additions & 2 deletions llm-finetuning/configs/finetune_gcp.yaml
@@ -6,7 +6,7 @@ settings:
 model:
   name: "peft-lora-zencoder15B-personal-copilot"
   description: "Fine-tuned `starcoder15B-personal-copilot-A100-40GB-colab` for ZenML pipelines."
-  audience: "Data Scientists / ML Engineers"
+  audience: "Data Scientists / ML Engineers"
   use_cases: "Code Generation for ZenML MLOps pipelines."
   limitations: "There is no guarantee that this model will work for your use case. Please test it thoroughly before using it in production."
   trade_offs: "This model is optimized for ZenML pipelines. It is not optimized for other libraries."
@@ -29,7 +29,7 @@ steps:
     parameters:
       args:
         model_path: "bigcode/starcoder"
-        dataset_name: "htahir1/zenml-codegen-v1"
+        dataset_name: "zenml/zenml-codegen-v1"
         subset: "data"
         data_column: "content"
         split: "train"
2 changes: 1 addition & 1 deletion llm-finetuning/configs/generate_code_dataset.yaml
@@ -10,7 +10,7 @@ settings:
 
 # pipeline configuration
 parameters:
-  dataset_id: htahir1/zenml-codegen-v1
+  dataset_id: zenml/zenml-codegen-v1
 
 steps:
   mirror_repositories:
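The `dataset_id` here is the other side of the same rename: the generation pipeline publishes to the `zenml` namespace, and the fine-tuning configs above consume it. Below is a hypothetical sketch of such a publish step using the Hugging Face Hub API; the record layout and function name are assumptions, and the pipeline's real generation steps may differ.

# Hypothetical sketch; record layout and function name are illustrative only.
from datasets import Dataset


def push_code_dataset(records, dataset_id: str = "zenml/zenml-codegen-v1"):
    """Build a dataset from scraped code records and push it to the Hub."""
    # `records` is expected to look like [{"content": "<file contents>"}, ...].
    dataset = Dataset.from_list(records)
    # Requires a Hugging Face token with write access to the `zenml` org,
    # e.g. via `huggingface-cli login` or the HF_TOKEN environment variable.
    dataset.push_to_hub(dataset_id)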
