[promptflow][bugfix] Fix breaking test due to LineRun.evaluations contract change (#2177)

# Description

`LineRun.evaluations` is now a dict rather than a list, so the corresponding
test is updated to index into its values; test coverage for this contract
still needs to be improved.
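
For context, a minimal, self-contained sketch of the access-pattern change. The `line_run` object below is a stand-in, not the real SDK entity, and the dict key used here is an assumption:

```python
# Hypothetical, self-contained sketch of the contract change; the real LineRun
# entity lives in the promptflow SDK, a stand-in is used here.
from types import SimpleNamespace

# New contract: LineRun.evaluations is a dict (assumed keyed by evaluation name).
line_run = SimpleNamespace(
    evaluations={
        "eval_classification_accuracy": SimpleNamespace(
            display_name="eval_classification_accuracy"
        )
    }
)

# Old contract (list): positional indexing worked.
#   evaluation = line_run.evaluations[0]

# New contract (dict): take a value, as the updated test now does.
evaluation = list(line_run.evaluations.values())[0]
assert evaluation.display_name == "eval_classification_accuracy"
```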

# All Promptflow Contribution checklist:
- [x] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other
significant changes.**
- [x] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated
review from promptflow team. Learn more: [suggested
workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [x] Title of the pull request is clear and informative.
- [x] There are a small number of commits, each of which has an
informative message. This means that previously merged commits do not
appear in the history of the PR. For more information on cleaning up the
commits in your PR, [see this
page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [x] Pull request includes test coverage for the included changes.
zhengfeiwang committed Mar 1, 2024
1 parent 0305e59 commit 0861876
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/promptflow/tests/sdk_cli_test/e2etests/test_experiment.py
@@ -123,7 +123,7 @@ def test_experiment_start(self):
assert len(line_runs) == 3
line_run = line_runs[0]
assert len(line_run.evaluations) == 1, "line run evaluation not exists!"
assert "eval_classification_accuracy" == line_run.evaluations[0].display_name
assert "eval_classification_accuracy" == list(line_run.evaluations.values())[0].display_name

# Test experiment restart
exp = client._experiments.start(exp.name)
@@ -255,7 +255,7 @@ def _assert_result(result):
assert len(line_runs) == 1
line_run = line_runs[0]
assert len(line_run.evaluations) == 1, "line run evaluation not exists!"
assert "eval_classification_accuracy" == line_run.evaluations[0].display_name
assert "eval_classification_accuracy" == list(line_run.evaluations.values())[0].display_name
# Test with default data and custom path
expected_output_path = Path(tempfile.gettempdir()) / ".promptflow/my_custom"
result = client.flows.test(target_flow_path, experiment=template_path, output_path=expected_output_path)
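
A side note on the updated assertions (an observation, not part of this commit): `list(line_run.evaluations.values())[0]` relies on the line run having exactly one evaluation. If the improved coverage mentioned above ever exercises multiple evaluations, matching by display name may be more robust. A hedged sketch, assuming the dict values expose `display_name`:

```python
# Hypothetical helper, not part of this commit: find an evaluation on a line run
# by display name instead of relying on dict ordering.
def find_evaluation(line_run, display_name):
    for evaluation in line_run.evaluations.values():
        if evaluation.display_name == display_name:
            return evaluation
    raise AssertionError(f"no evaluation named {display_name!r} on the line run")
```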