
Add benchmarks endpoints to Orion Web API #996

Merged

Conversation

notoraptor
Collaborator

Description

Hi @bouthilx ! I don't know where the best place is to report this work, so I made a PR here.

This PR adds new Orion Web API entries to get benchmarks.

The PR is based on your own branch ( https://github.com/bouthilx/orion/tree/feature/benchmark_webapi ), rebased onto the develop branch and extended with a few commits to fix issues.

Changes

  • Add new entries to the Orion Web API (see the usage sketch after this list):
    • /benchmarks: get the list of available benchmarks
    • /benchmarks/:name: get a given benchmark
  • Replace parameter task_num with repetitions
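
A minimal client-side sketch of how the new endpoints could be queried, assuming a Web API server running on localhost:8000; the benchmark name and response fields below are illustrative, not taken from this PR:

import requests

BASE_URL = "http://localhost:8000"

# GET /benchmarks: list the available benchmarks.
benchmarks = requests.get(f"{BASE_URL}/benchmarks").json()
print([benchmark["name"] for benchmark in benchmarks])

# GET /benchmarks/:name: fetch a single benchmark by name.
# "branin_baselines" is a made-up name used only for illustration.
single = requests.get(f"{BASE_URL}/benchmarks/branin_baselines").json()
print(single)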

Checklist

Tests

  • I added corresponding tests for bug fixes and new features. If possible, the tests fail without the changes
  • All new and existing tests are passing ($ tox -e py38; replace 38 with your Python version if necessary)

Documentation

  • I have updated the relevant documentation related to my changes

Quality

  • I have read the CONTRIBUTING doc
  • My commit messages follow this format
  • My code follows the style guidelines ($ tox -e lint)

@notoraptor notoraptor changed the title (WIP) Add benchmarks endpoints to Orion Web API Add benchmarks endpoints to Orion Web API Sep 21, 2022
@notoraptor notoraptor force-pushed the feature/benchmark_webapi_rebased branch from 2381e1a to cd0617c Compare September 26, 2022 18:39
@notoraptor
Collaborator Author

notoraptor commented Sep 26, 2022

Hi @bouthilx ! I added a supplementary commit about tests: notoraptor@cd0617c

There was a TODO about adding tests for a bad task, a bad assessment and a bad algorithm (no algorithm). But there were already tests for a bad task, a bad assessment and a bad algorithm (see the tests just after the TODO). So I just added a test for the no-algorithm case and removed the TODO comments.

There is one remaining TODO from your code here, but I don't know whether it should be resolved in this PR: https://github.com/notoraptor/orion/blob/feature/benchmark_webapi_rebased/src/orion/benchmark/__init__.py#L105

resp.body = json.dumps(response)


def _find_latest_versions(experiments):
Member

Looks like the functions below were copy-pasted from experiments_resource but are not used. Sorry, that was me. Could you please remove them? 😬

Collaborator Author

Done!

Member

Thanks! I think the other functions are not used either.

@notoraptor
Collaborator Author

TODO: Check coverage

@notoraptor notoraptor force-pushed the feature/benchmark_webapi_rebased branch from 11b8bad to f8e850f Compare November 24, 2022 19:24
bouthilx and others added 24 commits November 30, 2022 14:48
Why:

Building the benchmark object (setup_experiments in particular) can be
quite expensive (>60s). When we only want to execute one particular
analysis, the build does a lot of useless fetches on the storage. The
experiments should be built only when they are needed.
Relatedly, the analysis method should allow selecting a specific study.

How:

Only call setup_experiments on the study when we try to access
`study.get_experiments()`. The task_ids, which seemed useless, are removed
from `experiments_info` as well.
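
A minimal sketch of the lazy pattern described in this commit message, using simplified stand-in names (Study, build_experiment, experiments_info) rather than the actual Orion classes:

class Study:
    def __init__(self, experiments_info):
        self._experiments_info = experiments_info
        self._experiments = None  # Nothing fetched from storage yet.

    def setup_experiments(self):
        # Expensive step: fetch and build every experiment from storage.
        self._experiments = [
            build_experiment(info) for info in self._experiments_info
        ]

    def get_experiments(self):
        # Build the experiments only on first access.
        if self._experiments is None:
            self.setup_experiments()
        return self._experiments


def build_experiment(info):
    # Placeholder for the expensive storage fetch done in Orion.
    return {"name": info}
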
Why:

The interface is bloated and difficult to memorise. One of the reasons we
decided to go this way is that we previously needed to instantiate an
algo to access its attributes. We can now use algo_factory.get_class()
to get the class object only and access the class attributes. Thanks to
this we can now simply set deterministic as a class attribute on the
algorithms, which simplifies the interface of the benchmark.

Algorithms have deterministic set to False by default since they
generally have sources of variation.
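
A rough sketch of the idea above; only algo_factory.get_class() and the deterministic attribute come from the commit message, while the factory, registry and algorithm classes below are purely illustrative:

class BaseAlgo:
    # Most algorithms have sources of variation, so default to False.
    deterministic = False


class GridSearch(BaseAlgo):
    deterministic = True


class AlgoFactory:
    _registry = {"gridsearch": GridSearch}

    def get_class(self, name):
        # Return the class itself; no instantiation is needed to read
        # class attributes such as `deterministic`.
        return self._registry[name.lower()]


algo_factory = AlgoFactory()
print(algo_factory.get_class("gridsearch").deterministic)  # True
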
- Add gunicorn settings from YAML to Gunicorn app
- increase gunicorn timeout to 300 seconds
… in with-statement, to make sure lock.acquire and lock.release are called
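
A small illustration of the with-statement point in the last commit above, using a plain threading.Lock as a stand-in for Orion's actual lock object:

import threading

lock = threading.Lock()


def do_critical_work():
    print("holding the lock")


# The with-statement guarantees that lock.acquire() runs on entry and
# lock.release() runs on exit, even if the body raises an exception.
with lock:
    do_critical_work()
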
@notoraptor notoraptor force-pushed the feature/benchmark_webapi_rebased branch from 0f006ce to 71dae0a Compare November 30, 2022 19:53
@bouthilx bouthilx left a comment

Coverage is good, thanks a lot!!

@bouthilx bouthilx merged commit a994595 into Epistimio:develop Dec 1, 2022
@bouthilx bouthilx added the feature Introduces a new feature label Dec 16, 2022
@notoraptor notoraptor deleted the feature/benchmark_webapi_rebased branch February 2, 2023 21:10
@notoraptor notoraptor mentioned this pull request Mar 2, 2023