
Add benchmark #126

Closed
gmuraru opened this issue Apr 22, 2021 · 10 comments · Fixed by #128
Labels
feature Add a new functionality

Comments

@gmuraru (Member) commented Apr 22, 2021

Feature Description

Add a simple convolutional network to the tests and run a feedforward pass through it using VirtualMachines.

Is your feature request related to a problem?

  • Transparency: anyone can check how long a forward pass takes.
  • A clear view of how each change impacts the running time.
@gmuraru added the feature (Add a new functionality) label Apr 22, 2021
@danielorihuela (Contributor) commented Apr 24, 2021

I am not quite sure what you mean. I guess you want to add tests with a Conv2D in crypto_primitive_provider_test. Is that correct? @gmuraru

@danielorihuela (Contributor)

Something like this, maybe.

@gmuraru (Member, Author) commented Apr 24, 2021

Ah, we want to add a benchmark action (like this one) that would trigger a benchmark test, e.g. a feedforward pass through a ConvNet, and report the time on a graph.
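A minimal sketch of what such a benchmark script could look like, assuming the output feeds github-action-benchmark's `customSmallerIsBetter` tool (a JSON list of `name`/`unit`/`value` entries). The `forward_pass` stand-in and the `convnet-forward` name are placeholders, not the repository's actual model:

```python
import json
import time

def forward_pass():
    # Hypothetical stand-in for the real workload; in the repo this would be
    # a feedforward through a small ConvNet shared across VirtualMachines.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def benchmark(fn, repeats=5):
    """Time `fn` several times and return the best wall-clock duration."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings)

duration = benchmark(forward_pass)

# JSON shape expected by github-action-benchmark's `customSmallerIsBetter`
# tool: a list of {"name", "unit", "value"} objects.
results = [{"name": "convnet-forward", "unit": "s", "value": duration}]
with open("benchmarks.json", "w") as f:
    json.dump(results, f)
```

Taking the minimum over a few repeats reduces noise from CI machines; the action then plots `value` per commit.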

@danielorihuela (Contributor)

Okay, I started working on this; now I understand the idea. I have done a first round of learning and experimentation, and I have some questions.

  1. Do we have a GitHub Page where we can show the graphs? If not, should we create one, or store the information in a simple benchmarks.json in this same repository?
  2. Do you want to use that action (with advanced options like fail-threshold), or would you rather we build simple scripts that store the information in a JSON file and build a graph?

@gmuraru (Member, Author) commented Apr 25, 2021

Also pinging @LaRiffle - he might want to be kept in the loop.

  1. Yes, we want a page. Also, if we have the page, aren't the results also stored locally? (If not, we can also store them in a benchmark.json -- this should live in a benchmark folder.)
  2. For the moment, I think it would be OK just to get a feel for what we have: keep it only for displaying data and showing how the benchmarks evolve with each PR. At some point we might also add the failure threshold.

@danielorihuela (Contributor) commented Apr 25, 2021

Yeah, the results are always stored locally, sorry. I have never set up a GitHub Page and I thought it required a separate repository. Okay, then I guess someone will need to create a GitHub Page. Anyway, with that information and my fork, I think I am in good shape to start working. Thanks!

@gmuraru (Member, Author) commented Apr 25, 2021

I think it would be created automatically. @aanurraj, on syft_0.2.x did we need to do anything more (like creating the page manually)?

Should I assign you the issue?

@danielorihuela (Contributor)

Yeah, assign it to me, why not; I am already working on it. From what I am reading, I think the repository needs a GitHub Page (if you want to be able to visualize the results) and a secret token to be able to commit the new benchmarks: https://github.com/rhysd/github-action-benchmark#charts-on-github-pages-1. The data will be created in the gh-pages branch, inside the dev/bench folder (by default). We can start with that, and we can change it later if you want.
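A hedged sketch of the workflow this could end up as, wired to rhysd/github-action-benchmark. The file path `benchmarks/run_benchmarks.py` and the branch names are assumptions, not taken from the repository; `GITHUB_TOKEN` is provided automatically by Actions, so no extra secret is needed:

```yaml
name: Benchmarks

on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      # Assumed entry point that writes benchmarks.json in the
      # customSmallerIsBetter format.
      - run: python benchmarks/run_benchmarks.py
      - uses: rhysd/github-action-benchmark@v1
        with:
          tool: "customSmallerIsBetter"
          output-file-path: benchmarks.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Commits results to the gh-pages branch under dev/bench by default.
          auto-push: true
```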

@aanurraj (Contributor)

> Yeah, assign it to me, why not. I am already working on it. For what I am reading, I think the repository needs a GithubPage (in case you want to be able to visualize it) and a secret token to be able to do a commit with the new benchmarks. https://github.com/rhysd/github-action-benchmark#charts-on-github-pages-1. But the data will be created in the branch gh-pages, inside the dev/bench folder (by default). We can start with that, and if you want we change it.

You don’t need any secret token!

@danielorihuela (Contributor)

True, sorry. It seems to have been fixed in benchmark-action/github-action-benchmark#1.
