
Testing of plan execution, diff generation, and model reporting #150

Open · 1 of 6 tasks
cereallarceny opened this issue Jul 7, 2020 · 0 comments
Assignees
Labels
  • Priority: 3 - Medium 😒 Should be fixed soon, but there may be other pressing matters that come first
  • Severity: 1 - Critical 🔥 Causes a failure of the complete software system, subsystem or a program within the system
  • Status: Available 👋 Available for assignment, who wants it?
  • Status: Blocked ✖️ Cannot work on this because of some other incomplete work
  • Type: Testing 🧪 Add testing or improving existing testing of a file, feature, or codebase
Milestone
0.2.0


Description

We want to test three core pieces of functionality in our codebase after migrating to PyTorch's iOS wrapper classes: plan execution, diff generation, and model reporting. These aren't tested at the moment because the classes we've written for plans, tensors, and the like are unmaintainable given the types of PySyft plans we'd like to execute. To get there, we'll be copying code from an unmerged PyTorch PR: pytorch/pytorch#25541. Once that task is done, we'll want to test this functionality once and for all!
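
Once that PR's code is in place, the coverage could start from a skeleton like the one below. This is a sketch only: the class and method names are placeholders (not the actual API, which will depend on the adopted wrapper code), and `XCTSkip` marks each test as blocked until the migration lands.

```swift
import XCTest

// Placeholder skeleton for the three test areas named above.
// All names here are illustrative assumptions, not the real API.
final class CoreFunctionalityTests: XCTestCase {

    func testPlanExecution() throws {
        // Load a serialized PySyft plan, execute it against sample tensors,
        // and assert the outputs match values precomputed in Python.
        throw XCTSkip("Blocked on the PyTorch iOS wrapper migration")
    }

    func testDiffGeneration() throws {
        // Run one training step, then verify the generated diff matches the
        // difference between the original and updated model parameters.
        throw XCTSkip("Blocked on the PyTorch iOS wrapper migration")
    }

    func testModelReporting() throws {
        // Serialize the diff and confirm the report payload round-trips
        // through encoding and decoding unchanged.
        throw XCTSkip("Blocked on the PyTorch iOS wrapper migration")
    }
}
```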

Type of Test

  • Unit test (e.g. checking a loop, method, or function is working as intended)
  • Integration test (e.g. checking if a certain group or set of functionality is working as intended)
  • Regression test (e.g. checking whether adding or removing a module of code allows other systems to continue functioning as intended)
  • Stress test (e.g. checking how well a system performs under various situations, including heavy usage)
  • Performance test (e.g. checking how efficient a system is at performing the intended task)
  • Other...

Expected Behavior

We should have complete, passing tests for plan execution, diff generation, and model reporting.
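
For diff generation specifically, one concrete assertion such a test could make is on the arithmetic itself. The sketch below assumes the diff is the elementwise difference between the original and updated parameters — an assumption about the convention, not a confirmed detail — and uses plain Swift arrays where the real tests would go through the PyTorch iOS wrappers:

```swift
import XCTest

final class DiffGenerationArithmeticTests: XCTestCase {

    // Hypothetical helper: diff = original - updated, elementwise.
    private func diff(original: [Float], updated: [Float]) -> [Float] {
        zip(original, updated).map { pair in pair.0 - pair.1 }
    }

    func testDiffIsElementwiseDifference() {
        let original: [Float] = [1.0, 2.0, 3.0]
        let updated: [Float] = [0.5, 1.5, 2.5]
        XCTAssertEqual(diff(original: original, updated: updated), [0.5, 0.5, 0.5])
    }
}
```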

Additional Context

pytorch/pytorch#25541
