Tests are too slow #30
Reducing the sizes of the RBM and input data for those tests doesn't help enough? Also, this is probably related to running the tests with coverage, which is particularly slow. Ideally, we should be able to run the fast tests with coverage, and the benchmarking and acceptance/performance tests without it. I've opened an issue here to make this easier to do, but in the meantime we could probably just use an environment variable to specify groups of tests to run. I'll include some test refactoring with a benchmarking PR, which should at least partly address this.
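A minimal sketch of the environment-variable approach, in Python for illustration: the variable name `RBM_TEST_GROUP` and the test names are hypothetical, not taken from the repository.

```python
import os
import unittest

# Hypothetical group names; RBM_TEST_GROUP is an assumed variable name.
FAST, BENCH = "fast", "benchmark"
GROUP = os.environ.get("RBM_TEST_GROUP", FAST)

def group(name):
    """Skip the decorated test unless its group is currently selected."""
    return unittest.skipUnless(name == GROUP, f"group {name!r} not selected")

class TestRBM(unittest.TestCase):
    @group(FAST)
    def test_gibbs_step(self):
        # placeholder for a quick correctness test, run with coverage
        self.assertTrue(True)

    @group(BENCH)
    def test_fit_benchmark(self):
        # placeholder for a slow benchmark, run without coverage
        self.assertTrue(True)
```

Running with `RBM_TEST_GROUP=fast` (or unset) executes only the quick tests; a separate CI job can set `RBM_TEST_GROUP=benchmark` without coverage instrumentation.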
Tests are down to around 30-40 minutes with coverage and benchmarking here.
I think we can also decompose the tests into parts. Though we need to test every kind of visible and hidden unit and every option, we don't need to test every combination of units and options. Essentially, kinds of units affect only the sampling functions, while options in most cases control specific functions, independent of sampling and of each other. There are other independence relations we can exploit to drastically reduce test time. (Ironically, exactly this kind of decomposition into independent parts is what distinguishes restricted Boltzmann machines from fully-connected Boltzmann machines :))
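To make the savings concrete, here is a hedged sketch (in Python, with purely illustrative type and option names) comparing the full Cartesian product against testing each independent axis separately with a single default configuration:

```python
from itertools import product

# Hypothetical inventories; the names are illustrative, not from the library.
element_types = ["Float32", "Float64"]
unit_types = ["Bernoulli", "Gaussian"]        # used for both visible and hidden
input_kinds = ["dense", "sparse"]
options = ["weight_decay", "momentum", "dropout"]

# Brute force: every element type x visible x hidden x input x option.
full = list(product(element_types, unit_types, unit_types, input_kinds, options))

# Decomposed: units only affect sampling, options only affect specific
# functions, so exercise each axis against one default configuration.
default = ("Float64", "Bernoulli", "Bernoulli", "dense")
sampling_cases = list(product(element_types, unit_types, unit_types, input_kinds))
option_cases = [default + (opt,) for opt in options]

print(len(full))                              # 48 combinations
print(len(sampling_cases) + len(option_cases))  # 19 cases
```

Even with these small inventories the decomposition cuts 48 test flows down to 19, and the gap grows multiplicatively as more unit types and options are added.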
Currently the tests run for more than 2 hours on Travis, which is unreasonably long. The main cause is a brute-force approach in which we test every supported element type with every possible visible and hidden unit type, for both dense and sparse input, plus different sets of options. The principal question is how to exclude most of these flows while keeping coverage high. Any suggestions are welcome.