Inconsistent and insubstantial test cases #205
Closed
Max-Bladen added a commit that referenced this issue on May 25, 2022:
    "test: updated the test cases for the following functions: circosPlot, block.splsda, network, pca, perf.diablo, perf.mint.splsda, plotIndiv, plotLoadings, plotVar, predict. Using these to estimate the increase in coverage once this overhaul is complete"

Max-Bladen added a commit that referenced this issue on May 25, 2022:
    "fix: temporarily uncommented out test cases that were failing. Unknown source of error"

Max-Bladen added a commit that referenced this issue on Jun 23, 2022:
    "test: added test cases for `tune.block.splsda()`. Additionally, adjusted `MCV.block.splsda()` so that `n` and `repeat.measures` are calculated at the correct point"

Max-Bladen added a commit that referenced this issue on Jun 27, 2022:
    "test: added test cases for `tune.mint.splsda()`."

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "fix: adjusted functionality of `.minimal_train_test_subset` to allow for greater user control and consistency. adjusted test cases which utilised these accordingly"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "refactor: changed notation slightly. \"edge cases\" are now just referred to as \"warnings\" as that is all they were used for"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "tests: added new tests for `tune.splsda()`"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "tests: added `set.seed` to multilevel `tune.splsda()` test to ensure consistency"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "fix: temporarily removed a couple tests causing conflict in `test-plotLoadings.R`"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "fix: attempting to revert `test-plotLoadings.R` back to form seen in master to see if this resolves conflict"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "fix: commented out tune.splsda multilevel test and repaired error causing plotVar tests to fail. also renamed the `quiet` function to `.quiet` to bring in line with naming structure of helper functions"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "tests: implemented way to condense the ground truth files by combining any Testable.Components or Ground.Truths which are identical. implemented for `tune.splsda` as trial"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "tests: readded tests for `plotLoadings()`"

Max-Bladen added a commit that referenced this issue on Jun 28, 2022:
    "fix: temporarily removed the entire `test-plotLoadings.R` file to allow for rebasing of branch"

Max-Bladen added a commit that referenced this issue on Jun 29, 2022:
    "fix: returned `test-plotLoadings.R`"

Max-Bladen added a commit that referenced this issue on Jun 29, 2022:
    "tests: condensed existing `auroc` tests and expanded on its coverage."
Describe the issue
With the current set of test cases, there is a high degree of inconsistency in how numerical and graphical methods are assessed. I suspect there are errors and issues we are not aware of because of the way our tests currently operate.
Expected behavior
Ideally, we would have a folder in the repo containing .RData files which hold full dataframes/lists to use as the ground-truth values for our tests. Also, in a given test, we should evaluate all output components (or as many as possible). Incorporating every example into the tests will also be useful (note how many examples were failing when trying to complete PR #204). We can also explore ways to evaluate the accuracy of a plot, but I need to brainstorm that one.