Expanded Test Suite Templated Input Models #251
Conversation
I just pushed two new tests - "test_1d_gradient" and "test_2d_gradient" - which each utilize the homogeneous infinite medium template problem,
This PR now runs the test suite twice for 1 and 4 OpenMP threads. The number of threads used by the
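One lightweight way to run a suite at several OpenMP thread counts is through the `OMP_NUM_THREADS` environment variable. A minimal sketch, assuming a `thread_env` helper and a `run_tests.py` entry point (both illustrative names; this PR may configure threads through the solver's own API instead):

```python
import os

def thread_env(num_threads, base_env=None):
    """Build a child-process environment pinning the OpenMP thread count
    via OMP_NUM_THREADS (one common mechanism; names here are
    illustrative, not necessarily how this PR wires it up)."""
    env = dict(base_env if base_env is not None else os.environ)
    env["OMP_NUM_THREADS"] = str(num_threads)
    return env

# Example: run the whole suite once per thread count, e.g. 1 and 4 threads:
# import subprocess
# for n in (1, 4):
#     subprocess.run(["python", "run_tests.py"], env=thread_env(n))
```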
Alright, I added a bunch of new tests to this and think that I'm about ready to call it quits with this PR (save for any changes related to reviewer comments). Here is a summary of what I've added:
After merging in #242, this PR failed for fixed source calculations. In order to fix this, I refactored the assignment of fixed sources by
else:
    cmfd = locals()['args'][0]

    cmfd.thisown = 0
`= 0` -> `= False`, to be consistent throughout.
Looks like this still needs to be addressed.
Oops, it is covered now.
#add_test('normal-icpc', cc='icpc', num_threads=1)
#add_test('normal-openmp-icpc', cc='icpc', num_threads=4)
#add_test('normal-clang', cc='clang', num_threads=1)
#add_test('normal-openmp-clang', cc='clang', num_threads=4)
Any reason these lines are commented out?
Yes, at the moment it is not easy to install Intel compilers on Travis, but this may change soon (see this discussion). The version of `clang` installed by `apt-get` on Ubuntu 12.04 is not new enough to support some of the C++11 features we need. However, if you want to run the test suite with `icpc` or `clang` on a personal machine or nsecluster, all you have to do is uncomment the appropriate lines here. These lines are kept simply to serve as an example.

I will say that the test suite passes on my machine for `clang`, but fails for some tests with `icpc` on nsecluster. It appears that a few fluxes for some tests vary in the 6th decimal place in sequential mode, and many more vary when run in parallel, indicating that roundoff error differs between the Intel and GNU compilers.
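One way to keep such compiler-dependent roundoff from spuriously failing tests is an element-wise comparison with a relative tolerance. A minimal sketch, assuming a `fluxes_match` helper and a 1e-5 tolerance (both illustrative, not the harness's actual check):

```python
import math

def fluxes_match(reference, observed, rel_tol=1e-5):
    """Element-wise comparison of two flux sequences with a relative
    tolerance, so roundoff in the ~6th decimal place does not fail the
    comparison (illustrative helper; names and tolerance are assumptions)."""
    if len(reference) != len(observed):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol)
               for a, b in zip(reference, observed))
```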
Could the flux results for
Ditto for
self.spacing = 0.1
self.num_azim = 4
- self.max_iters = 10
+ self.max_iters = 500
Any reasoning behind this number?
Nope - would you like a different value?
I don't have a strong preference on any number, so let's just keep it at 500.
Is there any rule for when results should be hashed (or possibly abridged)?
@samuelshaner we should probably determine a standard for when and when not to hash the output. Right now the only results that are hashed are for "test_cmfd_pwr_assembly" (although you seemed to indicate you thought all of the values were included). At this early stage for the test suite, I think we should shy away from hashing, since it can be instructive to run the suite on different platforms (e.g., with different compilers) and identify differences we never tracked before; this is more difficult to do when the results are hashed. What do you think about hashing output strings that are > 1000 lines long? By my estimate, such a long file would be roughly 15 KB to store, but would at least provide us an easy way to check results as we flesh out the suite.
This is now up-to-date.
Oops, I meant to say
Ok, I'm fine hashing the results for "test_adjoint_simple_lattice", as well as "test_forward_simple_lattice", which is the same length. I'm not sure about an option to abridge results; perhaps that can come in future PRs with bigger, more complex test cases.
Now hashing the results for those two test cases.
Just a heads up that I accidentally added another commit to this PR to fix a few issues I identified today. I intended to submit these changes as a separate PR, but since I already committed them, perhaps they can just be reviewed here. These changes include the following:
My opinion on hashing: we should only hash results we think should match exactly. For instance, if we are convinced that the eigenvalue should be an exact value, then we should hash it. But if we think pin powers could be off by 0.001% and still be accurate, then maybe we shouldn't hash the results. Another idea is to round the results to the degree to which we believe they should match, and then hash that rounded result. For instance, if we determine we want pin powers to match within 0.01%, we could round to the relevant decimal place and hash the result.
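The round-then-hash idea could look like this in Python. A sketch under stated assumptions: the `hash_rounded` name, the SHA-512 choice, and the string canonicalization are all illustrative, not anything this PR actually implements:

```python
import hashlib

def hash_rounded(values, decimals=4):
    """Round each result to the agreed precision, then hash the canonical
    string form, so values that agree to `decimals` decimal places hash
    identically (illustrative sketch of the round-then-hash idea)."""
    canonical = "\n".join(f"{v:.{decimals}f}" for v in values)
    return hashlib.sha512(canonical.encode()).hexdigest()
```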
@geogunow I think the debate for / against hashing also has to account for 1) whether the data to compare is larger than we'd like to track with Git, and 2) whether having the raw data for comparison makes it more transparent and hence easier to debug tests (note that these two opposing forces are driving the debate over hashing test results for OpenMC). At the moment, the
std::map< std::pair<Cell*, int>, FP_PRECISION >::iterator cell_iter;
std::map< std::pair<Material*, int>, FP_PRECISION >::iterator mat_iter;

/** Fixed sources assigned by Cell */
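For context, the maps above store one `FP_PRECISION` source value per (Cell or Material, energy group) pair. A rough Python analogy of that lookup structure, with entirely illustrative names (not OpenMOC's API):

```python
# An FP_PRECISION-style value keyed by a (cell identifier, energy group)
# pair, mirroring std::map<std::pair<Cell*, int>, FP_PRECISION>.
fixed_sources_by_cell = {}

def set_fixed_source(cell_id, group, source):
    """Assign a fixed source to one (cell, group) pair."""
    fixed_sources_by_cell[(cell_id, group)] = source

def get_fixed_source(cell_id, group):
    """Look up a fixed source, defaulting to 0.0 when none was set."""
    return fixed_sources_by_cell.get((cell_id, group), 0.0)
```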
Is there a reason for the double `*`?
Nope, just a typo!
Is there anything that I've left undone in this PR? I'd like to merge this in before adding further to the test suite.
I'm happy with this PR. The test suite is starting to look really nice!
@wbinventor Since you cited my comments on using Intel compilers within Travis, which works via https://github.com/nemequ/icc-travis, you might also be interested in the newly released https://github.com/nemequ/pgi-travis.
This PR builds upon #248 with new `InputSets` for the test suite - in particular, an infinite homogeneous medium with 2-group data and a 4x4 simple lattice problem (taken from our sample-inputs). This PR introduces 4 new tests.

In addition, the `TestingHarness` class' `_get_results(...)` method now makes it easy to add the number of FSRs, tracks, or segments to the database of results for regression tests. The "test_hom_inf_med", "test_pin_cell" and "test_simple_lattice" tests now all export FSR scalar fluxes to the regression results in addition to the eigenvalue.
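A hedged sketch of what such a regression-results record might look like, in the spirit of `_get_results(...)` as described above; the field names and layout here are hypothetical, not the harness's actual format:

```python
def get_results(keff, fluxes=None, num_fsrs=None, num_tracks=None,
                num_segments=None):
    """Assemble a regression-results record: always record the
    eigenvalue, and optionally the FSR scalar fluxes and the FSR /
    track / segment counts (hypothetical field names and layout)."""
    results = {"keff": keff}
    if fluxes is not None:
        results["fluxes"] = list(fluxes)
    if num_fsrs is not None:
        results["num_fsrs"] = num_fsrs
    if num_tracks is not None:
        results["num_tracks"] = num_tracks
    if num_segments is not None:
        results["num_segments"] = num_segments
    return results
```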