
Expanded Test Suite Templated Input Models #251

Merged: 26 commits, Jan 29, 2016

Conversation

wbinventor (Member)

This PR builds upon #248 with new InputSets for the test suite - in particular, an infinite homogeneous medium with 2-group data and a 4x4 simple lattice problem (taken from our sample-inputs). This PR introduces 4 new tests:

  • test_rings
  • test_hom_inf_medium
  • test_simple_lattice

In addition, the TestingHarness class's _get_results(...) method now makes it easy to add the number of FSRs, tracks, or segments to the database of results for regression tests. The "test_hom_inf_med," "test_pin_cell" and "test_simple_lattice" tests now all export FSR scalar fluxes to the regression results in addition to the eigenvalue.
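A harness method along these lines might look roughly like the following sketch. The class internals, attribute names, and the solver interface here are all illustrative assumptions, not OpenMOC's actual TestingHarness code:

```python
# Illustrative sketch only: attribute names and the solver interface
# are assumptions, not the actual OpenMOC TestingHarness API.
class TestingHarness:
    def __init__(self, solver):
        self.solver = solver
        self.keff = True          # always record the eigenvalue
        self.num_fsrs = False     # opt-in extras for regression results
        self.num_tracks = False
        self.num_segments = False
        self.fluxes = False

    def _get_results(self):
        """Assemble the string written to results_true.dat."""
        lines = []
        if self.keff:
            lines.append('keff: {0:.6f}'.format(self.solver.keff))
        if self.num_fsrs:
            lines.append('# FSRs: {0}'.format(self.solver.num_fsrs))
        if self.num_tracks:
            lines.append('# tracks: {0}'.format(self.solver.num_tracks))
        if self.num_segments:
            lines.append('# segments: {0}'.format(self.solver.num_segments))
        if self.fluxes:
            lines += ['flux: {0:.6f}'.format(f) for f in self.solver.fluxes]
        return '\n'.join(lines)
```

With a toggle-per-quantity design like this, each test only opts in to the quantities it wants tracked, so adding FSR/track/segment counts to one test does not churn the regression files of every other test.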

@wbinventor (Member, Author)

I just pushed two new tests - "test_1d_gradient" and "test_2d_gradient" - which each utilize the homogeneous infinite medium template problem.

@wbinventor (Member, Author)

This PR now runs the test suite twice, with 1 and 4 OpenMP threads. The number of threads used by the TrackGenerator is always 1 in order to ensure reproducible FSRs, but the number of threads used by the Solver is varied. Also, this PR now includes a README for the test suite (adapted from OpenMC's).

@wbinventor (Member, Author)

The latest commit adds "clang" as a secondary compiler to test along with "gcc." The Intel compilers are not yet supported on Travis, but it appears that may change based on the discussions here and here.

@wbinventor (Member, Author)

Alright, I added a bunch of new tests to this and think that I'm about ready to call it quits with this PR (save for any changes related to reviewer comments). Here is a summary of what I've added:

  • Fixed source tests for the Solver::computeFlux(...) and Solver::computeSource(...) methods
  • Tests for both the FORWARD and ADJOINT calculation modes
  • A templated PwrAssemblyInput based on the 17x17 assembly in "sample-input/benchmarks"
  • A test for CMFD with the PwrAssemblyInput
  • Updated "sample-input/ipython-notebook" to reflect changes from recent PRs
  • Added Geometry::setCmfd(...) to "thisown.i" so that Cmfd will not be garbage collected by Python
  • Removed "clang" tests from Travis, but a user can easily toggle between compilers if they wish

@wbinventor (Member, Author)

After merging in #242 this PR failed for fixed source calculations. In order to fix this, I refactored the assignment of fixed sources by Cell and Material such that they are assigned at simulation runtime. In other words, a user may define a fixed source by Material or Cell with Solver::setFixedSourceByMaterial(...) or Solver::setFixedSourceByCell(...) at any point (even before tracks have been generated and FSRs assigned). Only when Solver::computeSource(...), Solver::computeFlux(...) or Solver::computeEigenvalue(...) is called are the sources by Material and/or Cell assigned to the appropriate FSRs in the array of fixed sources indexed by FSR and energy group.
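The deferred-assignment pattern described above can be sketched in a few lines of Python. Everything below (the dictionaries, the snake_case method names, the FSR-to-Cell/Material maps) is an illustrative assumption; OpenMOC's actual Solver is C++:

```python
# Sketch of deferring fixed-source assignment until the solve begins.
# Names and internals are illustrative, not OpenMOC's actual code.
class Solver:
    def __init__(self):
        self._fix_src_by_cell = {}      # (cell_id, group) -> source
        self._fix_src_by_material = {}  # (material_id, group) -> source
        self._fixed_sources = {}        # (fsr_id, group) -> source

    def set_fixed_source_by_cell(self, cell_id, group, source):
        # May be called at any point, even before tracks/FSRs exist.
        self._fix_src_by_cell[(cell_id, group)] = source

    def set_fixed_source_by_material(self, material_id, group, source):
        self._fix_src_by_material[(material_id, group)] = source

    def _assign_fixed_sources(self, fsr_to_cell, fsr_to_material):
        # Called at the start of computeFlux/computeSource/
        # computeEigenvalue, once the FSR maps are finally known.
        for (cell, g), src in self._fix_src_by_cell.items():
            for fsr, c in fsr_to_cell.items():
                if c == cell:
                    self._fixed_sources[(fsr, g)] = src
        for (mat, g), src in self._fix_src_by_material.items():
            for fsr, m in fsr_to_material.items():
                if m == mat:
                    self._fixed_sources[(fsr, g)] = src
```

The key design point is that the user-facing setters only record intent; the Cell/Material-to-FSR resolution happens lazily at solve time, so call order no longer matters.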

else:
    cmfd = locals()['args'][0]

cmfd.thisown = 0
Contributor:

= 0 -> = False to be consistent throughout

Contributor:

Looks like this still needs to be addressed.

Member Author:

Oops, it is covered now.

#add_test('normal-icpc', cc='icpc', num_threads=1)
#add_test('normal-openmp-icpc', cc='icpc', num_threads=4)
#add_test('normal-clang', cc='clang', num_threads=1)
#add_test('normal-openmp-clang', cc='clang', num_threads=4)
Contributor:

Any reason these lines are commented out?

Member Author:

Yes; at the moment it is not easy to install Intel compilers on Travis, but this may change soon (see this discussion). The version of clang installed by apt-get on Ubuntu 12.04 is not new enough to support some of the C++11 features we need. However, if you want to run the test suite with icpc or clang on a personal machine or nsecluster, all you have to do is uncomment the appropriate lines here. These lines are left in simply as an example.

I will say that the test suite passes on my machine with clang, but fails for some tests with icpc on nsecluster. It appears that a few fluxes for some tests vary in the 6th decimal place in sequential mode, and many more vary when run in parallel, indicating that roundoff behavior differs between the Intel and GNU compilers.

@samuelshaner (Contributor)

Could the flux results for tests/test_cmfd_pwr_assembly/results_true.dat just be hashed? There are quite a few values.

@samuelshaner (Contributor)

ditto for tests/test_compute_flux/results_true.dat

self.spacing = 0.1
self.num_azim = 4
self.max_iters = 10
self.max_iters = 500
Contributor:

Any reasoning behind this number?

Member Author:

Nope - would you like a different value?

Contributor:

I don't have a strong preference for any particular number, so let's just keep it at 500.

@samuelshaner (Contributor)

Is there any rule for when results should be hashed (or possibly abridged)?

@wbinventor (Member, Author)

@samuelshaner we should probably determine a standard for when and when not to hash the output. Right now the only results that are hashed are for "test_cmfd_pwr_assembly" (although you seemed to indicate you thought all of the values were included). At this early stage for the test suite I think we should shy away from hashing since it can be instructive to run the suite on different platforms (e.g., different compilers) and identify differences we never tracked before. This is more difficult to do when the results are hashed.

What do you think about hashing output strings that are > 1000 lines long? According to my estimates, such a file would be roughly 15 KB to store, but it would at least provide us an easy way to check results as we flesh out the suite.
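The size cutoff being proposed could be sketched as follows. The function name, the 1000-line threshold as a parameter, and the choice of SHA-512 are assumptions for illustration:

```python
import hashlib

# Sketch: hash the results string only when it exceeds a line-count
# threshold, otherwise store the raw values so diffs stay readable.
# Function name and the SHA-512 choice are illustrative assumptions.
def finalize_results(results, max_lines=1000):
    if results.count('\n') + 1 > max_lines:
        return hashlib.sha512(results.encode()).hexdigest()
    return results
```

Small result files keep their raw, human-diffable values, while very long flux dumps collapse to a fixed 128-character digest.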

@wbinventor (Member, Author)

This is now up-to-date.

@samuelshaner (Contributor)

Oops, I meant to say tests/test_adjoint_simple_lattice/results_true.dat, which is 3,587 lines long. I think hashing or abridging results that are > 1000 lines is a good idea. What do you think about abridging instead of hashing? Again, hashing doesn't give any information about how different the numbers are.

@wbinventor (Member, Author)

OK, I'm fine with hashing the results for "test_adjoint_simple_lattice", as well as "test_forward_simple_lattice", which is the same length. I'm not sure about an option to abridge results; perhaps that can come in future PRs with bigger, more complex test cases.

@wbinventor (Member, Author)

Now hashing the results for those two test cases.

@wbinventor (Member, Author)

Just a heads up that I accidentally added another commit to this PR to fix a few issues I identified today. I intended to submit these changes as a separate PR, but since I already committed them, perhaps they can just be reviewed here. These changes include the following:

  • Removed Cell fills from the Geometry::toString() method to allow track files to be imported for geometries with different MGXS libraries / Materials
  • Made neighbor cell optimizations for ray tracing optional. I've found the algorithm to build neighbor cells to be extremely slow for geometries processed with OpenCG's region differentiation algorithm and which may have O(10,000) Surfaces. Improving the performance for this will have to wait for another day.

@geogunow (Contributor)

My opinion on hashing: we should only hash results we think should match exactly. For instance, if we are convinced that the eigenvalue should be an exact value then we should hash it. But if we think pin powers could be off by 0.001% and still be accurate then maybe we shouldn't hash the results. Another idea is to round the results to the degree to which we believe they should match and then hash that result. For instance, if we determine we want pin powers to match within 0.01% then we could round to the relevant decimal then hash the results.
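The round-then-hash idea above could be sketched like this (the function name and the default tolerance are illustrative assumptions):

```python
import hashlib

# Sketch of round-then-hash: truncate each value to the tolerance we
# actually care about (e.g. 4 decimals for pin powers), then hash, so
# roundoff noise below the tolerance cannot break the regression test.
# Function name and default decimals are illustrative assumptions.
def round_and_hash(values, decimals=4):
    rounded = ['{0:.{1}f}'.format(v, decimals) for v in values]
    return hashlib.sha512('\n'.join(rounded).encode()).hexdigest()
```

Two result sets that agree to within the chosen tolerance then produce identical hashes, while any difference above the tolerance still changes the digest.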

@wbinventor (Member, Author)

@geogunow I think the debate for / against hashing also has to account for 1) whether the data to compare is larger than we'd like to track with Git and 2) whether having the raw data for comparison makes it more transparent and hence easier to debug tests (note that these two opposing forces are driving the debate over hashing test results for OpenMC). At the moment, the TestingHarness truncates keff and the FSR fluxes to 6 decimal places. This is not to say that this is the best value, but it provides us with a starting point for our testing infrastructure. The pin powers / FSR fission rates aren't yet an option for the testing results but can be added as we build upon this.

std::map< std::pair<Cell*, int>, FP_PRECISION >::iterator cell_iter;
std::map< std::pair<Material*, int>, FP_PRECISION >::iterator mat_iter;

/** Fixed sources assigned by Cell */
Contributor:

Is there a reason for the double *?

Member Author:

Nope, just a typo!

@wbinventor (Member, Author)

Is there anything that I've left undone in this PR? I'd like to merge this in before adding further to the test suite.

@samuelshaner (Contributor)

I'm happy with this PR. The test suite is starting to look really nice!

samuelshaner added a commit that referenced this pull request Jan 29, 2016
Expanded Test Suite Templated Input Models
@samuelshaner merged commit 3f35937 into mit-crpg:develop on Jan 29, 2016
@wbinventor deleted the input-sets branch on January 29, 2016 at 21:51
@jeffhammond

@wbinventor Since you cited my comments on using Intel compilers within Travis, which works via https://github.com/nemequ/icc-travis, you might also be interested in the newly released https://github.com/nemequ/pgi-travis .
