large file tests hang at run_diskless2.sh #2315
When I comment out running run_diskless2.sh, all tests pass.
The purpose of that test is to create a large in-memory file of size 500 megabytes.
I have ~57 GB of available memory.
Refresh my memory: will each processor try to allocate the memory, or will only one?
This is a sequential test, so it is only running on one processor...
Well, I will suppress this test if running parallel. Hope that will fix the problem.
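The suppression described in that comment might look roughly like the following in the test script. This is only a sketch; `TEST_PARALLEL` is a hypothetical stand-in for however the build actually exposes `ENABLE_PARALLEL` to the test scripts.

```shell
# Hypothetical guard at the top of run_diskless2.sh; TEST_PARALLEL is an
# assumed stand-in for the build's ENABLE_PARALLEL setting.
TEST_PARALLEL=yes   # would normally be substituted in by the build system
if test "x${TEST_PARALLEL}" = "xyes"; then
  echo "parallel build detected: skipping large in-memory file test"
else
  echo "running large in-memory file test"
fi
```

Exiting early like this sidesteps the hang without explaining it, which is why the thread below argues for keeping the issue open.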
## Include <getopt.h> in various utilities

re: Unidata#2303

As noted, some utilities are using getopt() without including getopt.h, so add as needed.

## Turn off run_diskless2.sh when ENABLE_PARALLEL is true

re: Unidata#2315

Ed notes that this test hangs when running parallel. The test is attempting to create a very large in-memory file, which is the proximate cause. But no idea what's the underlying cause.
Fixed by PR #2316?
I would suggest that while #2316 addresses the fact that the test suite hangs on this configuration, it would be good to leave this issue open, since no root cause has been identified. For all we know, this could be due to some nasty bug lurking somewhere. Unless I missed something, and we would actually expect a test running on a single processor to fail in a parallel configuration.
Agreed that this has an underlying issue that will need to be resolved.
Since this test is allocating a 500-megabyte block of virtual memory, my suspicion is that running in parallel is what causes the allocation to go wrong.
I don't think so. This is not a parallel I/O problem - it hangs in sequential mode too. @DennisHeimbigner does this test work for you on your machine?
Even if it was running in parallel, I would be completely and utterly shocked if that necessitated an actual copy rather than being handled by virtual addressing. And even if it did copy, with today's memory bandwidth, that should take less than a second.
I pass. I have no other ideas about what might be happening.
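The point about virtual addressing can be illustrated from the shell: reserving 500 MB without touching the bytes is essentially free. A sparse file is a loose analogy to an untouched virtual-memory allocation; the file path below is illustrative only.

```shell
# Create a 500 MB sparse file: the logical size is recorded immediately, but
# no data blocks are allocated until bytes are actually written -- analogous
# to an untouched 500 MB virtual-memory allocation costing almost nothing.
truncate -s 500M /tmp/sparse_demo
wc -c < /tmp/sparse_demo   # logical size: 524288000 bytes
du -k /tmp/sparse_demo     # blocks actually allocated: roughly 0
rm -f /tmp/sparse_demo
```

So a mere 500 MB reservation, on a machine with ~57 GB free, should not take measurable time, let alone hang.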
This should be resolved in PR #2319.
This is fixed. I will close this issue.
@DennisHeimbigner when trying to run large file tests (i.e. with --enable-large-file-tests) the test run_diskless2.sh hangs. It's been like this for more than an hour.
This is on a powerful multi-core machine with plenty of memory, so if the test can't run on this machine, it's too hard! ;-)
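For reference, the workflow that reproduces the hang follows the standard autotools sequence; only `--enable-large-file-tests` is taken from the report above, the rest is the usual build-and-check procedure. This fragment is not runnable outside a netcdf-c source tree.

```shell
# Reproduction sketch (standard autotools steps; --enable-large-file-tests is
# the flag named in the report).
./configure --enable-large-file-tests
make check    # reportedly hangs at run_diskless2.sh
```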