DAOS-6965 Test: Increased the SCM size as the recent layout change needs more space with the current dataset. #4928
Conversation
The test is failing because the SCM size per target is getting as low as 9MB with the dataset. This test has no requirement to use a low size, so increase it enough that the test does not fail with ENOSPACE. Updated the test to use PCMEM instead of tmpfs.
Test-tag-hw-large: nvme_object_multiple_pools
Signed-off-by: Samir Raval <samir.raval@intel.com>
LGTM. No errors found by checkpatch.
Test-tag-hw-large: nvme_object_multiple_pools
Signed-off-by: Samir Raval <samir.raval@intel.com>
LGTM. No errors found by checkpatch.
FYI: Errors found in lines not modified in the patch:
src/tests/ftest/nvme/nvme_object.py:147:
(pylint-super-with-arguments) Consider using Python 3 style super() without arguments
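For context, a minimal illustration of what that pylint check is asking for; the class names below are generic placeholders, not the actual code in nvme_object.py:

```python
class Base:
    def set_up(self):
        print("base set_up")


class Derived(Base):
    def set_up(self):
        # Python 2 style that pylint-super-with-arguments flags:
        #   super(Derived, self).set_up()
        # Equivalent Python 3 style, no arguments needed:
        super().set_up()
        print("derived set_up")
```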
Force-pushed from c0a8414 to 34e33db.
LGTM. No errors found by checkpatch.
Test stage Functional_Hardware_Medium completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-4928/5/execution/node/1355/log
LGTM. No errors found by checkpatch.
@@ -194,6 +195,7 @@ def test_nvme_object_multiple_pools(self):
         threads = []
         index = 0
         for size in self.pool_size[:-1]:
+            time.sleep(1)
Are we sure 1 second is enough? Why did we add this 1-second delay? I hope we will not land in some intermittent failure.
Added so the threads have some time in between before they begin.
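For reference, a minimal sketch of the staggered-start pattern this hunk introduces; the pool sizes and the create_pool helper below are placeholders, not the actual test code:

```python
import threading
import time


def create_pool(size):
    # Placeholder for the per-thread pool-creation work done by the test.
    print(f"creating pool with SCM size {size}")


# Hypothetical sizes; the real test reads self.pool_size from its yaml file.
pool_sizes = ["1G", "2G", "4G"]

threads = []
for size in pool_sizes[:-1]:
    # Stagger thread start-up slightly so the concurrent pool creations
    # do not all begin at exactly the same moment.
    time.sleep(1)
    thread = threading.Thread(target=create_pool, args=(size,))
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
```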
LGTM
LGTM.
I don't see the nvme test result in Jenkins in build #10. Was the test run?
Yes, it was run in build #4 and the result can be found at https://build.hpdd.intel.com/job/daos-stack/job/daos/job/PR-4928/4/artifact/Functional_Hardware_Large/nvme/nvme_object.py/job.log
DAOS-6965 Test: Increased the SCM size as the recent layout change needs more space with the current dataset. (#4928)
The test is failing because the SCM size per target is getting as low as 9MB with the dataset. This test has no requirement to use a low size, so increase it enough that the test does not fail with ENOSPACE. Updated the test to use PCMEM instead of tmpfs.
Quick-build: true
Skip-unit-tests: true
Skip-unit-test-memcheck: true
Test-tag: nvme_object_multiple_pools
Signed-off-by: Samir Raval <samir.raval@intel.com>
DAOS-6965 Test: Increased the SCM size as the recent layout change needs more space with the current dataset. (#4928) (#5228)
The test is failing because the SCM size per target is getting as low as 9MB with the dataset. This test has no requirement to use a low size, so increase it enough that the test does not fail with ENOSPACE. Updated the test to use PCMEM instead of tmpfs.
Signed-off-by: Samir Raval <samir.raval@intel.com>
The test is failing because the SCM size per target is getting as low as 9MB with the dataset.
This test has no requirement to use a low size, so increase it enough that the test does not fail with ENOSPACE.
Updated the test to use PCMEM instead of tmpfs.
Test-tag-hw-large: nvme_object_multiple_pools
Signed-off-by: Samir Raval samir.raval@intel.com
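As a rough illustration of the ENOSPACE reasoning above: the total size and target count below are assumed values chosen only to show how a small pool SCM allocation can shrink to roughly 9MB per target; they are not taken from the test.

```python
# Illustrative arithmetic only: a small total SCM allocation divided across
# many targets leaves only a few MB per target, which the dataset can
# exhaust and trigger ENOSPACE.
total_scm_bytes = 144 * 1024 * 1024   # assumed total pool SCM size (144MB)
targets = 16                          # assumed number of targets
per_target_mb = total_scm_bytes / targets / (1024 * 1024)
print(f"SCM per target: {per_target_mb:.0f} MB")  # -> 9 MB
```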