
DAOS-6965 test: Increased the SCM size as a recent layout change needs more space with the current dataset. #4928

Merged
merged 5 commits from samirrav/DAOS-6965 into master on Mar 29, 2021

Conversation

ravalsam
Contributor

@ravalsam ravalsam commented Mar 9, 2021

The test is failing because the SCM target size drops as low as 9 MB with the current dataset.
This test has no requirement to use a small size, so increase it enough that the test does not fail with ENOSPACE.
Updated the test to use PMEM instead of tmpfs.

Test-tag-hw-large: nvme_object_multiple_pools

Signed-off-by: Samir Raval samir.raval@intel.com
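To make the sizing concern concrete, here is a minimal, self-contained sketch of the arithmetic described above; the total SCM budget, pool count, target count, and minimum threshold are all illustrative assumptions, not values taken from the test or its YAML:

```python
# Hypothetical back-of-the-envelope check: when a fixed SCM budget is split
# across several pools and engine targets, the per-target share can drop to
# a few MB, which is where ENOSPACE shows up once metadata needs more room.
SCM_TOTAL_BYTES = 6 * 1024**3        # assumed total SCM available (6 GiB)
NUM_POOLS = 4                        # assumed number of pools the test creates
TARGETS_PER_ENGINE = 8               # assumed target count per engine
MIN_PER_TARGET_BYTES = 16 * 1024**2  # assumed safe per-target minimum (16 MiB)

per_pool = SCM_TOTAL_BYTES // NUM_POOLS
per_target = per_pool // TARGETS_PER_ENGINE
print(f"per-pool SCM: {per_pool / 1024**2:.0f} MiB, "
      f"per-target SCM: {per_target / 1024**2:.0f} MiB")

if per_target < MIN_PER_TARGET_BYTES:
    raise SystemExit("SCM size too small per target; expect ENOSPACE")
```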

…ore space.

The test is failing because the SCM target size drops as low as 9 MB with the dataset.
This test has no requirement to use a small size, so increase it enough that the test
should not fail with ENOSPACE.
Updated the test to use PMEM instead of tmpfs.

Test-tag-hw-large: nvme_object_multiple_pools

Signed-off-by: Samir Raval <samir.raval@intel.com>
Collaborator

@daosbuild1 daosbuild1 left a comment


LGTM. No errors found by checkpatch.

Test-tag-hw-large: nvme_object_multiple_pools

Signed-off-by: Samir Raval <samir.raval@intel.com>
Collaborator

@daosbuild1 daosbuild1 left a comment


LGTM. No errors found by checkpatch.

FYI: Errors found in lines not modified in the patch:

src/tests/ftest/nvme/nvme_object.py:147:
(pylint-super-with-arguments) Consider using Python 3 style super() without arguments
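For reference, this warning flags the Python 2 style super call; a minimal sketch of the suggested Python 3 form (class names here are illustrative, not copied from nvme_object.py):

```python
import unittest


class BaseCase(unittest.TestCase):
    """Illustrative parent class standing in for the test's real base class."""

    def setUp(self):
        self.resources = []


class NvmeObject(BaseCase):
    """Illustrative subclass showing the pylint-suggested super() style."""

    def setUp(self):
        # Flagged Python 2 style: super(NvmeObject, self).setUp()
        # Preferred Python 3 style, without arguments:
        super().setUp()
```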

@ravalsam ravalsam force-pushed the samirrav/DAOS-6965 branch from c0a8414 to 34e33db on March 19, 2021 19:19
Collaborator

@daosbuild1 daosbuild1 left a comment


LGTM. No errors found by checkpatch.

@daosbuild1
Collaborator

Test stage Functional_Hardware_Medium completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-4928/5/execution/node/1355/log

Collaborator

@daosbuild1 daosbuild1 left a comment


LGTM. No errors found by checkpatch.

Collaborator

@daosbuild1 daosbuild1 left a comment


LGTM. No errors found by checkpatch.

Collaborator

@daosbuild1 daosbuild1 left a comment


LGTM. No errors found by checkpatch.


@dinghwah dinghwah requested review from dinghwah and removed request for dinghwah March 24, 2021 16:08
@@ -194,6 +195,7 @@ def test_nvme_object_multiple_pools(self):
         threads = []
         index = 0
         for size in self.pool_size[:-1]:
+            time.sleep(1)
Contributor


Are we sure 1 second is enough? Why did we add this 1-second delay? I hope we will not land in some intermittent failure.

Contributor Author


Added so the threads have some time in between before each one begins.
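For context, a minimal self-contained sketch of the staggered thread start pattern the patch adds; the worker function and pool sizes are hypothetical stand-ins, not the test's actual code:

```python
import threading
import time


def create_pool(index, size):
    """Hypothetical worker standing in for the test's pool-creation thread."""
    print(f"thread {index}: creating a pool of SCM size {size}")


pool_sizes = ["1G", "2G", "4G"]  # illustrative values only
threads = []
for index, size in enumerate(pool_sizes):
    time.sleep(1)  # stagger thread start-up by one second, as in the patch
    thread = threading.Thread(target=create_pool, args=(index, size))
    thread.start()
    threads.append(thread)

for thread in threads:
    thread.join()
```

A fixed one-second stagger only reduces the chance of the threads starting at the same moment; it does not guarantee ordering, which is the gist of the reviewer's concern above.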

Contributor

@saurabhtandan saurabhtandan left a comment


LGTM

Contributor

@dinghwah dinghwah left a comment


LGTM.

@ravalsam ravalsam requested a review from rpadma2 March 24, 2021 16:26
@ravalsam ravalsam requested a review from a team March 24, 2021 16:55
Contributor

@sylviachanoiyee sylviachanoiyee left a comment


I don't see the nvme test result in Jenkins in build #10. Was the test run?

@ravalsam
Contributor Author

I don't see the nvme test result in Jenkins in build #10. Was the test run?

Yes, it was run in build #4 and the result can be found at https://build.hpdd.intel.com/job/daos-stack/job/daos/job/PR-4928/4/artifact/Functional_Hardware_Large/nvme/nvme_object.py/job.log
I rebased a couple of times because of CI issues, so it won't be present in the latest run.

@sylviachanoiyee sylviachanoiyee merged commit 236eb63 into master Mar 29, 2021
@sylviachanoiyee sylviachanoiyee deleted the samirrav/DAOS-6965 branch March 29, 2021 18:47
ravalsam added a commit that referenced this pull request Mar 29, 2021
…ore space with current dataset. (#4928)

The test is failing because the SCM target size drops as low as 9 MB with the dataset.
This test has no requirement to use a small size, so increase it enough that the test
should not fail with ENOSPACE.
Updated the test to use PMEM instead of tmpfs.

Quick-build: true
Skip-unit-tests: true
Skip-unit-test-memcheck: true
Test-tag: nvme_object_multiple_pools

Signed-off-by: Samir Raval <samir.raval@intel.com>
sylviachanoiyee pushed a commit that referenced this pull request Mar 31, 2021
…ore space with current dataset. (#4928) (#5228)

The test is failing because the SCM target size drops as low as 9 MB with the dataset.
This test has no requirement to use a small size, so increase it enough that the test
should not fail with ENOSPACE.
Updated the test to use PMEM instead of tmpfs.

Signed-off-by: Samir Raval <samir.raval@intel.com>
@ashleypittman ashleypittman mentioned this pull request Apr 28, 2021
@ashleypittman ashleypittman mentioned this pull request May 20, 2021