DAOS-16930 pool: Share map bulk resources #15763

Merged: 5 commits into release/2.6 from liw/enomem-workaround, Jan 24, 2025

Conversation

@liw (Contributor) commented Jan 22, 2025

Improve concurrent POOL_QUERY, POOL_CONNECT, and POOL_TGT_QUERY_MAP efficiency by giving them a chance to share the same pool map buffer and pool map buffer bulk handle.
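
For readers unfamiliar with the change, the idea can be sketched roughly as follows. This is a hedged illustration, not the actual patch: the struct, fields, and helpers below (map_bc_get, map_bc_put, map_bc_invalidate, mbc_*) are invented stand-ins for the real ds_pool_map_bc code, and a plain byte buffer stands in for the packed map plus its bulk handle. Concurrent handlers take references on one lazily created buffer instead of each packing the map and creating its own bulk handle:

```c
/*
 * Minimal sketch (not the actual DAOS code) of the sharing scheme: concurrent
 * POOL_QUERY / POOL_CONNECT / POOL_TGT_QUERY_MAP handlers share one packed
 * pool-map buffer through a reference count. All names are illustrative.
 */
#include <assert.h>
#include <stdlib.h>

struct map_bc {
	char   *mbc_buf;	/* stand-in for the packed map + bulk handle */
	size_t  mbc_len;
	int     mbc_ref;	/* held by the cache and by in-flight handlers */
};

/* Lazily create the shared buffer on first use; later callers reuse it. */
static int
map_bc_get(struct map_bc **cache, struct map_bc **out)
{
	if (*cache == NULL) {
		struct map_bc *bc = calloc(1, sizeof(*bc));

		if (bc == NULL)
			return -1;
		bc->mbc_len = 64;			/* pretend-packed map */
		bc->mbc_buf = calloc(1, bc->mbc_len);
		if (bc->mbc_buf == NULL) {
			free(bc);
			return -1;
		}
		bc->mbc_ref = 1;	/* reference owned by the cache itself */
		*cache = bc;
	}
	(*cache)->mbc_ref++;		/* reference owned by this handler */
	*out = *cache;
	return 0;
}

static void
map_bc_put(struct map_bc *bc)
{
	assert(bc->mbc_ref > 0);
	if (--bc->mbc_ref == 0) {	/* last user frees the shared buffer */
		free(bc->mbc_buf);
		free(bc);
	}
}

/* On a pool map change, drop only the cache's own reference; in-flight
 * handlers keep the old buffer alive until they call map_bc_put(). */
static void
map_bc_invalidate(struct map_bc **cache)
{
	if (*cache != NULL) {
		map_bc_put(*cache);
		*cache = NULL;
	}
}

int
main(void)
{
	struct map_bc *cache = NULL, *a, *b;

	map_bc_get(&cache, &a);		/* first handler creates the buffer */
	map_bc_get(&cache, &b);		/* second handler shares it */
	assert(a == b);
	map_bc_invalidate(&cache);	/* map changed: cache drops its ref */
	map_bc_put(a);
	map_bc_put(b);			/* last put frees the shared buffer */
	return 0;
}
```

Because the cache holds one reference of its own, invalidating on a pool map change only drops that reference, so handlers still using the old buffer finish safely before it is freed.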

Before requesting gatekeeper:

  • Two review approvals and any prior change requests have been resolved.
  • Testing is complete and all tests passed or there is a reason documented in the PR why it should be force landed and forced-landing tag is set.
  • Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that the user install them and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is the master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket?
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask submitter for a new summary.


Ticket title is 'Pool query fail on some pool with error "DER_NOMEM(-1009): Out of memory"'
Status is 'In Progress'
Labels: 'ALCF,post_acceptance_issues'
https://daosio.atlassian.net/browse/DAOS-16930

@daosbuild1 (Collaborator)

Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15763/1/execution/node/370/log

@daosbuild1 (Collaborator)

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15763/1/execution/node/301/log

@daosbuild1 (Collaborator)

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15763/1/execution/node/305/log

@daosbuild1 (Collaborator)

Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15763/1/execution/node/261/log

@daosbuild1 (Collaborator)

Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15763/1/execution/node/345/log

@liw force-pushed the liw/enomem-workaround branch from 8b8e8fb to 41f89fd on January 23, 2025 01:34
{
D_ASSERT(dss_get_module_info()->dmi_xs_id == 0);

/* We could cache this longer, actually. */
Contributor:
Yeah, I don't quite see why we have to invalidate the bulk cache here. I think we just need to invalidate it when changing the pool map, no? (That is the change you made in ds_pool_tgt_map_update().)

Contributor:
Also, I don't quite understand why map_bc_put() is called twice below when pool->sp_map_bc == map_bc; then the cache is invalidated even though the pool map has not changed, right?

Contributor (author):
Yes, there's a problem. Let me update...

@liuxuezhao (Contributor) commented Jan 23, 2025:

The logic looks correct, though. The current code apparently just doesn't want to keep the buffer around when there has been no active query for a long time.


D_ASSERT(dss_get_module_info()->dmi_xs_id == 0);

/* For accessing pool->sp_map, but not really necessary. */
Contributor:
Right, on XS0 with no yield the lock doesn't seem strictly necessary to take, but it looks fine.

@liuxuezhao previously approved these changes Jan 23, 2025

Improve concurrent POOL_QUERY, POOL_CONNECT, and POOL_TGT_QUERY_MAP
efficiency by giving them a chance to share the same pool map buffer and
pool map buffer bulk handle.

Signed-off-by: Li Wei <liwei@hpe.com>
Required-githooks: true
@liw force-pushed the liw/enomem-workaround branch from 43e0391 to 2491a37 on January 23, 2025 05:45
@daosbuild1 (Collaborator)

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15763/4/testReport/

Introduce a pool space query on the service leader to avoid space query
flooding. The pool space cache expiration time is 2 seconds by default;
it can be changed via DAOS_POOL_SPACE_CACHE_INTVL. If the expiration
time is set to zero, the space cache is disabled.

Signed-off-by: Niu Yawei <yawei.niu@intel.com>
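
For illustration only, here is a minimal sketch of the caching behavior this commit message describes. It is not the DAOS implementation: apart from the DAOS_POOL_SPACE_CACHE_INTVL variable name taken from the commit message, every name and the query_space_from_targets() stand-in are invented for the example.

```c
/*
 * Sketch of a leader-side space cache: answer space queries from a cached
 * value until it is older than an expiration interval. Default interval is
 * 2 seconds; DAOS_POOL_SPACE_CACHE_INTVL overrides it, and 0 disables the
 * cache. All helpers are hypothetical stand-ins.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct space_cache {
	uint64_t sc_free_bytes;	/* cached result of the expensive query */
	time_t   sc_stamp;	/* when the cached value was last refreshed */
	unsigned sc_intvl;	/* expiration interval in seconds; 0 = off */
};

/* Stand-in for the expensive per-target space query. */
static uint64_t
query_space_from_targets(void)
{
	return 1ULL << 40;
}

static unsigned
space_cache_intvl(void)
{
	const char *env = getenv("DAOS_POOL_SPACE_CACHE_INTVL");

	return env != NULL ? (unsigned)atoi(env) : 2;	/* default: 2s */
}

static uint64_t
pool_query_space(struct space_cache *sc)
{
	time_t now = time(NULL);

	/* Cache disabled or expired: refresh from the targets. */
	if (sc->sc_intvl == 0 || now - sc->sc_stamp >= sc->sc_intvl) {
		sc->sc_free_bytes = query_space_from_targets();
		sc->sc_stamp = now;
	}
	return sc->sc_free_bytes;
}

int
main(void)
{
	struct space_cache sc = { .sc_intvl = space_cache_intvl() };

	printf("free bytes: %llu\n",
	       (unsigned long long)pool_query_space(&sc));
	return 0;
}
```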
@NiuYawei mentioned this pull request Jan 23, 2025
@liuxuezhao previously approved these changes Jan 23, 2025

static void
map_bc_put(struct ds_pool_map_bc *map_bc)
{
Contributor:
[minor] D_ASSERT(map_bc->pic_ref > 0)

@wangshilong previously approved these changes Jan 23, 2025
@wangshilong requested a review from NiuYawei on January 23, 2025 09:13
if (pool->sp_map_bc == NULL) {
int rc;

rc = map_bc_create(ctx, pool->sp_map, &pool->sp_map_bc);
Contributor:
Is a read lock OK in this case?

Contributor:
Since the code only executes on XS0 and does not yield, it looks like the lock is not needed.

@daosbuild1 (Collaborator)

Test stage Functional Hardware Medium Verbs Provider completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15763/5/testReport/

Serialize pool space query when space cache is enabled.
Change CI global config to disable pool space cache since some tests may
need to verify instant pool space changes.

Signed-off-by: Niu Yawei <yawei.niu@hpe.com>
Co-authored-by: Xuezhao Liu <xuezhao.liu@hpe.com>
Co-authored-by: Liang Zhen <liang.zhen@hpe.com>
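
A hedged sketch of the serialization idea in this commit follows; the real code runs in Argobots ULTs on the pool service leader rather than pthreads, and all names below are illustrative. With the cache enabled, concurrent space queries are serialized so only one caller refreshes the cache at a time and the others reuse the fresh value.

```c
/* Illustrative only: serialize cache refresh so concurrent space queries
 * don't all issue the expensive cluster-wide query. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t        cached_free;
static time_t          cached_stamp;
static const unsigned  cache_intvl = 2;	/* seconds */

/* Stand-in for the expensive cluster-wide space query. */
static uint64_t
query_space_from_targets(void)
{
	sleep(1);
	return 1ULL << 40;
}

static uint64_t
pool_query_space(void)
{
	uint64_t val;

	/* Serialize: the first caller refreshes the cache; later callers
	 * see the fresh timestamp and return the cached value at once. */
	pthread_mutex_lock(&cache_lock);
	if (time(NULL) - cached_stamp >= cache_intvl) {
		cached_free = query_space_from_targets();
		cached_stamp = time(NULL);
	}
	val = cached_free;
	pthread_mutex_unlock(&cache_lock);
	return val;
}

static void *
worker(void *arg)
{
	(void)arg;
	printf("free bytes: %llu\n", (unsigned long long)pool_query_space());
	return NULL;
}

int
main(void)
{
	pthread_t t[4];
	int       i;

	for (i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return 0;
}
```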
@NiuYawei dismissed stale reviews from wangshilong and liuxuezhao via c0764a7 January 24, 2025 04:18
@NiuYawei marked this pull request as ready for review January 24, 2025 04:18
@NiuYawei requested review from a team as code owners January 24, 2025 04:18
@daltonbohning (Contributor) left a comment:

pylint needs to be resolved

Skip-build: true

Signed-off-by: Dalton Bohning <dalton.bohning@hpe.com>
@daltonbohning (Contributor)

pylint needs to be resolved

Previous CI run was good:
https://build.hpdd.intel.com/blue/organizations/jenkins/daos-stack%2Fdaos/detail/PR-15763/6/pipeline

I pushed a simple fix for the lint with most CI skipped

@mchaarawi merged commit 0f05b2f into release/2.6 on Jan 24, 2025
36 of 40 checks passed
@mchaarawi deleted the liw/enomem-workaround branch January 24, 2025 17:02