
Add support to detect whether the node has storage #674

Closed
reenakabra opened this issue Nov 25, 2021 · 10 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@reenakabra

Is there a possibility that NFD could set labels indicating the storage attached to a node and the type of that storage? I mean the disks other than the ones used by the operating system, which can be formatted and used by storage vendors. Information like the capacity and type of the disks would be helpful.

Let me know your thoughts on this.

reenakabra added the kind/feature label on Nov 25, 2021
@marquiz
Contributor

marquiz commented Nov 25, 2021

With #649 we added the capability to detect block devices. Currently, we only detect a few queue-related attributes (rotational, dax, nr_zones and zoned), but new attributes can easily be added if there is a clear use case for them.

Related to this, I don't see the point in adding any storage-specific labels that would be generated by default. Rather, the new block attributes are available for vendor/application-specific custom labels (#464, #553). This functionality is new and unreleased, and the documentation is scarce, but I'm happy to help you out with that.

@reenakabra
Author

Thanks marquiz. Where I was coming from is that in a Kubernetes cluster some nodes can be storage-only and some can be compute-only. It would be good if there were a way to discover this through node labels (or some other way, if you can suggest one). Storage vendors could then deploy their software on the nodes that can provision storage and skip the compute nodes that do not have storage.
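
For context, the consuming side would be plain Kubernetes scheduling. A rough sketch, assuming a label such as the hypothetical feature.node.kubernetes.io/non-os-storage discussed below has been created:

```yaml
# Sketch only: a vendor DaemonSet that lands on storage nodes and skips
# compute-only nodes. The label name is hypothetical, not an NFD default.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vendor-storage-agent
spec:
  selector:
    matchLabels:
      app: vendor-storage-agent
  template:
    metadata:
      labels:
        app: vendor-storage-agent
    spec:
      nodeSelector:
        feature.node.kubernetes.io/non-os-storage: "true"
      containers:
        - name: agent
          image: vendor/storage-agent:latest  # placeholder image
```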

@marquiz
Contributor

marquiz commented Dec 1, 2021

@reenakabra is block device detection what you're after? Can you elaborate on which specific attributes you would be looking for? Just as an example, we could easily add size and/or device/model.

Generally, I think your use case would be satisfied by deploying a vendor/application-specific labeling rule that examines certain block device attributes and creates labels accordingly.
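
As a rough sketch, assuming the rule syntax introduced with #464/#553 (unreleased at the time of writing, so the exact field names should be checked against the documentation), such a rule in the nfd-worker config could look roughly like this, with hypothetical rule and label names:

```yaml
# Sketch of a custom labeling rule; the rule and label names are
# illustrative. Matches nodes that have at least one non-rotational
# (rotational == "0") block device.
sources:
  custom:
    - name: "storage-node-rule"
      labels:
        "vendor.example.com/non-rotational-storage": "true"
      matchFeatures:
        - feature: storage.block
          matchExpressions:
            rotational: {op: In, value: ["0"]}
```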

/cc zvonkok mythi

@apurv15

apurv15 commented Dec 13, 2021

@marquiz: Set "non-OS-block-device=true" if there exist block devices which do not contain OS bits. NFD could look at the output of "lsblk", and if there are devices other than the ones with "/" or "/boot" mountpoints, set "non-OS-block-device=true".
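
As a rough shell sketch of that heuristic (illustrative only; it would need hardening for setups like multipath or network-attached root disks):

```sh
#!/bin/sh
# Sketch of the proposed check: a disk counts as non-OS storage if neither
# it nor anything stacked on it (partitions, LVM, etc.) is mounted at
# / or /boot. Loop (major 7) and CD-ROM (major 11) devices are excluded.
non_os=false
for disk in $(lsblk -dnro NAME -e 7,11); do
    if ! lsblk -nro MOUNTPOINT "/dev/$disk" | grep -Eq '^/(boot)?$'; then
        non_os=true
    fi
done
echo "non-OS-block-device=$non_os"
```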

@reenakabra
Author

reenakabra commented Jan 5, 2022

Hi @marquiz, is it possible to set this label on the node through NFD?

@marquiz
Contributor

marquiz commented Jan 5, 2022

You can write a custom hook or a side-car container (or pod) to do the detection. See the local feature source documentation.
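
A minimal sketch of such a hook (the local source turns each name=value line printed on stdout into a feature.node.kubernetes.io/ label; the label name and heuristic here are illustrative):

```sh
#!/bin/sh
# Sketch of an NFD local-source hook: drop an executable like this into the
# hook directory, or have a side-car write the same line into a feature
# file. Labels nodes that have a block device not mounted at / or /boot.
for disk in $(lsblk -dnro NAME -e 7,11); do
    if ! lsblk -nro MOUNTPOINT "/dev/$disk" | grep -Eq '^/(boot)?$'; then
        echo "non-os-storage=true"
        exit 0
    fi
done
echo "non-os-storage=false"
```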

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Apr 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
