Support for Hardware Accelerators #192
Comments
cc @aronchick for priority |
s/accelerators/device assignment please? /cc @derekwaynecarr |
regarding |
/subscribe |
@k82cn yes. Actually per sig meeting yesterday, any PCI device (most tend to be accelerators, but I'd personally prefer more generic wording). Note that Intel has "accelerators" inside their CPUs (called CPU extensions). All of these things should become candidates for scheduler matchmaking. |
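For illustration only (the resource name below is hypothetical, not anything defined by this issue), this is roughly how such a device would surface to the scheduler once a node advertises it as an extended resource:

```yaml
# Sketch, assuming a node already advertises a hypothetical
# "example.com/fpga" extended resource: the scheduler will only place this
# Pod on a node with at least one unallocated unit of that resource.
apiVersion: v1
kind: Pod
metadata:
  name: fpga-consumer            # hypothetical name
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        example.com/fpga: 1      # extended resources must be whole integers, set as limits
```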
related kubernetes/community#414 |
Is the scope limited to accelerators or some co-processors like TPM etc?
My understanding is that
1. There needs to be a way to discover, represent and consume accelerators as a resource in Kubernetes.
If the hardware discovery is a functionality that we are targeting, shouldn't the scope be broadened to all types of devices (including accelerators)? |
This issue is not meant to cover support for arbitrary third-party devices, which I believe warrants an issue by itself. Node Feature Discovery attempts to solve the device discovery problem to an extent.
|
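As a rough sketch of the discovery side mentioned above (the label key is illustrative; Node Feature Discovery publishes its own set of `feature.node.kubernetes.io/...` labels), discovered hardware features can steer scheduling through a plain nodeSelector:

```yaml
# Sketch: schedule onto nodes carrying a hardware-feature label that a
# discovery agent such as Node Feature Discovery could publish.
# The exact label key here is an example, not a guaranteed NFD label.
apiVersion: v1
kind: Pod
metadata:
  name: avx-workload             # hypothetical name
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```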
Can we use the term "hardware accelerators"? I was really confused by this issue at first. |
Good proposal! I think topology support for devices is a must. For example, NVIDIA GPUs on different PCI bridges cannot talk P2P (peer-to-peer). |
ping @calebamiles to review |
One of the critical pieces of this problem is hardware device plugins, which landed in v1.8 (#368). |
@vishh is it still alpha for 1.9? Also, can you update the feature template to follow the new format? https://github.com/kubernetes/features/blob/master/ISSUE_TEMPLATE.md |
It is still alpha for 1.9. |
@vishh 👋 Please indicate in the 1.9 feature tracking board |
@vishh Bump for docs ☝️ /cc @idvoretskyi |
Automatic merge from submit-queue (batch tested with PRs 56681, 57384). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Deprecate the alpha Accelerators feature gate. Encourage people to use DevicePlugins instead.

/kind cleanup
Related to kubernetes/enhancements#192 and kubernetes/enhancements#368

**Release note**:
```release-note
The alpha Accelerators feature gate is deprecated and will be removed in v1.11. Please use device plugins instead. They can be enabled using the DevicePlugins feature gate.
```
/sig node
/sig scheduling
/area hw-accelerators
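For readers following that migration, a minimal sketch of what the switch looks like, assuming a kubelet new enough to accept a configuration file (older kubelets use the equivalent `--feature-gates=DevicePlugins=true` flag):

```yaml
# Sketch: enable the DevicePlugins feature gate in the kubelet config file,
# replacing the deprecated alpha Accelerators gate.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  DevicePlugins: true
```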
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@vishh If so, can you please ensure the feature is up-to-date with the appropriate:
cc @idvoretskyi |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Description
Kubernetes is becoming popular for managing workloads that consume accelerators, such as TensorFlow training jobs. The agility that Kubernetes offers makes it easy to consume accelerators across a fleet of machines.
Kubernetes can provide an end-to-end workflow by separating the provisioning and configuration of accelerators from their consumption.
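As a hedged illustration of the consumption side (assuming a device plugin such as NVIDIA's is already deployed on the node and advertises the `nvidia.com/gpu` resource; names and image are placeholders):

```yaml
# Sketch: a Pod consuming one GPU via the resource name advertised by a
# device plugin. Device plugin resources are requested under limits;
# the scheduler and kubelet handle assignment of a specific device.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example              # placeholder name
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-container
    image: nvidia/cuda:9.0-base  # placeholder image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
```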
Progress Tracker
- cc @kubernetes/docs on docs PR
- cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
- Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
  - cc @kubernetes/docs on docs PR
  - cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
- cc @kubernetes/api
- cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
- cc @kubernetes/docs
- cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.

FEATURE_STATUS: IN_DEVELOPMENT
cc @kubernetes/sig-node-feature-requests @kubernetes/sig-scheduling-feature-requests