
[LFX'24] Add Sedna Joint inference HPA Proposal #457

Merged · 1 commit · Nov 28, 2024

Conversation

@ajie65 (Contributor) commented on Nov 14, 2024:

What type of PR is this?
/kind documentation

What this PR does / why we need it:
The rapid advancement of AI has led to the widespread application of deep learning models across various fields. However, the resource demands for model inference tasks can fluctuate significantly, especially during peak periods, posing a challenge to the system's computing capabilities. To address this varying load demand, we propose an elastic inference solution leveraging KubeEdge and Horizontal Pod Autoscaling (HPA) to enable dynamic scaling of inference tasks.

KubeEdge is an edge computing framework that extends Kubernetes' capabilities to edge devices, allowing applications to be deployed and managed on edge nodes. By utilizing KubeEdge, we can distribute inference tasks across different edge devices and cloud resources, achieving efficient resource utilization and task processing.

The core of collaborative inference lies in coordinating computing resources across various devices, allowing inference tasks to be dynamically allocated based on the current load. When the system detects an increase in load, the HPA mechanism automatically scales out the number of edge nodes or enhances resource configurations to meet the inference task demands. Conversely, when the load decreases, resource allocation is scaled down to reduce operational costs. This approach ensures optimal resource allocation while maintaining inference performance.
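The scaling behavior described above can be sketched as a standard Kubernetes `autoscaling/v2` HorizontalPodAutoscaler manifest. This is an illustrative fragment, not taken from the proposal itself: the Deployment name `helmet-detection-inference-cloud`, the namespace, and the replica/utilization thresholds are all hypothetical placeholders for whatever the joint inference cloud worker is actually named.

```yaml
# Hypothetical sketch: an HPA targeting the cloud-side worker of a Sedna
# joint inference service. All names and thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: helmet-detection-cloud-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helmet-detection-inference-cloud  # assumed cloud-worker Deployment
  minReplicas: 1        # scale down to this floor when load drops
  maxReplicas: 5        # cap on scale-out during peak load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # scale out when avg CPU exceeds 80%
```

With such a manifest applied, the HPA controller raises the replica count of the cloud worker when average CPU utilization stays above the target and lowers it again as load recedes, which is the scale-out/scale-in behavior the paragraph above describes.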
Which issue(s) this PR fixes:

Fixes kubeedge/kubeedge#5753

@kubeedge-bot kubeedge-bot added the kind/documentation Categorizes issue or PR as related to documentation. label Nov 14, 2024
@kubeedge-bot (Collaborator) commented:
Welcome @ajie65! It looks like this is your first PR to kubeedge/sedna 🎉

@kubeedge-bot kubeedge-bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Nov 14, 2024
Signed-off-by: huang qi jie <huangqijie@gmail.com>
@MooreZheng (Contributor) commented:
@tangming1996 @jaypume

@tangming1996 (Contributor) commented:
/lgtm

@kubeedge-bot kubeedge-bot added the lgtm Indicates that a PR is ready to be merged. label Nov 22, 2024
@MooreZheng (Contributor) commented:
/lgtm

@MooreZheng (Contributor) left a review comment:


Overall it looks good to me. The HPA could be used for

  1. resource reduction under dynamic demand, e.g., chatbots;
  2. releasing the resource-budget burden in the design phase, i.e., one need not calculate the exact cost of cloud resources when developing a new system.

@MooreZheng (Contributor) left a review comment:


The motivating example can be refined further, since in some cases demand appears stable. For instance, in camera systems like the helmet detection that this proposal refers to, both the number of cameras and the demand from the image streams are rather stable, so it makes little sense to apply such a technique to helmet detection as-is.

As @tangming1996 proposed in the routine meeting, even though the edge workers may see stable demand in this case, the hard examples routed to the cloud can still create dynamic demand on the cloud workers. The motivating example of this proposal can therefore be polished according to that scenario.

@MooreZheng (Contributor) commented:
/approve

@kubeedge-bot (Collaborator) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: MooreZheng

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubeedge-bot kubeedge-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 28, 2024
@kubeedge-bot kubeedge-bot merged commit 6f0b2a4 into kubeedge:main Nov 28, 2024
11 checks passed