[LFX'24] Add Sedna Joint inference HPA Proposal #457
Conversation
Welcome @ajie65! It looks like this is your first PR to kubeedge/sedna 🎉
Signed-off-by: huang qi jie <huangqijie@gmail.com>
Force-pushed from 87fa9ce to dc6e801
/lgtm
/lgtm
Overall it looks good to me. The HPA could be used for
- reducing resource consumption under dynamic demand, e.g., chatbots.
- easing the resource-budget burden at design time: one would not need to estimate the exact cost of cloud resources when developing a new system.
The motivation example could be further refined for cases where demand appears stable. For instance, in camera systems like the helmet detection that this proposal refers to, both the number of cameras and the demand from the image streams are rather stable, so it makes little sense to apply such a technique to helmet detection alone.
As @tangming1996 proposed in the routine meeting, even though the edge workers in this case may see stable demand, hard examples forwarded to the cloud can still produce dynamic demand on the cloud workers. The motivation example of this proposal can therefore be polished around that scenario.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: MooreZheng
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
What type of PR is this?
/kind documentation
What this PR does / why we need it:
The rapid advancement of AI has led to the widespread application of deep learning models across various fields. However, the resource demands for model inference tasks can fluctuate significantly, especially during peak periods, posing a challenge to the system's computing capabilities. To address this varying load demand, we propose an elastic inference solution leveraging KubeEdge and Horizontal Pod Autoscaling (HPA) to enable dynamic scaling of inference tasks.
KubeEdge is an edge computing framework that extends Kubernetes' capabilities to edge devices, allowing applications to be deployed and managed on edge nodes. By utilizing KubeEdge, we can distribute inference tasks across different edge devices and cloud resources, achieving efficient resource utilization and task processing.
The core of collaborative inference lies in coordinating computing resources across devices so that inference tasks can be allocated dynamically based on the current load. When the system detects an increase in load, the HPA mechanism automatically scales out the number of inference worker replicas to meet demand; conversely, when the load decreases, replicas are scaled back in to reduce operational costs. This approach maintains inference performance while keeping resource allocation close to actual need.
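To make the mechanism concrete, below is a minimal sketch of a standard Kubernetes `autoscaling/v2` HPA manifest targeting a cloud inference worker. The workload name, namespace, replica bounds, and CPU threshold are illustrative assumptions, not part of this proposal; it also assumes the metrics pipeline (e.g., metrics-server) is available so the HPA controller can observe pod CPU utilization.

```yaml
# Sketch only: scales a hypothetical cloud inference worker Deployment
# based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cloud-worker-hpa          # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helmet-detection-inference-cloud   # hypothetical workload name
  minReplicas: 1                  # floor during low demand
  maxReplicas: 5                  # ceiling during peak demand
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # scale out when average CPU exceeds 80%
```

With such a manifest applied (`kubectl apply -f hpa.yaml`), the HPA controller periodically compares observed utilization against the target and adjusts the replica count within the configured bounds, which matches the scale-out/scale-in behavior described above.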
Which issue(s) this PR fixes:
Fixes kubeedge/kubeedge#5753