Pod to Pod Bandwidth Reservation #197
Comments
yeah, the holy grail of universal traffic shaping :) I don't have an exact design for this, but such a solution should adhere to some basic requirements:
@Levovar Thank you for the detailed insights, they are indeed helpful. You mentioned that you had considered this internally. What did you consider, and what blockers did you face, if any? Also, what were the reasons not to pursue this until now? Just trying to gather as much information as possible before we proceed.
@mudverma Well, besides the fact that there always seem to be higher priority items, I think we first wanted to make sure that if such functionality is introduced, not everyone can influence or change these settings once they are applied. So the new (ClusterNetwork, TenantNetwork) APIs were always considered higher priority, but that work is done now. In any case, I think it is better to separate these needs: one API for shaping and one for policing, each with its own independent, optional controller. If you do implement such functionality, it would be best to have an online design discussion first, once you have some ideas regarding APIs, architecture, etc.
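To make the "separate API for shaping, with its own optional controller" idea more concrete, here is a rough Go sketch of what such a custom resource could look like. Everything here is hypothetical: the `BandwidthReservation` type, its fields, and the selector/rate semantics are not part of any existing API and are only a possible starting point for the design discussion mentioned above.

```go
// Hypothetical API sketch only: none of these types exist in the project today.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BandwidthReservation is a hypothetical resource describing a guaranteed rate
// between two sets of Pods, reconciled by an optional shaping controller.
// A separate, similarly optional resource could cover policing.
type BandwidthReservation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec BandwidthReservationSpec `json:"spec"`
}

// BandwidthReservationSpec selects the source and destination Pods and the
// rate to reserve between them.
type BandwidthReservationSpec struct {
	// Source selects the Pods whose egress traffic towards Destination is shaped.
	Source metav1.LabelSelector `json:"source"`
	// Destination selects the Pods the reservation applies towards.
	Destination metav1.LabelSelector `json:"destination"`
	// MinBandwidth is the guaranteed rate, e.g. "100M".
	MinBandwidth string `json:"minBandwidth"`
	// MaxBandwidth is an optional ceiling, e.g. "200M".
	MaxBandwidth string `json:"maxBandwidth,omitempty"`
}

// BandwidthReservationList is the standard list wrapper for the resource.
type BandwidthReservationList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []BandwidthReservation `json:"items"`
}
```

Keeping the policing API as a separate resource (not shown) would preserve the independence of the two optional controllers suggested above.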
Proposal/Question:
For bandwidth-sensitive telco and/or video/audio streaming applications, we need to reserve bandwidth between pods. This is especially needed in service chaining scenarios.
Currently, Calico supports local traffic shaping (per-pod bandwidth limits). What we want is bandwidth reservation between pods.
Can this be achieved with the current implementation? If not, we are considering adding such a feature.
We would like the community's opinion on this.
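For reference, a minimal sketch of what the per-pod shaping mentioned above looks like today: the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations are honored by the CNI bandwidth meta-plugin, which Calico can chain. The pod name, image, and rate values below are made up; the point is that the limit is attached to a single pod, not to a pod pair, which is the gap this proposal is about.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "streamer",
			Annotations: map[string]string{
				// Per-pod limits only: applied on the node where the Pod runs,
				// regardless of which peer the traffic goes to or comes from.
				"kubernetes.io/ingress-bandwidth": "100M",
				"kubernetes.io/egress-bandwidth":  "100M",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "example/app:latest"},
			},
		},
	}
	fmt.Printf("bandwidth annotations: %+v\n", pod.ObjectMeta.Annotations)
}
```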