We should use headless services. We don't need load balancing, since there is a single pod backing each service, so the kube-proxy indirection is pure overhead. This should provide some performance benefit, but I don't know how much.
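For reference, a headless service is an ordinary Service with `clusterIP` set to `None`: DNS then resolves the service name directly to the backing pod's IP instead of to a kube-proxy-managed virtual IP. A minimal sketch, with job and task names invented for illustration:

```yaml
# Hypothetical headless service for one training replica.
# The name, labels, and port are illustrative, not from any real controller.
apiVersion: v1
kind: Service
metadata:
  name: mnist-train-worker-0
spec:
  clusterIP: None        # headless: no virtual IP, no kube-proxy load balancing
  selector:
    tf_job: mnist-train
    task: worker-0       # exactly one pod matches, so DNS returns one pod IP
  ports:
  - port: 2222           # TensorFlow gRPC port
```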
@jlewi I think LB is sort of useful for TensorFlow Serving jobs. Generally we launch a model (e.g. face recognition) in a TensorFlow Serving pod while the number of requests is manageable. But as the number of requests grows, a single pod is not enough to handle them. Based on that assumption, I prefer launching TensorFlow Serving jobs as a Deployment: on the one hand, a Deployment is good at scaling up and down; on the other hand, a Deployment recovers dead pods automatically.
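To make the serving case concrete, a sketch of TensorFlow Serving run as a scalable Deployment; the model name, replica count, and paths here are invented for illustration:

```yaml
# Hypothetical Deployment running multiple TensorFlow Serving replicas
# behind a (non-headless) Service, so requests are load-balanced.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: face-recognition-serving
spec:
  replicas: 3            # scale up as request volume grows
  selector:
    matchLabels:
      app: face-recognition-serving
  template:
    metadata:
      labels:
        app: face-recognition-serving
    spec:
      containers:
      - name: serving
        image: tensorflow/serving
        args:
        - "--model_name=face_recognition"
        - "--model_base_path=/models/face_recognition"
        ports:
        - containerPort: 8500   # gRPC
```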
Do you have other suggestions on how we might improve networking efficiency?
I think the key point is the implementation of the pod network. Many Kubernetes users set up the Flannel overlay network by default, but Flannel is not a good choice for TensorFlow and other DL workloads, since the overlay encapsulation adds latency and CPU cost to every packet. If we really want to improve networking efficiency, we'd better use other network options, such as the host network.
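For example, a pod can bypass the overlay entirely by opting into the node's network namespace. A sketch, with the pod name, image, and command invented for illustration:

```yaml
# Hypothetical training pod using the node's network stack directly,
# skipping the overlay (e.g. Flannel) entirely.
apiVersion: v1
kind: Pod
metadata:
  name: tf-worker-hostnet
spec:
  hostNetwork: true                      # use the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet     # keep cluster DNS working with hostNetwork
  containers:
  - name: worker
    image: tensorflow/tensorflow
    command: ["python", "/app/train.py"] # illustrative training entrypoint
```

The trade-off is that host networking consumes node ports directly, so two pods on the same node can collide on a port; that constrains scheduling.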
jlewi changed the title from "Use headless services" to "Use headless services for Training jobs" on Dec 7, 2017
Sorry, I should have clarified: by headless services I only meant in the context of training jobs. For training jobs we need to assign a stable name to each replica, so for a given replica there should be only one pod backing it. With nothing to balance across, the load-balancing layer is just overhead.
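Concretely, with one headless service per replica, the TensorFlow cluster spec can refer to stable DNS names that survive pod restarts. A hypothetical sketch of such a spec embedded in a ConfigMap (all names invented for illustration):

```yaml
# Hypothetical cluster spec: each address is a per-replica headless
# service name, which DNS resolves to whichever pod currently backs it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mnist-train-cluster-spec
data:
  TF_CLUSTER_SPEC: |
    {
      "ps":     ["mnist-train-ps-0:2222"],
      "worker": ["mnist-train-worker-0:2222",
                 "mnist-train-worker-1:2222"]
    }
```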
Regarding network performance, is there a simple benchmark that can be run to measure network performance in a way that's relevant to TF/DL?
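One low-effort option (not TF-specific, but a reasonable proxy for gRPC tensor transfers) is to run iperf between two pods and compare throughput on the overlay versus host networking. A sketch, with names and the community image chosen for illustration:

```yaml
# Hypothetical iperf server pod plus a headless service in front of it.
# Run the client from a second pod on another node with:
#   iperf3 -c iperf-server -t 30
apiVersion: v1
kind: Pod
metadata:
  name: iperf-server
  labels:
    app: iperf-server
spec:
  containers:
  - name: iperf
    image: networkstatic/iperf3   # illustrative community image
    args: ["-s"]                  # server mode
---
apiVersion: v1
kind: Service
metadata:
  name: iperf-server
spec:
  clusterIP: None                 # headless: resolves straight to the pod IP
  selector:
    app: iperf-server
  ports:
  - port: 5201                    # iperf3 default port
```

Repeating the run with `hostNetwork: true` on both pods would give a rough bound on the overlay's overhead.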