Doppler out of memory when scaled #241
Comments
@viovanov blocker for 1.0?
The problem here seems to be with log-cache. In my initial investigation, I couldn't find the exact problem. For the 0.2 release, I'm going to restrict HA for doppler. We should get back to it for 1.0 though.
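For context, "restrict HA for doppler" presumably means pinning doppler to a single instance in the kubecf chart. A minimal sketch of what that could look like, assuming the chart exposes per-instance-group counts under a sizing key; the high_availability and sizing.doppler.instances paths are assumptions, not values confirmed in this thread:

```bash
# Hypothetical values override: keep doppler single-instance even when the
# rest of the deployment runs in HA mode. Key paths are assumptions.
helm upgrade kubecf ./kubecf.tgz --namespace kubecf \
  --reuse-values \
  --set "high_availability=true" \
  --set "sizing.doppler.instances=1"
```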
I changed the priority from Critical to High given that there is a temporary fix in place already.
I wouldn't call "don't scale it" a temporary fix for "this can't be scaled".
@loewenstein Do you have spare cycles to help debug this?
Seems like the original solution that was proposed is not going to work out. An issue has been filed upstream, though it might take some time to be fixed as the root cause hasn't been identified yet. Since the problematic part is the
As discussed with @viovanov, we will keep it as is for now. |
Based on discussions, upstream's VM deployments don't seem to be suffering from this issue. We should implement the split of log-cache from doppler.
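A rough sketch of how such a split could be expressed as a BOSH ops-file against the rendered cf-deployment manifest; the job names, instance-group fields, and file names below are illustrative assumptions, not the actual change:

```bash
# Hypothetical ops-file: move the log-cache job out of the doppler instance
# group into its own group so doppler can be scaled without log-cache.
cat > move-log-cache.yml <<'EOF'
- type: remove
  path: /instance_groups/name=doppler/jobs/name=log-cache
- type: replace
  path: /instance_groups/-
  value:
    name: log-cache
    instances: 1
    azs: [z1]
    vm_type: minimal
    stemcell: default
    networks:
    - name: default
    jobs:
    - name: log-cache
      release: log-cache
EOF

# Sanity-check that the manifest still interpolates cleanly with the ops-file.
bosh interpolate cf-deployment.yml -o move-log-cache.yml > /dev/null
```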
Describe the bug
When scaling to two Doppler instances and deploying an application, the memory consumption increases until either the k8s node crashes or Doppler is killed by the OOMKiller.
To Reproduce
Install kubecf master with cf-operator master.
Run cf push with the example Dora app.
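A hedged command sketch of the steps above; the chart archive paths, system domain, sizing key, and Dora app location are assumptions rather than values taken from this report:

```bash
# Sketch only: install cf-operator and kubecf, scale doppler to two
# instances, push Dora, and watch doppler memory. Paths and keys are assumed.
kubectl create namespace cf-operator
helm install cf-operator ./cf-operator.tgz --namespace cf-operator

kubectl create namespace kubecf
helm install kubecf ./kubecf.tgz --namespace kubecf \
  --set "system_domain=example.com" \
  --set "sizing.doppler.instances=2"   # the failing, scaled-out setup

# Log in (credential retrieval omitted) and push the example Dora app,
# assumed to be the one shipped with cf-acceptance-tests.
cf api https://api.example.com --skip-ssl-validation
cf push dora -p cf-acceptance-tests/assets/dora

# Observe doppler memory growth (requires metrics-server on the cluster).
watch 'kubectl top pods -n kubecf | grep doppler'
```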
Expected behavior
Doppler should consume a reasonable amount of memory.
Environment