Programs that run millions of concurrent goroutines (e.g. busy web servers, game servers, RPC servers) are usually sensitive to the amount of memory occupied by goroutine stacks.
For instance, we currently have a web service running up to a million concurrent goroutines per physical machine. It eats up to 4 GB of stack per machine, which is acceptable. But when we first tried enabling gzip compression, we immediately hit #18625, which increased per-machine stack usage to more than 40 GB. We couldn't quickly figure out what ate the stack, so we resorted to a hack instead of fixing the issue in the `compress/flate` package.
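For context, the closest signal available today is the aggregate stack accounting in `runtime.MemStats`, which reports how many bytes the stacks occupy but says nothing about which call paths grew them. A minimal sketch using only the existing runtime API (the recursive workload below is a made-up stand-in for a real stack hog):

```go
// Sketch: aggregate stack usage is visible via runtime.MemStats,
// but there is no per-stack-trace attribution.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// grow recurses n frames deep, each frame holding a 256-byte local,
// then blocks so the grown stack stays live.
//go:noinline
func grow(n int, block <-chan struct{}) byte {
	var pad [256]byte
	pad[0] = byte(n)
	if n > 0 {
		pad[1] = grow(n-1, block)
	} else {
		<-block
	}
	return pad[0] + pad[1]
}

func main() {
	block := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			grow(100, block)
		}()
	}
	time.Sleep(time.Second) // let the goroutines grow their stacks

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	// Totals only: no way to tell which stack traces ate the memory.
	fmt.Printf("StackInuse=%d StackSys=%d\n", ms.StackInuse, ms.StackSys)

	close(block)
	wg.Wait()
}
```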
It would be great to have a stack size profiler that could show the stack traces leading to peak stack usage, so stack hogs could be easily detected and fixed.
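One possible shape for such a profiler (the profile name below is purely hypothetical; nothing like it exists in `runtime/pprof` today) would be a profile alongside the existing goroutine and heap profiles, with samples weighted by stack bytes instead of counts:

```go
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// Hypothetical: "stacksize" is an assumed profile name; runtime/pprof
	// has no such profile today, so Lookup returns nil.
	if p := pprof.Lookup("stacksize"); p != nil {
		// Would dump stack traces weighted by the stack bytes they pinned,
		// analogous to how the heap profile weights by allocated bytes.
		p.WriteTo(os.Stdout, 1)
	}
}
```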