Calling String() on info.Args() in a callback causes v8go@v0.8.0 to leak memory.

pprof output:
```
Type: inuse_space
Time: Jan 29, 2023 at 9:20am (-05)
Showing nodes accounting for 2064.64MB, 100% of 2064.64MB total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                         1952.78MB   100% |   rogchap.com/v8go.(*Value).String \go\pkg\mod\rogchap.com\v8go@v0.8.0\value.go:244 (inline)
 1952.78MB 94.58% 94.58%  1952.78MB 94.58%                | rogchap.com/v8go._Cfunc_GoStringN _cgo_gotypes.go:572
----------------------------------------------------------+-------------
```
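The callback in question is shaped roughly like the sketch below. This is not our production code: the `emit` name and the script are placeholders, and it assumes the v0.7+ API in which `NewIsolate` and `NewContext` no longer return errors.

```go
package main

import (
	"fmt"

	v8 "rogchap.com/v8go"
)

func main() {
	iso := v8.NewIsolate()
	defer iso.Dispose()

	global := v8.NewObjectTemplate(iso)

	// Go callback invoked from JS. Stringifying each argument is the
	// (*Value).String -> _Cfunc_GoStringN path that dominates the profile above.
	emit := v8.NewFunctionTemplate(iso, func(info *v8.FunctionCallbackInfo) *v8.Value {
		for _, arg := range info.Args() {
			fmt.Println(arg.String())
		}
		return nil
	})
	if err := global.Set("emit", emit); err != nil {
		panic(err)
	}

	ctx := v8.NewContext(iso, global)
	defer ctx.Close()

	// Each event runs a small script that calls back into Go with a payload.
	if _, err := ctx.RunScript(`emit(JSON.stringify({id: 1, payload: "x".repeat(1024)}))`, "event.js"); err != nil {
		panic(err)
	}
}
```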
Call graph:
We have 12 nodes, each running a pool of 128 isolates.
The service uses v8go to process events at ~40 req/s per node.
We reliably close the context after every event.
We reliably dispose of each isolate after it has processed ~100 events, or sooner if its heap exceeds 20MB (a sketch of this recycling loop is further down).
We call runtime.GC() manually after every isolate disposal.
The new GOMEMLIMIT parameter has no effect on memory growth.
Memory growth remains unbounded, and nodes end up OOMKilled by Kubernetes.
Downgrading to v8go@v0.7.0 does not solve the issue; the profile below, taken on v0.7.0, shows the same pattern.
```
Type: inuse_space
Time: Jan 29, 2023 at 10:01am (-05)
Showing nodes accounting for 605.62MB, 100% of 605.62MB total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                          565.05MB   100% |   rogchap.com/v8go.(*Value).String \go\pkg\mod\rogchap.com\v8go@v0.7.0\value.go:246 (inline)
  565.05MB 93.30% 93.30%   565.05MB 93.30%                | rogchap.com/v8go._Cfunc_GoString _cgo_gotypes.go:546
----------------------------------------------------------+-------------
```
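The isolate recycling loop referenced above looks roughly like this. It is a sketch, not the actual service code: the worker type and helper names are made up, the thresholds match the numbers described above, and it assumes the v0.7+ API.

```go
package pool

import (
	"runtime"

	v8 "rogchap.com/v8go"
)

const (
	maxEventsPerIsolate = 100      // recycle after ~100 events...
	maxHeapBytes        = 20 << 20 // ...or once the isolate heap exceeds 20MB
)

// worker owns one isolate out of the pool of 128 per node.
type worker struct {
	iso    *v8.Isolate
	events int
}

func (w *worker) handleEvent(script string) error {
	// One context per event, closed as soon as the event has been processed.
	ctx := v8.NewContext(w.iso)
	defer ctx.Close()

	if _, err := ctx.RunScript(script, "event.js"); err != nil {
		return err
	}

	w.events++
	if w.events >= maxEventsPerIsolate || w.iso.GetHeapStatistics().TotalHeapSize > maxHeapBytes {
		w.recycle()
	}
	return nil
}

func (w *worker) recycle() {
	// Dispose of the isolate and force a Go GC, as described above;
	// process RSS keeps growing regardless.
	w.iso.Dispose()
	runtime.GC()
	w.iso = v8.NewIsolate()
	w.events = 0
}
```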
Has anyone else experienced this behavior before?
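For completeness, this is how the memory limit is applied; the snippet is illustrative and the 1750MiB value is a placeholder, not our actual limit. Setting GOMEMLIMIT as an environment variable and calling debug.SetMemoryLimit (Go 1.19+) are equivalent, and the soft limit only covers memory managed by the Go runtime, not memory V8 allocates through cgo.

```go
package main

import "runtime/debug"

func main() {
	// Equivalent to running the service with GOMEMLIMIT=1750MiB (Go 1.19+).
	// The soft limit applies to the Go-managed heap only; memory that V8
	// allocates on the C side is not counted against it.
	debug.SetMemoryLimit(1750 << 20)

	// ... start the isolate pool / event loop here ...
}
```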