Cue OOM killer #147

Closed
jeffmccune opened this issue Apr 25, 2024 · 7 comments

Comments

@jeffmccune
Contributor

jeffmccune commented Apr 25, 2024

Problem:

On my 32 GiB workstation with 1G of swap, the following command results in multiple processes consuming over 30% of system memory. The Linux OOM killer kicks in and starts sending kill -9 to random processes.

./hack/render-all | time xargs -P8 -I% bash -c %

Solution:

???

Result:
We have a way to limit memory usage. It's acceptable to run holos in parallel, but we need to get usage under 4Gi; otherwise we won't be able to run it inside pods with reasonable resource limits in place.

Where to start

Note

See CUE_STATS_FILE, which is undocumented but may be useful.
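
A sketch of how CUE_STATS_FILE might be used with the standalone cue CLI (this assumes the CLI and the evaluator embedded in holos behave similarly; the stats output is undocumented and may change between versions):

CUE_STATS_FILE=/tmp/cue-stats.json \
  cue export ./docs/examples/platforms/reference/clusters/foundation/cloud/... >/dev/null
cat /tmp/cue-stats.json   # evaluator counters such as unifications and disjuncts; format not guaranteed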

Location                    Note
--------------------------------------------------------------------------------------------------
ProjectHosts                Best guess: #ProjectHosts is brutal and enumerates all hosts for a project
EnvHosts                    Possible: #EnvHosts is brutal, but it shouldn't be used much since it was the first stab for httpbin (project-template.cue)
mesh.cue                    Highly suspect: FQDN and Host in ProjectHosts
gateway.cue                 Suspect: ClusterDefaultGatewayServers
meshconfig.cue              Not used: the auth proxy for each stage
platform-projects.cue       Probably small
workload projects.cue       Leaf node, not likely a problem
provisioner projects.cue    Leaf node, not likely a problem
@jeffmccune
Contributor Author

It might be that the way holos render loops over CUE instances when the user specifies /... at the end of the build arguments causes memory to balloon.

A quick fix might be "don't use /..." and instead call holos render as a separate process for each holos component.

Does memory still balloon if each holos component is rendered as an individual command instead of one catch-all?
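
A minimal sketch of that approach (the component paths below are illustrative, not the real layout):

# Render each component in its own process so memory is returned to the OS when
# each process exits, instead of accumulating across one /... invocation.
for component in argocd certmanager mesh/istio-base mesh/istio/istiod; do
  holos render --cluster-name=k2 "./docs/examples/platforms/reference/clusters/foundation/cloud/${component}"
done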

jeffmccune added a commit that referenced this issue Apr 25, 2024
To enumerate all of the instances so that they can be run in separate processes with xargs instead of being run in the for loop in the Builder Run method.
@jeffmccune
Contributor Author

jeffmccune commented Apr 25, 2024

I'll leave it here with you, Nate. I think my hypothesis in the above comment is probably the quick band-aid fix.

The hypothesis holds. When running with /..., the garbage collector struggles to keep up and the GC goal balloons to ~10 GiB:

GODEBUG=gctrace=1 holos render --cluster-name=k2 /home/jeff/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/foundation/cloud/...
gc 152 9244 MB goal
gc 144 @10.393s 1%: 0.037+114+0.060 ms clock, 0.30+0.15/229/595+0.48 ms cpu, 1061->1078->754 MB, 1131 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 145 @11.796s 1%: 0.041+133+0.009 ms clock, 0.33+0.23/266/688+0.074 ms cpu, 1415->1437->893 MB, 1509 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 146 @13.411s 1%: 0.039+190+0.042 ms clock, 0.31+0.18/381/951+0.33 ms cpu, 1676->1714->1218 MB, 1788 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 147 @15.721s 2%: 0.038+238+0.052 ms clock, 0.30+0.23/475/1205+0.41 ms cpu, 2308->2353->1523 MB, 2438 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 148 @18.587s 2%: 0.041+328+0.007 ms clock, 0.32+0.21/655/1638+0.061 ms cpu, 2884->2951->2071 MB, 3048 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 149 @22.523s 2%: 0.042+426+0.009 ms clock, 0.33+0.18/851/2124+0.079 ms cpu, 3910->3995->2689 MB, 4144 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 150 @27.529s 2%: 0.037+623+0.009 ms clock, 0.30+0.18/1246/3121+0.076 ms cpu, 5076->5174->3601 MB, 5380 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 151 @34.300s 2%: 0.044+738+0.041 ms clock, 0.35+0.21/1474/3671+0.33 ms cpu, 6793->6946->4621 MB, 7203 MB goal, 0 MB stacks, 1 MB globals, 8 P
gc 152 @43.301s 2%: 0.047+943+0.010 ms clock, 0.37+0.15/1886/4695+0.086 ms cpu, 8712->8900->5866 MB, 9244 MB goal, 0 MB stacks, 1 MB globals, 8 P
2:07PM INF render.go:70 rendered prod-platform-argocd version=0.70.0 cluster=k2 status=ok action=rendered name=prod-platform-argocd
2:07PM INF render.go:70 rendered prod-mesh-certmanager version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-certmanager
2:07PM INF render.go:70 rendered prod-github-arc-runner version=0.70.0 cluster=k2 status=ok action=rendered name=prod-github-arc-runner
2:07PM INF render.go:70 rendered prod-github-arc-system version=0.70.0 cluster=k2 status=ok action=rendered name=prod-github-arc-system
2:07PM INF render.go:70 rendered prod-secrets-namespaces version=0.70.0 cluster=k2 status=ok action=rendered name=prod-secrets-namespaces
2:07PM INF render.go:70 rendered prod-mesh-cni version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-cni
2:07PM INF render.go:70 rendered prod-mesh-gateway version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-gateway
2:07PM INF render.go:70 rendered prod-mesh-httpbin version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-httpbin
2:07PM INF render.go:70 rendered prod-mesh-ingress version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-ingress
2:07PM INF render.go:70 rendered prod-mesh-istiod version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-istiod
2:07PM INF render.go:70 rendered prod-mesh-istio-base version=0.70.0 cluster=k2 status=ok action=rendered name=prod-mesh-istio-base
2:07PM INF render.go:70 rendered prod-platform-obs version=0.70.0 cluster=k2 status=ok action=rendered name=prod-platform-obs
2:07PM INF render.go:70 rendered prod-pgo-controller version=0.70.0 cluster=k2 status=ok action=rendered name=prod-pgo-controller
2:07PM INF render.go:70 rendered prod-pgo-crds version=0.70.0 cluster=k2 status=ok action=rendered name=prod-pgo-crds
2:07PM INF render.go:70 rendered prod-secrets-eso version=0.70.0 cluster=k2 status=ok action=rendered name=prod-secrets-eso
2:07PM INF render.go:70 rendered prod-secrets-eso-creds-refresher version=0.70.0 cluster=k2 status=ok action=rendered name=prod-secrets-eso-creds-refresher
2:07PM INF render.go:70 rendered prod-secrets-stores version=0.70.0 cluster=k2 status=ok action=rendered name=prod-secrets-stores
2:07PM INF render.go:70 rendered prod-secrets-validate version=0.70.0 cluster=k2 status=ok action=rendered name=prod-secrets-validate

However, we can use the --print-instances flag I added in PR #148 to spread these out into multiple processes:

holos render --cluster-name=k2 /home/jeff/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/foundation/cloud/... --print-instances \
  | GODEBUG=gctrace=1 xargs -t -P1 -I% holos render --cluster-name=k2 % 2>&1 | tee foo.txt

https://gist.github.com/jeffmccune/bf8f634f7462b1916e7a0d3383e3d354

A quick scan of this gist provides some insight. Some components don't take much memory at all.

prod-mesh-gateway takes a lot: 2463 MB goal at the end.

prod-platform-obs reaches only a 58 MB goal; maybe it completely bypasses the projects structures?

etc...

Overall, though, spreading the components out is a quick win: the largest balloon is around 2 GiB instead of 10 GiB.
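
One way to pull the peak heap goal out of a gctrace capture like the one in the gist (a sketch, not part of the thread; render.log is a placeholder, and the awk fields assume the "N MB goal" layout shown in the logs above):

grep '^gc ' render.log \
  | awk -F', ' '{ split($4, a, " "); v = a[1] + 0; if (v > max) max = v } END { print max + 0, "MB peak goal" }'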

@natemccurdy
Contributor

In Slack, Jeff mentioned that https://github.com/holos-run/holos/blob/v0.70.0/docs/examples/platforms/reference/clusters/foundation/cloud/mesh/mesh.cue#L11 might be a big contributor to memory requirements.

"That is the single auth proxy we use for everything."

I bet that's the culprit.

@natemccurdy
Contributor

I processed the GC logs while rendering each cluster's individual instances and found that the foundation/cloud/mesh/ and provisioner/projects paths are, by an order of magnitude, the biggest memory hogs.

Render memory usage per instance per cluster
MB    CLUSTER      PATH
-----------------------------------------------------------------------------------
2581  core2        platforms/reference/clusters/foundation/cloud/mesh/istio/gateway
2544  provisioner  platforms/reference/clusters/provisioner/projects
2540  core1        platforms/reference/clusters/foundation/cloud/mesh/istio/gateway
2217  k1           platforms/reference/clusters/foundation/cloud/mesh/istio/gateway
2112  k2           platforms/reference/clusters/foundation/cloud/mesh/istio/gateway
1349  k1           platforms/reference/clusters/foundation/cloud/mesh/istio/ingress
1294  core2        platforms/reference/clusters/foundation/cloud/mesh/istio/ingress
1236  core2        platforms/reference/clusters/foundation/cloud/mesh/istio/istiod
1221  k2           platforms/reference/clusters/foundation/cloud/mesh/istio/ingress
1215  core1        platforms/reference/clusters/foundation/cloud/mesh/istio/istiod
1182  core2        platforms/reference/clusters/foundation/cloud/mesh/istio-base
1171  core1        platforms/reference/clusters/foundation/cloud/mesh/istio/httpbin
1163  core1        platforms/reference/clusters/foundation/cloud/mesh
1153  k2           platforms/reference/clusters/foundation/cloud/mesh/istio/cni
1139  k1           platforms/reference/clusters/foundation/cloud/mesh/istio-base
1123  k2           platforms/reference/clusters/foundation/cloud/mesh/istio-base
1119  core1        platforms/reference/clusters/foundation/cloud/mesh/istio/cni
1112  k2           platforms/reference/clusters/foundation/cloud/mesh/istio/istiod
1107  core1        platforms/reference/clusters/foundation/cloud/mesh/istio/ingress
1101  core1        platforms/reference/clusters/foundation/cloud/mesh/istio-base
1045  k2           platforms/reference/clusters/foundation/cloud/mesh
1039  core1        platforms/reference/clusters/foundation/cloud/mesh/istio
1030  core2        platforms/reference/clusters/foundation/cloud/mesh/istio
1010  k1           platforms/reference/clusters/foundation/cloud/mesh
983   k2           platforms/reference/clusters/foundation/cloud/mesh/istio/httpbin
972   k1           platforms/reference/clusters/foundation/cloud/mesh/istio/istiod
959   core2        platforms/reference/clusters/foundation/cloud/mesh/istio/cni
955   k1           platforms/reference/clusters/foundation/cloud/mesh/istio
945   core2        platforms/reference/clusters/foundation/cloud/mesh
931   k2           platforms/reference/clusters/foundation/cloud/mesh/istio
930   k1           platforms/reference/clusters/foundation/cloud/mesh/istio/httpbin
908   k1           platforms/reference/clusters/foundation/cloud/mesh/istio/cni
898   core2        platforms/reference/clusters/foundation/cloud/mesh/istio/httpbin
88    provisioner  platforms/reference/clusters/provisioner/secrets/eso-creds-refresher
61    k1           platforms/reference/clusters/foundation/cloud/obs
54    k2           platforms/reference/clusters/foundation/cloud/obs
54    core2        platforms/reference/clusters/foundation/cloud/secrets/secretstores
53    core1        platforms/reference/clusters/foundation/cloud/secrets/secretstores
53    core1        platforms/reference/clusters/foundation/cloud/github/arc/system
51    core1        platforms/reference/clusters/foundation/cloud/argocd
50    k2           platforms/reference/clusters/foundation/cloud/secrets/secretstores
50    k1           platforms/reference/clusters/foundation/cloud/secrets/secretstores
49    core2        platforms/reference/clusters/foundation/cloud/github/arc/system
49    core1        platforms/reference/clusters/foundation/cloud/obs
46    core2        platforms/reference/clusters/foundation/cloud/obs
46    core2        platforms/reference/clusters/foundation/cloud/argocd
45    k2           platforms/reference/clusters/foundation/cloud/github/arc/system
43    k1           platforms/reference/clusters/foundation/cloud/argocd
41    k2           platforms/reference/clusters/foundation/cloud/pgo/crds
41    k1           platforms/reference/clusters/foundation/cloud/secrets/eso-creds-refresher
41    k1           platforms/reference/clusters/foundation/cloud/github/arc/system
39    provisioner  platforms/reference/clusters/provisioner/mesh/certificates
39    k2           platforms/reference/clusters/foundation/cloud/argocd
38    provisioner  platforms/reference/clusters/provisioner/secrets/namespaces
38    core2        platforms/reference/clusters/accounts/iam/zitadel/zitadel
37    core1        platforms/reference/clusters/foundation/metal/ceph
36    core1        platforms/reference/clusters/foundation/cloud/github/arc/runner
35    k1           platforms/reference/clusters/foundation/cloud/init/namespaces
34    k2           platforms/reference/clusters/foundation/cloud/secrets
34    k1           platforms/reference/clusters/foundation/cloud/secrets
34    k1           platforms/reference/clusters/foundation/cloud/pgo/controller
34    core2        platforms/reference/clusters/foundation/cloud/certmanager
34    core1        platforms/reference/clusters/optional/vault
34    core1        platforms/reference/clusters/accounts/iam/zitadel/zitadel
34    core1        platforms/reference/clusters/accounts/iam/monitoring
33    provisioner  platforms/reference/clusters/provisioner/mesh/istio/ingress
33    provisioner  platforms/reference/clusters/provisioner/mesh/certmanager
33    provisioner  platforms/reference/clusters/provisioner/iam/zitadel
33    core2        platforms/reference/clusters/optional/vault
33    core2        platforms/reference/clusters/foundation/cloud/secrets/validate
33    core2        platforms/reference/clusters/foundation/cloud/secrets/eso
33    core2        platforms/reference/clusters/foundation/cloud/secrets
33    core2        platforms/reference/clusters/foundation/cloud/pgo/crds
33    core2        platforms/reference/clusters/accounts/iam/monitoring
33    core1        platforms/reference/clusters/workload/projects
33    core1        platforms/reference/clusters/foundation/cloud/pgo
33    core1        platforms/reference/clusters/accounts/iam/zitadel/postgres
32    provisioner  platforms/reference/clusters/provisioner/iam
32    k2           platforms/reference/clusters/workload/projects
32    k2           platforms/reference/clusters/foundation/cloud/secrets/eso-creds-refresher
32    k2           platforms/reference/clusters/foundation/cloud/init/namespaces
32    k2           platforms/reference/clusters/foundation/cloud/certmanager
32    k1           platforms/reference/clusters/foundation/cloud/github/arc/runner
32    core2        platforms/reference/clusters/foundation/metal/ceph
32    core2        platforms/reference/clusters/accounts/iam/zitadel/postgres
32    core2        platforms/reference/clusters/accounts/iam
32    core1        platforms/reference/clusters/foundation/cloud/pgo/crds
31    provisioner  platforms/reference/clusters/provisioner/platform-issuer
31    k2           platforms/reference/clusters/foundation/cloud/secrets/eso
31    k2           platforms/reference/clusters/foundation/cloud/github/arc
31    k1           platforms/reference/clusters/foundation/cloud/pgo/crds
31    core2        platforms/reference/clusters/accounts/iam/zitadel
31    core1        platforms/reference/clusters/foundation/cloud/secrets/validate
31    core1        platforms/reference/clusters/foundation/cloud/init/namespaces
31    core1        platforms/reference/clusters/accounts/iam
30    k1           platforms/reference/clusters/foundation/cloud/secrets/eso
30    k1           platforms/reference/clusters/foundation/cloud/certmanager
30    core2        platforms/reference/clusters/foundation/cloud/init/namespaces
30    core1        platforms/reference/clusters/foundation/cloud/secrets
29    provisioner  platforms/reference/clusters/provisioner/mesh
29    provisioner  platforms/holos-saas/clusters/provisioner/choria/provisioner
29    k2           platforms/reference/clusters/foundation/metal/ceph
29    k2           platforms/holos-saas/clusters/workload/nats/crds
29    k1           platforms/reference/clusters/foundation/metal/ceph
29    k1           platforms/reference/clusters/foundation/cloud/github/arc
29    core2        platforms/reference/clusters/foundation/cloud/pgo
29    core1        platforms/reference/clusters/foundation/cloud/certmanager
28    provisioner  platforms/reference/clusters/provisioner/mesh/issuers
28    provisioner  platforms/reference/clusters/provisioner/iam/zitadel/postgres-certs
28    provisioner  platforms/holos-saas/clusters/provisioner/choria/broker
28    k2           platforms/reference/clusters/foundation/cloud/secrets/validate
28    k2           platforms/reference/clusters/foundation/cloud/pgo/controller
28    k2           platforms/reference/clusters/foundation/cloud/pgo
28    k2           platforms/reference/clusters/foundation/cloud/github/arc/runner
28    k2           platforms/holos-saas/clusters/workload/nats/envs
28    k2           platforms/holos-saas/clusters/workload/choria/provisioner
28    k2           platforms/holos-saas/clusters/workload/choria/broker
28    k1           platforms/reference/clusters/workload/projects
28    k1           platforms/reference/clusters/foundation/cloud/secrets/validate
28    k1           platforms/reference/clusters/foundation/cloud/pgo
28    core2        platforms/reference/clusters/workload/projects
28    core2        platforms/reference/clusters/foundation/cloud/secrets/eso-creds-refresher
28    core2        platforms/reference/clusters/foundation/cloud/pgo/controller
28    core2        platforms/reference/clusters/foundation/cloud/github/arc/runner
28    core2        platforms/reference/clusters/foundation/cloud/github/arc
28    core2        platforms/reference/clusters/accounts/iam/zitadel/postgres-certs
28    core1        platforms/reference/clusters/foundation/cloud/secrets/eso-creds-refresher
28    core1        platforms/reference/clusters/foundation/cloud/secrets/eso
28    core1        platforms/reference/clusters/foundation/cloud/pgo/controller
28    core1        platforms/reference/clusters/foundation/cloud/github/arc
28    core1        platforms/reference/clusters/accounts/iam/zitadel/postgres-certs
28    core1        platforms/reference/clusters/accounts/iam/zitadel
Render memory-usage logs were generated with this script
#!/bin/bash

: "${HOLOS_REPO:=${HOME}/workspace/holos-run/holos}"
: "${LOG_DIR:=${HOME}/Desktop/render-logs}"

[[ -d $LOG_DIR ]] || mkdir -p "$LOG_DIR"
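
# Note: xargs -t echoes each generated "time holos render ..." command to stderr
# before running it; find-mem-hogs.py (below) keys off those echoed lines to
# split the combined GC trace into per-instance sections.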

# Provisioner
for cluster in provisioner; do
  for platform in reference holos-saas; do
    holos render --print-instances --cluster-name=$cluster "${HOLOS_REPO}/docs/examples/platforms/${platform}/clusters/provisioner/..." \
      | GODEBUG=gctrace=1 xargs -P1 -t -L1 time holos render --cluster-name=$cluster 2>&1 \
      | tee "${LOG_DIR}/render-log-instances-${platform}-${cluster}.txt"
  done
done

# Workload clusters
for cluster in k1 k2; do
  for cluster_type in foundation workload; do
    holos render --print-instances --cluster-name=$cluster "${HOLOS_REPO}/docs/examples/platforms/reference/clusters/${cluster_type}/..." \
      | GODEBUG=gctrace=1 xargs -P1 -t -L1 time holos render --cluster-name=$cluster 2>&1 \
      | tee "${LOG_DIR}/render-log-instances-${cluster_type}-${cluster}.txt"
  done
done

# core1 and core2
for cluster in core1 core2; do
  for cluster_type in accounts foundation workload optional; do
    holos render --print-instances --cluster-name=$cluster "${HOLOS_REPO}/docs/examples/platforms/reference/clusters/${cluster_type}/..." \
      | GODEBUG=gctrace=1 xargs -P1 -t -L1 time holos render --cluster-name=$cluster 2>&1 \
      | tee "${LOG_DIR}/render-log-instances-${cluster_type}-${cluster}.txt"
  done
done

# Holos Saas
for cluster in k2; do
  for platform in holos-saas; do
    holos render --print-instances --cluster-name=$cluster "${HOLOS_REPO}/docs/examples/platforms/${platform}/clusters/workload/..." \
      | GODEBUG=gctrace=1 xargs -P1 -t -L1 time holos render --cluster-name=$cluster 2>&1 \
      | tee "${LOG_DIR}/render-log-instances-${platform}-${cluster}.txt"
  done
done

The memory-usage table above was produced by running:

./hack/find-mem-hogs.py ~/Desktop/render-logs/*.txt | sort -rn | column -t

Contents of hack/find-mem-hogs.py:
#!/usr/bin/env python3
import sys


# Example line to get the section name from, which is a combination of the cluster name and the file path:
#   time holos render --cluster-name=core1 /Users/nate/src/holos-run/holos/docs/examples/platforms/reference/clusters/accounts/iam
def extract_section_name(line):
    # Extract cluster name from line
    cluster_name = line.split()[3].split("=")[1]
    # Extract file path from line
    file_path = line.split()[4]
    # Remove the leading paths up until "platforms/"
    file_path = file_path[file_path.find("platforms/") :]

    return f"{cluster_name} {file_path}"


# Extract goal value from log line. Example line:
#   gc 1 @0.005s 2%: 0.029+0.54+0.026 ms clock, 0.35+0/0.87/0.17+0.32 ms cpu, 3->3->1 MB, 4 MB goal, 0 MB stacks, 1 MB globals, 12 P
def extract_goal_value(line):
    # Extract goal value from line
    goal = int(line.split(",")[3].split()[0])

    return goal


largest_goals = {}

# Get the log files to parse as command line arguments. There could be 1 or more log files.
log_files = sys.argv[1:]

# Read log file line by line
for log_file in log_files:
    with open(log_file, "r") as file:
        for line in file:
            if line.startswith("time holos render"):
                section = extract_section_name(line)
                # Update largest goal for section if necessary
                if section not in largest_goals:
                    largest_goals[section] = 0
            if line.startswith("gc "):
                # Section is the most recent key added to largest_goals.
                section = list(largest_goals)[-1]
                goal = extract_goal_value(line)
                # Update largest goal for section if necessary
                if section not in largest_goals or goal > largest_goals[section]:
                    largest_goals[section] = goal

# Print the largest goal for each section
for section, largest_goal in largest_goals.items():
    cluster, path = section.split()
    print(f"{largest_goal} {cluster} {path}")

jeffmccune added a commit that referenced this issue Apr 26, 2024
(#147) Add holos render --print-instances flag
@natemccurdy
Contributor

natemccurdy commented Apr 27, 2024

Problem commits

Using git bisect, I found the following two commits to be the main causes of the memory issues.
The criterion for flagging a commit as bad was a render of a single instance that took more than 30s or hit a GC goal >= 1000 MB. (A sketch of how that check can be scripted for git bisect run follows the list.)

  1. cffc430 increased memory usage for the provisioner cluster while rendering the provisioner/projects path.
  2. 9d1e77c increased memory usage for the foundation/cloud/mesh/ paths.
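
A sketch of a test script for git bisect run matching that criterion (the go build invocation and the instance path are assumptions, not Nate's actual script):

#!/bin/bash
# bisect-test.sh: mark the commit bad when a single-instance render takes more
# than 30s or reports a GC heap goal of 1000 MB or more.
set -uo pipefail

go build -o /tmp/holos . || exit 125   # exit code 125 tells git bisect to skip unbuildable commits; adjust the build path to the repo layout

log=$(mktemp)
timeout 30 env GODEBUG=gctrace=1 /tmp/holos render --cluster-name=provisioner \
  docs/examples/platforms/reference/clusters/provisioner/projects >"$log" 2>&1
[[ $? -eq 124 ]] && exit 1             # timeout(1) exits 124 when the 30s limit is hit -> bad

peak=$(grep '^gc ' "$log" | awk -F', ' '{ split($4, a, " "); v = a[1] + 0; if (v > max) max = v } END { print max + 0 }')
(( peak >= 1000 )) && exit 1           # GC goal ballooned -> bad
exit 0

Invoked roughly as: git bisect start <bad-sha> <good-sha> && git bisect run ./bisect-test.sh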

@natemccurdy
Contributor

natemccurdy commented May 1, 2024

I wasn't able to make headway on improving memory usage in the Go or CUE code. I tried a few suggestions, like removing unneeded let declarations, but nothing had a noticeable effect on GC goals.

I did update hack/render-all in the holos-infra repo so that running it doesn't consume more than ~2GB of memory per render: https://github.com/holos-run/holos-infra/commit/8aefcb12c31bc034d6a0aa15ecb341743993a3e6. It uses the new --print-instances flag so that each holos render executed by xargs is smaller and uses less memory than collecting an entire cluster's platform into one render.
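
The pattern is roughly this (a sketch, not the actual hack/render-all contents; the cluster names, path, and -P value are illustrative):

# Expand each platform into individual instances, then render each instance in
# its own process so peak memory stays near the largest single instance (~2GB)
# rather than accumulating across a whole cluster.
for cluster in k1 k2; do
  holos render --print-instances --cluster-name="$cluster" \
    ./docs/examples/platforms/reference/clusters/foundation/... \
    | xargs -t -P2 -I% holos render --cluster-name="$cluster" %
done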

I'm going to stop here as this is good enough for now.


While researching this, I found that CUE has a lot of open issues about performance and memory leaks, and that this is an active area of development and interest for the CUE developers. It seems likely that future versions of CUE will fix some of these memory problems for us, so I recommend we try new alpha and beta versions of CUE as they are released.

  1. Umbrella issue for performance: Performance cue-lang/cue#2850
  2. The new Cue evaluator: Performance cue-lang/cue#2850 (comment)
    • With CUE v0.9.0-alpha.2 and higher, we can set CUE_EXPERIMENT=evalv3 to use the new, more performant evaluator (a quick way to try this against one of the heavy instances is sketched after this list).
  3. Performance in CUE cue-lang/cue#2857
  4. evaluator: enable re-use buffers cue-lang/cue#2887
  5. Performance: closedness algorithm consuming lots of memory cue-lang/cue#2853
  6. cue: json encoding can use O(n^2) space cue-lang/cue#2470
  7. Memory Leak in load.Instances cue-lang/cue#2121
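
A quick check of the new evaluator against a heavy instance might look like this (it assumes holos is built against CUE v0.9.0-alpha.2 or newer and that the embedded evaluator honors CUE_EXPERIMENT, which is worth verifying):

GODEBUG=gctrace=1 CUE_EXPERIMENT=evalv3 holos render --cluster-name=k2 \
  docs/examples/platforms/reference/clusters/foundation/cloud/mesh/istio/gateway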

Another interesting possible follow-up to this issue is looking at Unity, CUE's automated performance and regression testing framework.

@jeffmccune
Contributor Author

Closed via #179 and #183
