[CORE] Performance improvements #132

Merged: 12 commits merged into main from jeremy/experimental on Oct 19, 2023
Conversation

@d0g0x01 (Contributor) commented Oct 13, 2023

This PR implements a number of changes to optimize graph creation and speed up ingest:

  1. use an in-memory graph backend
  2. tune the graph to better optimize for writes
  3. optimize the queries used to generate edges
  4. optimize K8s API querying (one possible approach is sketched below)
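The PR body doesn't detail how the K8s API querying was optimized, but one standard client-go technique is paginated List calls. A minimal sketch under that assumption; the page size, function names, and in-cluster config are illustrative, not taken from this PR:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// listPodsPaged streams pods in fixed-size pages instead of one giant
// List call, keeping memory bounded on both client and API server.
func listPodsPaged(ctx context.Context, cs kubernetes.Interface) error {
	opts := metav1.ListOptions{Limit: 500} // hypothetical page size
	for {
		page, err := cs.CoreV1().Pods("").List(ctx, opts)
		if err != nil {
			return err
		}
		fmt.Printf("fetched %d pods\n", len(page.Items))
		if page.Continue == "" {
			return nil // last page reached
		}
		opts.Continue = page.Continue // resume token for the next page
	}
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := listPodsPaged(context.Background(), cs); err != nil {
		panic(err)
	}
}
```

Bounded pages matter at the 25k-pod scale quoted below, where a single unpaginated List would hold the entire result set in memory at once.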

Total runtime for a cluster of 25k pods drops from 45 minutes to 6 minutes; graph creation time drops from 35 minutes to 30 seconds.

It also fixes a number of minor bugs around telemetry and logging discovered during the performance testing.

@d0g0x01 temporarily deployed to devenv October 13, 2023 14:20 with GitHub Actions
@d0g0x01 temporarily deployed to devenv October 18, 2023 08:40 with GitHub Actions
@d0g0x01 temporarily deployed to devenv October 18, 2023 09:36 with GitHub Actions
@d0g0x01 temporarily deployed to devenv October 18, 2023 10:01 with GitHub Actions
@d0g0x01 temporarily deployed to devenv October 18, 2023 13:59 with GitHub Actions
@d0g0x01 marked this pull request as ready for review October 18, 2023 14:27
@d0g0x01 requested a review from a team as a code owner October 18, 2023 14:27
@d0g0x01 temporarily deployed to devenv October 18, 2023 15:18 with GitHub Actions
@d0g0x01 changed the title from Jeremy/experimental to [CORE] Performance improvements on Oct 18, 2023
@edznux-dd (Contributor) left a comment

Nice! Huge improvements!

Looks good to me; left a few notes/improvements that could be beneficial for end users, imo.

And a few general things that may be worth checking in another PR (sketches for each follow this list):

  1. Can we try changing the setting DisableCompression bool to false in the k8s client config?
    I'd expect it to be slower if we set it to true (disabling the compression) because more time would be spent on network transfer, but that would be interesting to test? (maybe even as a config option 😅 with default values)
  2. According to this profile (of the "3rd minute", since the profiles are minute by minute, and I imagine that was during the mongo insertion "step"):
    [profile screenshot]
    We spent about half of the CPU time (which, I understand, is not the most significant part of the wall time) in garbage collection, and a lot of time copying data to mongo.
    Is there a way we could pre-allocate some of the buffers (and reuse that buffer instead of creating a new one for everything)?
  3. This profile (a bit earlier in the process, the k8s API fetching part):
    [profile screenshot]
    shows that we Unmarshal() a lot (that makes sense, we have GBs of data coming through there on large clusters). But I don't think we need to process it, do we? We are just forwarding it to mongo as is?
    (I'm unsure, for example, whether the item, ok := obj.(*rbacv1.RoleBinding) type assertion makes an allocation because it's a "complex" type? And we don't really need it because we just forward the raw JSON to mongo in any case?)
    I mean: we do K8s JSON API => parse JSON => copy to object (via the StoreConverter) => encode BSON => send to mongo. Could we (maybe not in all cases, some have processing steps in there) decode directly to the wanted type, with annotations for both bson and json, and avoid a copy?
    It looks like we could save maybe 30-45 seconds there (both in GC time and time spent on unmarshalling)?
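On note 1: rest.Config in client-go does expose a DisableCompression bool field. A minimal sketch of toggling it; the wrapper function and its wiring are illustrative:

```go
package example

import "k8s.io/client-go/rest"

// withCompression returns a copy of the config with response compression
// toggled. DisableCompression is a real rest.Config field; the wrapper
// itself is just for illustration.
func withCompression(cfg *rest.Config, disable bool) *rest.Config {
	out := rest.CopyConfig(cfg)
	out.DisableCompression = disable // true = request uncompressed API responses
	return out
}
```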
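On note 2 (buffer reuse): a common Go pattern for cutting GC pressure is a sync.Pool of reusable buffers. A sketch assuming a hypothetical batch-encoding step; none of these names come from the KubeHound code:

```go
package example

import (
	"bytes"
	"sync"
)

// bufPool hands out reusable buffers so each batch write doesn't
// allocate a fresh one for the GC to collect later.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// encodeBatch concatenates pre-encoded documents using a pooled buffer.
func encodeBatch(docs [][]byte) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // keep the capacity grown by earlier batches
	defer bufPool.Put(buf)

	for _, d := range docs {
		buf.Write(d)
	}
	// Copy out: the buffer's backing array goes back to the pool.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out
}
```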
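On note 3 (single decode with dual tags): the idea would be one struct annotated for both encoding/json and the mongo driver's BSON encoder, so the API payload is decoded once and inserted as-is. A hypothetical sketch; the struct fields are made up for illustration, and real K8s objects are much richer:

```go
package example

import (
	"context"
	"encoding/json"

	"go.mongodb.org/mongo-driver/mongo"
)

// RoleBindingDoc is a hypothetical flattened shape carrying both json
// and bson tags, standing in for whatever the StoreConverter produces.
type RoleBindingDoc struct {
	Name      string `json:"name"      bson:"name"`
	Namespace string `json:"namespace" bson:"namespace"`
	RoleRef   string `json:"roleRef"   bson:"role_ref"`
}

// ingest decodes the raw API payload once, then lets the mongo driver
// encode the same struct straight to BSON, skipping the extra copy.
func ingest(ctx context.Context, coll *mongo.Collection, raw []byte) error {
	var doc RoleBindingDoc
	if err := json.Unmarshal(raw, &doc); err != nil {
		return err
	}
	_, err := coll.InsertOne(ctx, doc)
	return err
}
```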

Review threads:
  configs/etc/kubehound-dd.yaml (resolved)
  configs/etc/kubehound-dd.yaml (resolved)
  pkg/kubehound/graph/edge/exploit_host_traverse_token.go (outdated, resolved)
@d0g0x01 (Contributor, Author) commented Oct 19, 2023

> Nice! Huge improvements!
>
> [... quoting the full review comment above ...]

This is all good stuff and will probably make a difference. Since you've identified these, I think it would be best for you to take them forward. No huge rush, as we're more than good enough with the current implementation, but it would be nice to have when you can spare some time.

@d0g0x01 temporarily deployed to devenv October 19, 2023 10:05 with GitHub Actions
@d0g0x01 merged commit d5a8b38 into main on Oct 19, 2023
3 checks passed
@d0g0x01 deleted the jeremy/experimental branch October 19, 2023 10:17