chore: Add RFC for improving Vrl performance #9812

Closed
wants to merge 7 commits

Conversation

@StephenWakely (Contributor) commented Oct 27, 2021

Ref #9811

Readable version

Signed-off-by: Stephen Wakely fungus.humungus@gmail.com

@netlify bot commented Oct 27, 2021

✔️ Deploy Preview for vector-project canceled.

🔨 Explore the source changes: 637bfca

🔍 Inspect the deploy log: https://app.netlify.com/sites/vector-project/deploys/618e466fc649540008df8220

@StephenWakely marked this pull request as ready for review October 28, 2021 15:06

@spencergilbert (Contributor) left a comment

So the plan after this PR is to spike both of the "large" options?

@StephenWakely (Contributor Author)

So the plan after this PR is to spike both of the "large" options?

Yeah. I have a rough spike for refcounts here. #9785
Working on the Vm one now.

Comment on lines 274 to 275
A bump allocator will allocate a significant amount of memory up front. This
memory will then be used
Contributor

It doesn't have to be significant. The main advantage of this kind of allocation pattern is you pay for one largish block up front and then pay for one dealloc, downsides being you have to be sure you have enough memory allocated for your context or have some mechanism in place to daisy-chain allocated blocks. Daisy-chaining sort of defeats the purpose.
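
For concreteness, a minimal sketch of the allocation pattern being discussed, using the bumpalo crate; the capacity, data, and reset strategy here are illustrative assumptions, not something the RFC specifies:

use bumpalo::Bump;

fn main() {
    // One largish block is reserved up front; each allocation inside it is just a pointer bump.
    let mut bump = Bump::with_capacity(64 * 1024);

    // Hypothetical per-event scratch data placed in the arena.
    let msg: &str = bump.alloc_str("parsed message");
    let count: &mut u64 = bump.alloc(0u64);
    *count += 1;
    println!("{msg}: {count}");

    // Everything is released at once: either drop the arena, or reuse the same
    // block for the next event.
    bump.reset();
}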

significantly cheaper. When the value goes out of scope the count is reduced.
If the count reaches 0, the memory is freed.

VRL needs to wrap the data in an `Rc`. This does mean pulling the data out of
Contributor

Why an Rc and not a Cow?

Member

I believe this is because it needs to be owned in multiple places, and Cow is more accurately "copy-on-own" (which can then "write" to the owned copy). See for example assignment:

Single { target, expr } => {
    let value = expr.resolve(ctx)?;
    target.insert(value.clone(), ctx);
    value
}

That value needs to be retained in the source, inserted into the target, and returned, requiring a clone (or even two, with one implicit in expr.resolve).

Contributor Author

What would the advantages of a Cow be? I think mutation would be safer since it copies rather than panics. Anything else?

@bruceg is on the mark, ownership gets complicated. Ultimately the event store should own all the data, but the data is not always created there, it is often created in an expression and then added to the event store - sometimes whilst also passing it somewhere else.

Contributor

Looks like there are no advantages, but this might be worth spelling out in an alternatives section.

Member

Well, the advantage of a Cow would be to avoid the heap allocation and reference count machinations, reducing indirection and increasing performance, since the only time it would get cloned is when it actually has to. The only way I could see to make this work is to pass in the return buffer as a mut parameter somehow instead of returning it, but I have no idea what other issues that would raise.
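
To make the ownership argument above concrete, here is a small sketch with made-up Value and Target types (not VRL's real ones) showing how an Rc lets the resolved value be stored in the target and handed back to the caller with only a reference-count bump:

use std::rc::Rc;

#[derive(Debug)]
enum Value {
    Bytes(String),
}

struct Target {
    slot: Option<Rc<Value>>,
}

fn assign(target: &mut Target, value: Rc<Value>) -> Rc<Value> {
    // Cloning the Rc bumps a counter; the underlying String is not copied.
    target.slot = Some(Rc::clone(&value));
    value
}

fn main() {
    let resolved = Rc::new(Value::Bytes("hello".to_string()));
    let mut target = Target { slot: None };
    let returned = assign(&mut target, resolved);
    println!("returned = {:?}, strong count = {}", returned, Rc::strong_count(&returned));
    println!("stored   = {:?}", target.slot);
}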

runtime panics. Fortunately a lot of Vrl relies on creating new values
rather than mutating existing, so there isn't too much code that will be
affected.
6. `Value` is no longer `Send + Sync`. There are some nodes in the AST that
Contributor

FWIW I like dropping Send + Sync from remap. It pins it as single-threaded -- good -- and allows us to do extra tricks at the topology level.

Contributor Author

The ultimate reason Vrl needs Send + Sync is because it needs to store the Program in the transform and the FunctionTransform trait needs Send + Sync. Is there a way around that?

Contributor

Ah, sorry, I meant to say I was pro the removal of Send + Sync from Value and not remap generally.

Comment on lines 221 to 224
With each node of the AST compiled down to just a few bytes and all
instructions held in contiguous memory, evaluation of the program should be
able to take full advantage of the CPU cache, which should result in much
faster execution.
Member

The downside is that this trades cache efficiency for branch prediction inefficiency. It may well be a win, but it's not free.

Contributor Author

Not sure I fully understand. How so?

Member

When traversing the current tree representation, the jump instructions will be in a number of different places in code. Each jump point will have a separate branch prediction buffer in the CPU. This separation allows the CPU to predict the branches more often, since each jump will tend to go to only a predictable number of other locations. With byte code, there will be an opcode dispatch that becomes effectively unpredictable due to data dependencies.

I may, of course, be missing how many jumps are involved in traversing the tree, and this may well come out to be a win. I just wanted to point out a potential cost that went along with the cache efficiency win.
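
As a rough illustration of the dispatch point being described, here is a toy stack-machine loop (hypothetical opcodes, not the proposed VRL instruction set); every instruction funnels through the single match, which is the branch the CPU has to predict from data alone:

#[derive(Clone, Copy)]
enum OpCode {
    PushLiteral(i64),
    Add,
    Return,
}

fn run(code: &[OpCode]) -> Option<i64> {
    let mut stack: Vec<i64> = Vec::new();
    let mut ip = 0;
    loop {
        let op = code[ip];
        ip += 1;
        // The single dispatch site: which arm runs depends entirely on the data.
        match op {
            OpCode::PushLiteral(v) => stack.push(v),
            OpCode::Add => {
                let b = stack.pop()?;
                let a = stack.pop()?;
                stack.push(a + b);
            }
            OpCode::Return => return stack.pop(),
        }
    }
}

fn main() {
    let code = [OpCode::PushLiteral(1), OpCode::PushLiteral(2), OpCode::Add, OpCode::Return];
    println!("{:?}", run(&code)); // Some(3)
}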

Comment on lines 286 to 291
With the code as a single dimension array of Bytecode, it could be possible to
scan the code for patterns and reorganise the Bytecode so it can run in a more
optimal way.

A lot more thought and research needs to go into this before we can consider
implementing these changes.
Member

Another possibility is handing off the execution to a JIT engine (eBPF anyone?).

Contributor Author

That would be fun to experiment with!

Member

This is related to another idea I've been mulling over. We currently try to avoid FFI costs in VRL by having it execute directly against an event in memory (via a trait), but still end up doing a decent amount of copying and moving things around since the models aren't an exact match. An alternative approach could be to model access to the event more like IO and have VRL execution itself work in a more "pure" context (i.e. provide it inputs, take resulting mutations as outputs). Without that need for tight data-level integration, it'd be simpler to hand off VRL execution to a general-purpose VM/JIT (e.g. wasm, luajit, etc).

That could also be total nonsense, since my knowledge of current VRL execution is quite poor 😄

Comment on lines 289 to 291
With the code as a single dimension array of Bytecode, it could be possible to
scan the code for patterns and reorganise the Bytecode so it can run in a more
optimal way.
@JeanMertz (Contributor) commented Nov 2, 2021

This optimization can also be applied to the existing runtime. That is, say we do .foo = .baz, the assignment expression (... = ...) returns the rhs value assigned to the lhs target. This is an extra operation whose result is sometimes used, but usually isn't. So instead, we could detect when the return value is discarded, and (e.g.) swap it out for a literal null value to avoid any cloning.

To achieve the above easily, we could start by checking if the assignment expression is both (1) a root expression, and (2) not the last expression in the program. If both are true, then we can avoid cloning/returning the rhs value, and instead return null in its place. This doesn't cover all cases (e.g. foo = .bar = "baz", where foo is never used), but it covers the most common case.

There are many other optimizations like these we can do.
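
A hedged sketch of that check, using a deliberately minimal made-up AST rather than VRL's real one:

#[allow(dead_code)]
enum Expr {
    Assignment { target: String, rhs: Box<Expr> },
    Literal(i64),
}

// For each root expression: can the assignment's return value be dropped?
fn skip_assignment_return(program: &[Expr]) -> Vec<bool> {
    let last = program.len().saturating_sub(1);
    program
        .iter()
        .enumerate()
        .map(|(i, expr)| matches!(expr, Expr::Assignment { .. }) && i != last)
        .collect()
}

fn main() {
    let program = vec![
        Expr::Assignment { target: ".foo".into(), rhs: Box::new(Expr::Literal(1)) },
        Expr::Literal(2),
    ];
    // [true, false]: the first assignment's return value is never observed,
    // while the final expression's value is the program result and must be kept.
    println!("{:?}", skip_assignment_return(&program));
}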

Member

I looked into doing this kind of pattern-based optimization briefly a while back and (with the caveat that I was very new to VRL internals) I had a hard time finding a good place to do it. IIRC, there were a few more layers of indirection around the AST representation than would have been conducive to expressing those patterns in a simple way.

It's possible that I just didn't know the code well enough or that things have changed in the meantime, but even so it seems like it would be a valuable goal to have a simple, relatively tight representation of the AST available that makes experimenting with these kinds of optimizations as easy as possible.

Comment on lines +323 to +324
- Using `RefCell` does move the borrow checking to the runtime. Without compile
time checks the chances of a panic are much higher.
@JeanMertz (Contributor) commented Nov 2, 2021

It would be good to give some examples on what we should watch out for.

I would probably favour having a wrapper type that only exposes try_borrow, and having the error implement ExpressionError, so that we avoid panicking Vector in production.

Thinking about this, given that VRL functions return new values instead of mutated ones, do we have a list of places we'd need to borrow_mut? If those are minimally required, we could keep that API internal to the compiler, to avoid exposing it to functions and thus significantly reducing the risk of errors.
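
A sketch of the wrapper idea, with hypothetical names (Shared, BorrowConflict); in practice the error would be converted into VRL's ExpressionError rather than the placeholder type used here:

use std::cell::{Ref, RefCell, RefMut};
use std::rc::Rc;

#[derive(Clone)]
struct Shared<T>(Rc<RefCell<T>>);

#[derive(Debug)]
struct BorrowConflict;

impl<T> Shared<T> {
    fn new(value: T) -> Self {
        Shared(Rc::new(RefCell::new(value)))
    }

    // Only fallible borrows are exposed, so a conflicting borrow surfaces as an
    // error value instead of a panic.
    fn try_read(&self) -> Result<Ref<'_, T>, BorrowConflict> {
        self.0.try_borrow().map_err(|_| BorrowConflict)
    }

    fn try_write(&self) -> Result<RefMut<'_, T>, BorrowConflict> {
        self.0.try_borrow_mut().map_err(|_| BorrowConflict)
    }
}

fn main() -> Result<(), BorrowConflict> {
    let value = Shared::new(String::from("hello"));
    {
        let mut guard = value.try_write()?;
        guard.push_str(", world");
    }
    println!("{}", *value.try_read()?);
    Ok(())
}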

Contributor Author

There are not many places where we do need to borrow_mut - mostly in the Target implementation. I think any panics would be indicative of a bug, so I think we should probably fail hard and make sure the bugs aren't there by testing rigorously.

Comment on lines 328 to 339
- The code is much more complex. With an AST that we walk it is fairly apparent
what the code will be doing at any point. With a Vm, this is not the case, it
is harder to look at the instructions in the Vm and follow back to what part
of the Vrl code is being evaluated. We will need to write some extensive
debugging tools to allow for decent introspection into the Vm.
- We lose a lot of safety that we get from the Rust compiler. There will need
to be significant fuzz testing to ensure that the code runs correctly under
all circumstances.
- Currently each stdlib function is responsible for evaluating their own
parameters. This allows parameters to be lazily evaluated. Most likely with a
Vm, the parameters will need to be evaluated up front and the stack passed
into the function. This could impact performance.
Contributor

Honestly, I'd probably favour pulling the Bytecode VM suggestion out into a separate RFC. I'm not comfortable merging this RFC (which, I suppose, would be a blessing that we're going to implement the VM at some point), without further discussion and/or investigating alternatives.

There are many things we can already do, and there's at least one big one not part of this RFC (concurrently running VRL runtimes) that I suspect can get us a long, long way, before we need to consider the VM approach.

Comment on lines 356 to 358
If we don't do any, Vrl will continue to be a bottleneck. It is also possible
that Vrl will continue to be a bottleneck after these changes, but hopefully
just not as significant.
Contributor

I don't fully agree with this. I'm still curious how @bruceg's work of processing events in batches is going to help out.

Right now, VRL is single-threaded, and the remap transform handles one event at a time. If we are allowed to handle events in batches, we can spin 20 VRL runtimes simultaneously to process 20 events at a time, theoretically getting a 20-fold performance improvement (in practice, it'll be less, but it'll still be a lot).

I've got this nagging feeling in the back of my mind that I'm not entirely convinced we're at a point where we need to invest in any of the bigger items in this RFC, if batch-processing is close to being possible.

I agree, all the items in this RFC will likely contribute to an improved performance of VRL, but some of them also have significant complexity downsides, so it's not entirely clear to me right now if they are worth the investment, or if we should wait for batch processing.

Contributor

For what it's worth pipelining VRL runtimes is something we can already do today and would be worth exploring. Consider that we already have ready_chunks in the topology. Each batch -- whether from Bruce's work or pipelining over chunks -- will be limited by the slowest VRL computation, so single-threaded performance still does matter. That's maybe different if we go full-bore streaming and process in parallel, out of order, but we don't have plans for that.

Still, it should be cheap-ish to fiddle with pipelining without waiting for Bruce's work to land.

@JeanMertz (Contributor) commented Nov 3, 2021

For what it's worth pipelining VRL runtimes is something we can already do today and would be worth exploring. Consider that we already have ready_chunks in the topology.

Interesting! Are you suggesting we experiment by adding a new Transform::BatchedFunction transform type, that works similar to Function, except that it doesn't iterate over the chunk of events, but passes them in all at once?

How close is this semantically to the work @bruceg is doing? I'd definitely favour starting up an experiment to benchmark such a change and measure the increase in performance.

Each batch -- whether from Bruce's work or pipelining over chunks -- will be limited by the slowest VRL computation, so single-threaded performance still does matter.

I fully agree. It's a matter of balancing the required time investment, combined with the increase in implementation complexity, vs. the expected gains. What I'm suggesting is that it might be worth holding off on these changes until VRL is a bit more stable (maybe after 1.0, assuming none of these compiler changes will introduce user-facing breaking changes), if we can get "enough" of a performance boost by processing events in batches, to satisfy all existing use-cases of customers.

I'm suggesting this assuming that the batched processing will give us an x-fold performance increase, whereas any of the changes suggested here will be in the 10-30% single-threaded performance increase. This assumption might be inaccurate, though.

@lukesteensen (Member) commented Nov 3, 2021

This is related but distinct from the work Bruce is doing. His work is focused on amortizing the internal communication cost of moving events through the topology. What we're talking about here is concurrency within one logical component.

Are you suggesting we experiment by adding a new Transform::BatchedFunction transform type, that works similar to Function, except that it doesn't iterate over the chunk of events, but passes them in all at once?

This is also related but (imo) not the same. Unless VRL has a way to truly operate on multiple events simultaneously, then I don't see any benefit in passing it a batch of events at a time.

Instead, what I think we're discussing is the possibility that the topology could take batches of events and spawn a new task on which it would run them through a VRL runtime, and then collect the results and forward them normally. This would allow us to spawn multiple work tasks concurrently within a single remap transform, with no changes to VRL itself.

I definitely agree this is something we should rig up and try, especially since it would be very cheap to do so. The main question is how well it pays off and if there are cases where the overhead of spawning and collecting leads to degradation. Even if we need to add branching to only spawn when needed, I'm optimistic it could provide some serious gains and allow us to lean heavily on tokio's existing work-stealing scheduler instead of implementing some additional concurrency mechanism.
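
A rough sketch of that topology-level idea, assuming a tokio runtime; run_remap is a stand-in for driving a batch through a VRL runtime, not Vector's actual transform code:

use tokio::task::JoinHandle;

struct Event(String);

// Placeholder for "run this batch through a VRL runtime".
fn run_remap(batch: Vec<Event>) -> Vec<Event> {
    batch.into_iter().map(|Event(s)| Event(s.to_uppercase())).collect()
}

#[tokio::main]
async fn main() {
    let incoming: Vec<Event> = (0..100).map(|i| Event(format!("event {i}"))).collect();

    // Spawn one task per chunk; tokio's work-stealing scheduler spreads them
    // across cores with no changes to VRL itself.
    let handles: Vec<JoinHandle<Vec<Event>>> = incoming
        .chunks(20)
        .map(|chunk| {
            let batch: Vec<Event> = chunk.iter().map(|Event(s)| Event(s.clone())).collect();
            tokio::task::spawn_blocking(move || run_remap(batch))
        })
        .collect();

    // Collect the results in order and forward them as usual.
    let mut processed = Vec::new();
    for handle in handles {
        processed.extend(handle.await.expect("task panicked"));
    }
    println!("processed {} events", processed.len());
}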

Contributor

Yeah, Luke hits the nail on the head I think. It might be that we can pipeline a bunch of parallel VRL instances and operate over a chunk -- however that's originated into the system -- and it might be that VRL is better off truly operating on multiple events at once itself. There are options here and we need to play with them, be ready with an experimental rig to understand their behavior. So, if making a BatchedFunction is a cheap spike, give it a shot. This, I think, also weighs against dropping a bunch of time on a new VM if we see other optimization opportunities available to us.

That said, I'd like to push back strongly on the notion that we should hold off on VRL improvements until other streams of work pan out. Today in our experiments VRL is worth -60Mb/s of throughput for datadog_agent -> remap -> datadog_log. That's representative and is a deep hole to work out of. It's strongly correlated to VRL's poor single-threaded performance. It's true that we do have other streams of work that are related to improving vector throughput. This RFC represents another stream of work, one that cooperates with the others and gives us some wins even if the others don't pan out in the time we'd like them to. So, we know VRL is a performance problem. Rather than discuss whether we should tackle that problem now in this PR let's discuss how to go about tackling it. We should be interested in finding an ordering of work that lands cheap wins up front and delays harder work until we know more about what we have found in the cheap work or have a clearer idea about where our customer pain points are.

Contributor

All of what you said sounds reasonable to me 👍.

I'm thrilled to see this RFC move forward, but I've yet to be convinced that taking on the VM-related work is a worthwhile investment at this point (not because I don't believe it'll give us gains, but because of the required investment, and the added complexity to the project). I'm not sure what can convince me otherwise, aside from all the other proposed tasks not resulting in any significant gains, or turning out to be too complex in scope (both unlikely).

I'm also eager to see if there's any early spike work we can push to have the remap transform act concurrently on batches of events, it might be out of scope for this RFC, but I'd rather see that work start sooner than later, if nothing else, to prove out potential gains, and weigh its effectiveness relating to the work described in this RFC.

Given the concurrency work is being picked up by others, we could either collaborate on the spike, or slap something together separate from the actual production-ready work, to get those early numbers.

Contributor

All of what you said sounds reasonable to me +1.

I'm thrilled to see this RFC move forward, but I've yet to be convinced that taking on the VM-related work is a worthwhile investment at this point (not because I don't believe it'll give us gains, but because of the required investment, and the added complexity to the project). I'm not sure what can convince me otherwise, aside from all the other proposed tasks not resulting in any significant gains, or turning out to be too complex in scope (both unlikely).

FWIW I'm not arguing on this point one way or the other since it's well outside my domain of experience. If folks in the know feel like a VM is not a big lift I'm game, if folks feel like it is I'm glad to see us tackle it later.

I'm also eager to see if there's any early spike work we can push to have the remap transform act concurrently on batches of events, it might be out of scope for this RFC, but I'd rather see that work start sooner than later, if nothing else, to prove out potential gains, and weigh its effectiveness relating to the work described in this RFC.

Given the concurrency work is being picked up by others, we could either collaborate on the spike, or slap something together separate from the actual production-ready work, to get those early numbers.

I like it. This falls into my wheelhouse some, so if we make an issue to do the spike I'll take it on. I'm curious anyway.

Contributor

I just wanted to give some long term context. I'll let you all weigh the benefits/costs, but we eventually want to get to a place where we can push 100mb/s/core with a simple parsing VRL script. Therefore, we will inevitably need to solve single core performance.

Contributor

Good context.

Contributor Author

So my idea for the focus of this RFC is very much about improving the performance of the VRL runtime - i.e. the single core performance. Parallelising the transform is I'm sure going to provide other huge wins, but I imagine all the work there is occurring at the topology level and would apply to any transform, not specifically the Remap one.

I'm not suggesting that we have to do all these things straight away, but the RFC is just listing out things we can do to improve the performance. We may choose to do reference counting now, then topology changes and see where that leaves us before moving onto the riskier VM.

But if our goal for single core performance is to get us from 60mb/s -> 100mb/s I think we are going to have to implement the VM, reference counting alone won't cut it. Initial benchmarks with the basic VM indicate that that will get us there.

@bits-bot commented Nov 3, 2021

CLA assistant check
All committers have signed the CLA.

@tobz added the RFC label Nov 4, 2021
@binarylogic (Contributor)

@StephenWakely can we schedule a last call for this RFC? It seems like we're at the point where a single 30 minute discussion would get this over the finish line.

Comment on lines +129 to +134
5. Update everything to use `Rc<RefCell<Value>>`. There is a risk here since
using `RefCell` moves the borrow checking to the runtime rather than compile
time. Affected areas of code will need to be very vigorously tested to avoid
runtime panics. Fortunately a lot of Vrl relies on creating new values
rather than mutating existing, so there isn't too much code that will be
affected.
Member

The fact that we do very little actual mutation of individual values seems very important. Would it be feasible to eliminate it (and the RefCell) entirely? The fact that we're currently cloning those values implies that we're already not sharing mutations, unless I'm misunderstanding something.

A related idea (maybe off the mark, but wanted to mention) would be to keep the actual values in our "working set" in a single place (maybe just a HashMap, maybe something fancier like a slotmap) and pass around Copy indexes/keys into that structure. Then we can lazily read through to the actual event to fill it, replace values wholesale during execution (as opposed to mutation), and apply changes to the actual event at the end (similar to what you mentioned elsewhere).
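
A sketch of that working-set idea with made-up names (ValueStore, ValueKey), using a plain Vec rather than a slotmap to keep it short:

#[derive(Clone, Copy, Debug)]
struct ValueKey(usize);

#[derive(Debug)]
enum Value {
    Bytes(String),
    Integer(i64),
}

#[derive(Default)]
struct ValueStore {
    values: Vec<Value>,
}

impl ValueStore {
    // Inserting returns a cheap Copy key; the key is what gets passed around.
    fn insert(&mut self, value: Value) -> ValueKey {
        self.values.push(value);
        ValueKey(self.values.len() - 1)
    }

    fn get(&self, key: ValueKey) -> &Value {
        &self.values[key.0]
    }

    // "Mutation" is wholesale replacement of the slot, not in-place edits.
    fn replace(&mut self, key: ValueKey, value: Value) {
        self.values[key.0] = value;
    }
}

fn main() {
    let mut store = ValueStore::default();
    let k = store.insert(Value::Bytes("message".into()));
    store.replace(k, Value::Integer(42));
    println!("{:?} -> {:?}", k, store.get(k));
}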

@fuchsnj (Member) commented Nov 15, 2021

Have we considered spending our time making a transpiler to WASM, instead of writing a full VM? Then we can run it on something like the wasmer runtime and get free JIT and AOT compilation with full optimizations. The perf improvements suggested here are nice, but will never compare to what we could get using wasmer. (It can use LLVM for optimizations which will get us near-native performance)

@StephenWakely (Contributor Author)

Have we considered spending our time making a transpiler to WASM,

I have addressed this in #10011. Essentially the issue with WASM is that there is a serialization cost of marshaling the data into and out of WASM that adds a significant overhead that would cause problems.

@jszwedko (Member)

Have we considered spending our time making a transpiler to WASM,

I have addressed this in #10011. Essentially the issue with WASM is that there is a serialization cost of marshaling the data into and out of WASM that adds a significant overhead that would cause problems.

As Spencer brought up, we believe we could pass memory regions back and forth so the serialization overhead isn't a strict blocker. We also discussed that the VRL specific VM would be a step towards WASM later though, if we choose, so it seems like a good first step regardless.

@StephenWakely (Contributor Author)

This RFC heavily leans towards using Reference Counting to improve VRL performance. As #9785 has shown, this doesn't actually provide any significant advantage.

The main performance gains thus far seem to come from implementing a runtime VM and using LLVM. Closing this in favour of #10011 and #10517.
