
zls gets very slow on large files #1046

Closed
Jarred-Sumner opened this issue Mar 10, 2023 · 9 comments
Assignees
Labels
bug Something isn't working

Comments

@Jarred-Sumner
Contributor

Zig Version

0.11.0-dev.1863+a63134a4a

Zig Language Server Version

latest

Steps to Reproduce

  1. Have a large file (such as https://github.com/oven-sh/bun/blob/main/src/js_parser.zig#L15715)
  2. Start typing inside the clause of a scope-creating statement, like if (start.typing.here)

Expected Behavior

It should perform about as well as it does on small files.

Actual Behavior

Here is a 13-second GIF. It takes about 5 seconds for completions to appear after the expression changes.

[GIF: slow]

@Jarred-Sumner Jarred-Sumner added the bug Something isn't working label Mar 10, 2023
@Techatrix Techatrix self-assigned this Mar 10, 2023
@Techatrix
Member

Did you use a Release build of ZLS and Zig? These delays are very similar to what I would get when using a debug build.
That being said, from my initial testing, semanticTokens, documentSymbol, and didChange are some of the largest contributors to these delays. If anything else is frequently causing large delays, please let me know.
I plan on optimizing and rewriting document symbol generation to bring its performance to a more reasonable level. The didChange request naturally scales with the size of the document, and semantic token generation is mostly slow because the underlying analysis backend is slow (especially on large files). Optimizing that is possible, but it will take more effort and time.
That said, I do have an idea for making semantic tokens much faster (though less accurate) by skipping expensive calls into the analysis backend.
In short, I know performance is an issue and I plan on addressing it.

@Jarred-Sumner
Contributor Author

Jarred-Sumner commented Mar 12, 2023

This uses a release build; I verified that both with the auto-updater and by compiling with ReleaseFast manually. No meaningful difference yet in this specific case (foo.bar) after #1050.

@llogick
Contributor

llogick commented Mar 12, 2023

The "Optimize document symbols" PR is indeed a massive W (although Techatrix must have a really fast PC to get single-digit timings :)), making semanticTokens the new target for optimizations. However, there's an overlooked one:

```diff
diff --git a/build.zig b/build.zig
index d0655bf..708641d 100644
--- a/build.zig
+++ b/build.zig
@@ -22,7 +22,7 @@ pub fn build(b: *std.build.Builder) !void {
         .target = target,
         .optimize = optimize,
     });
-
+    exe.linkLibC();
     const exe_options = b.addOptions();
     exe.addOptions("build_options", exe_options);
 
diff --git a/src/main.zig b/src/main.zig
index 19cc527..4991541 100644
--- a/src/main.zig
+++ b/src/main.zig
@@ -350,7 +350,7 @@ pub fn main() !void {
     const inner_allocator: std.mem.Allocator = if (tracy.enable_allocation) tracy_state.allocator() else gpa_state.allocator();
 
     var failing_allocator_state = if (build_options.enable_failing_allocator) debug.FailingAllocator.init(inner_allocator, build_options.enable_failing_allocator_likelihood) else void{};
-    const allocator: std.mem.Allocator = if (build_options.enable_failing_allocator) failing_allocator_state.allocator() else inner_allocator;
+    const allocator: std.mem.Allocator = if (build_options.enable_failing_allocator) failing_allocator_state.allocator() else if (zig_builtin.mode != .Debug) std.heap.c_allocator else inner_allocator;
 
     const result = try parseArgs(allocator);
     defer if (result.config_path) |path| allocator.free(path);
```

@Techatrix
Member

> The "Optimize document symbols" PR is indeed a massive W (although Techatrix must have a really fast PC to get single digit timing :))

I've tested document symbol performance by running it in a loop to avoid some overhead; that keeps the cache hot and explains the low timings. I've tested on an i7-4770 with DDR3-1600, definitely not a fast PC by modern standards :(

> making semanticTokens the new target for optimizations.. however, there's an overlooked one:

I wonder how much this will help. Whenever I benchmark semantic tokens, the GeneralPurposeAllocator is always one of the top contenders.
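The warm-cache measurement described above (running the request in a loop) can be sketched as follows. `handleDocumentSymbol` is a hypothetical stand-in; the thread does not show the real benchmark harness.

```zig
const std = @import("std");

// Hypothetical stand-in for the request handler being measured.
fn handleDocumentSymbol() void {
    // ... real work would go here ...
}

pub fn main() !void {
    const iterations: usize = 100;
    var timer = try std.time.Timer.start();

    // A few untimed warm-up runs so caches and allocator pools are hot;
    // this is why looped timings come out much lower than a single
    // cold request from an editor.
    for (0..10) |_| handleDocumentSymbol();

    timer.reset();
    for (0..iterations) |_| handleDocumentSymbol();
    const avg_ns = timer.read() / iterations;
    std.debug.print("avg: {d} ns per request\n", .{avg_ns});
}
```

Note that a cold request in an editor also pays parsing and allocation overhead that a hot loop hides, so looped numbers are a lower bound.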

@Techatrix
Member

@Jarred-Sumner does #1062 help?

@Jarred-Sumner
Contributor Author

@Techatrix yes! It's much better.

@ViktorTrojan

I'm using zls 0.13.0, released Jun 9. When importing the byte array I can hear my PC fans speed up, and it takes a long time. A visual bug also appeared afterwards:

2024-12-03.19-01-30.mp4

@Techatrix
Member

How large is the file?

@ViktorTrojan

> How large is the file?

40K lines
