JaroWinkler string distance algorithm allocates 1.11 GB and spends 4.8% of CPU time generating suggestions that are rarely used #6044

Closed · cartermp opened this issue on Dec 20, 2018 · 16 comments

@cartermp (Contributor)

This was found by:

  • Using VF# solution in VS 2019 Preview
  • Working in service.fs
  • Noticing memory usage was high
  • Profiling with good ol' PerfView
  • Typing a bit, invoking some features, etc.

GC Rollup (screenshot)

Using a 50 second sample of the full trace:

Allocations (screenshot)

System.Char[] allocations (screenshot)

CPU time (screenshot)

There are two problems at play here:

  1. When there is a compile error, we send every identifier the compiler knows about through this algorithm to generate a set of suggested names.
  2. The routine itself allocates a lot of char[]s and generally does a lot of work: https://github.com/Microsoft/visualfsharp/blob/master/src/utils/EditDistance.fs#L29 (see the sketch below).

Taking a brief look, it doesn't seem trivial to make this faster or allocate less. It's also not clear how we could generate useful suggestions for people if we cannot scan all known identifiers.

That said, we should do something here. This work is triggered by ordinary typing in the IDE, which happens constantly (especially for slower typists), so improving it would be broadly useful.
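
For readers unfamiliar with the routine, here is a minimal sketch of what a Jaro-style common-character pass looks like. It is simplified and the names are approximate, not the actual EditDistance.fs source; the point is that the intermediate collection and the per-character window scan are built for every (typo, candidate) pair, so scoring every known identifier multiplies the cost.

```fsharp
// Simplified sketch of a Jaro similarity hot path (approximate, not the compiler's code).
let commonChars (s1: string) (s2: string) =
    // Characters of s1 that also appear in s2 within the Jaro match window.
    let window = max 0 ((max s1.Length s2.Length) / 2 - 1)
    let common = ResizeArray<char>()                      // heap allocation per call
    for i in 0 .. s1.Length - 1 do
        let lo = max 0 (i - window)
        let hi = min (s2.Length - 1) (i + window)
        // an 'exists' scan over the window for every character of s1
        if Seq.exists (fun j -> s2.[j] = s1.[i]) { lo .. hi } then
            common.Add(s1.[i])
    common

let jaroSketch (s1: string) (s2: string) =
    let c1 = commonChars s1 s2
    let c2 = commonChars s2 s1
    if c1.Count = 0 || c2.Count = 0 then 0.0
    else
        let m = float c1.Count
        // half the number of positions where the two common-character sequences disagree
        let transpositions =
            Seq.zip c1 c2 |> Seq.filter (fun (a, b) -> a <> b) |> Seq.length
        let t = float transpositions / 2.0
        (m / float s1.Length + m / float s2.Length + (m - t) / m) / 3.0
```

Called against every identifier in scope each time checking fails, both the ResizeArray per call and the window scans add up quickly.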

cartermp added this to the 16.0 milestone on Dec 20, 2018
@TIHan (Contributor) commented Dec 20, 2018

// cc @forki @Rickasaurus since you all may know more about how EditDistance works.

Based on this issue, I will try to put together a benchmark for FilterPredictions.

@cartermp (Contributor, Author)

The root of this is two competing use cases:

  • Using the IDE, typing stuff, and not being finished with what I'm typing
  • Using the command-line compiler and building to see what the errors are

In both cases we churn through all known symbols and generate suggestions. In the former case this work is wasted and evidently makes the IDE do far more work than it should; in the latter case the suggestions are genuinely useful and nice to have.

A quick fix would be to not generate these suggestions. However, that regresses the second scenario. What we actually need is a way to understand when we're in one of the following contexts:

  • Doing a command-line build
  • Invoking a quick fix on an error (to get the suggestions)

In those contexts we want to do this work. Routine typing in the editor, however, should not churn through every identifier string and calculate distances.
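
One hypothetical way to express that distinction (purely illustrative; none of these names exist in the compiler) is to make the invocation context explicit and only pay for suggestions in the contexts that benefit:

```fsharp
// Hypothetical sketch; these types and names are illustrative only.
type SuggestionContext =
    /// Command-line build: suggestions in diagnostics are cheap relative to the whole build.
    | CommandLineBuild
    /// The user explicitly invoked a quick fix on an error.
    | QuickFixRequested
    /// Background checking while the user is still typing in the editor.
    | BackgroundTyping

let shouldGenerateSuggestions context =
    match context with
    | CommandLineBuild | QuickFixRequested -> true
    | BackgroundTyping -> false
```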

@cartermp (Contributor, Author)

Full sample:

Callers tree on System.String (screenshot)

215 MB string allocations just from String.Concat.

Callee tree on FilterPredictions (screenshot)

1.11 GB of allocations, which is 10.9% of total allocations in the sample.

We need to put this feature behind a flag until we come up with something better. This is an insane amount of allocations.

cartermp changed the title on Dec 21, 2018 to include the measured figure: "JaroWinkler string distance algorithm allocates 1.11 GB and spends a lot of CPU time generating suggestions that are rarely used"
@cartermp (Contributor, Author)

Potential quick change:

  • Define a flag that effectively means, "I am in an editor with F# tooling"
  • Have that flag wired to an off-by-default setting in VS at the language service integration layer (other editors can also wire up their own)
  • Have the flag set to true by default in the compiler so dotnet build isn't regressed

Unfortunately, this also means the code fix is off by default. We are not architecturally set up to have it on only for the code fix, since turning it on there would also turn it on any time you're typing and get an error. We'd need a larger architectural change (e.g., moving this and related work out of process) so that regular typing doesn't incur this work but a lightbulb fix can still go and do the calculation.
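
A minimal sketch of what the quick change could look like, assuming a single mutable flag in the compiler that editor integrations can flip off (the flag name, module, and placeholder scoring are illustrative, not an existing API):

```fsharp
// Hypothetical gate; the flag name, module, and scoring stub are illustrative only.
module SuggestionSettings =
    /// Defaults to true so 'dotnet build' keeps its "Maybe you want one of..." hints.
    /// An editor integration (e.g. the VS language service layer) could set this to false.
    let mutable generateSuggestions = true

/// Stand-in for the expensive FilterPredictions-style pass over every known identifier.
let scoreAllIdentifiers (idText: string) (allKnownIdents: seq<string>) =
    allKnownIdents
    |> Seq.filter (fun candidate -> candidate.StartsWith(idText))   // placeholder metric
    |> Seq.truncate 5
    |> List.ofSeq

let suggestNames idText allKnownIdents =
    if SuggestionSettings.generateSuggestions then
        scoreAllIdentifiers idText allKnownIdents
    else
        []
```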

cartermp changed the title again on Dec 21, 2018, settling on "JaroWinkler string distance algorithm allocates 1.11 GB and spends 4.8% of CPU time generating suggestions that are rarely used"
@forki (Contributor) commented Dec 21, 2018 via email

@cartermp (Contributor, Author)

Yes, this is with typing. If you do it slowly enough, or at least such that there are some compile errors due to not being "done", it will trigger. I noticed this while goofing around a bit with caches in service.fs. There's no concrete repro beyond noticing the slowdown and gathering a trace. Just the nature of lots of these IDE perf problems 🙂

What's nasty here is that in only 90 seconds we already triggered a 4-second gen 2 collection. That's huge, especially considering it's a single file. I'm not sure how to track whether anything from this routine made it into gen 2, but when something accounts for nearly 11% of memory pressure it's very significant. We didn't see this before because the continual allocation of strings representing the source file was the dominant hot spot; with #6001 fixing that, this is now the hot spot in the scenario I described.

@forki (Contributor) commented Dec 21, 2018 via email

forki added a commit to forki/visualfsharp that referenced this issue Dec 21, 2018
@forki (Contributor) commented Dec 21, 2018

There is also another idea in #6049: we don't need to run this whole thing on small identifiers, which may account for a lot of the work during typing.
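
Roughly this shape, as a sketch (the cut-off value and names are placeholders, not what #6049 does verbatim):

```fsharp
// Sketch of the short-identifier cut-off idea; value and names are placeholders.
let minLengthForSuggestions = 5

let shouldScoreIdentifier (idText: string) =
    // Very short names ('x', 'f', half-typed identifiers) dominate while typing
    // and rarely produce useful edit-distance suggestions.
    idText.Length >= minLengthForSuggestions
```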

@cartermp (Contributor, Author) commented Dec 21, 2018

Yes, finding a way to not turn it off is definitely ideal. If we can't get something in place I'll likely advocate that we turn it off in VS by default. I do think it's a bit less severe in VS because completion will generally work, and the message in the error list window is a bit hard to read, making it something people may lean a little less on. But that would mean sacrificing the nicer message in the tooltip and the code fix suggestion, which are obviously benefits.

The allocation and CPU hotspot is in commonChars. In terms of exclusive CPU time, commonChars and the exists function it calls account for the bulk of this routine's CPU time, so finding a way to reduce those would have a big impact. But as you suggested, limiting the number of strings that go in is also important.
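
As an illustration of the kind of micro-change that helps (a sketch against the simplified version earlier in this thread, not an actual patch): replacing the range sequence plus exists with a plain index loop avoids allocating an enumerator and invoking a closure for every character.

```fsharp
// Sketch: scan the match window with a plain loop instead of a range seq + 'exists',
// so no enumerator or closure is allocated per character (illustrative only).
let inline existsInWindow (s: string) (c: char) (lo: int) (hi: int) =
    // caller guarantees 0 <= lo and hi <= s.Length - 1
    let mutable i = lo
    let mutable found = false
    while not found && i <= hi do
        if s.[i] = c then found <- true
        i <- i + 1
    found
```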

forki added a commit to forki/visualfsharp that referenced this issue Dec 21, 2018
@cartermp (Contributor, Author)

BTW thanks for engaging on this already - let's figure out the best way forward here 🙂

@forki (Contributor) commented Dec 21, 2018

Question: the current implementation uses a ResizeArray, which I assume allocates on the heap. We only need the info locally in the jaro function. Would changing it to char[] help?

@abelbraaksma (Contributor)

> Would changing it to char[] help?

@forki, even though char[] is a fixed-size array of value types, any array still allocates on the heap. In fact, if I remember correctly, part of the earlier string-allocation issue that @cartermp mentioned was caused by massive char[] allocations.

That is not to say that ResizeArray can't be a source of trouble, since it does extra heap allocations as it resizes. If it can be replaced with a single allocation, or at least less resizing, that may help. But any array will end up on the heap as far as I know.
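
One direction along those lines, as a sketch only (not what #6050 ended up doing; see the next comments for that): keep just a count plus a pooled flags array, so the steady-state typing path stops allocating a fresh collection per comparison.

```fsharp
open System.Buffers

// Sketch: count the common characters without building a ResizeArray, borrowing the
// "already matched" flags from the shared array pool. Illustrative only; see #6050
// for the change that actually removed the array.
let countCommonChars (s1: string) (s2: string) =
    if s1.Length = 0 || s2.Length = 0 then 0
    else
        let window = max 0 ((max s1.Length s2.Length) / 2 - 1)
        let matched = ArrayPool<bool>.Shared.Rent(s2.Length)
        try
            System.Array.Clear(matched, 0, s2.Length)   // pooled arrays are not zeroed
            let mutable count = 0
            for i in 0 .. s1.Length - 1 do
                let lo = max 0 (i - window)
                let hi = min (s2.Length - 1) (i + window)
                let mutable j = lo
                let mutable found = false
                while not found && j <= hi do
                    if not matched.[j] && s2.[j] = s1.[i] then
                        matched.[j] <- true
                        found <- true
                        count <- count + 1
                    j <- j + 1
            count
        finally
            ArrayPool<bool>.Shared.Return(matched)
```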

@forki (Contributor) commented Dec 21, 2018

@AviAvni was able to remove the array completely in #6050

@forki (Contributor) commented Dec 21, 2018

@cartermp please try to merge #6049 and #6050 and retry your repro

@cartermp (Contributor, Author) commented Dec 21, 2018

Just to confirm that this should definitely be impactful: I took another trace with CompletionProvider.fs open, VS not under heavy load, while typing a new function. I deliberately made things not compile, to simulate someone who is still figuring out how to piece different components together. Results:

Memory usage (screenshot)

CPU time (screenshot)

If benchmarking shows improvements in memory and CPU time, I think we can consider #6050 to have solved the problem and close this issue.

I still want to take a look at #6049, since it can basically keep the routine from running at all in a number of scenarios. We'll have to think that one through a bit, since there are definitely cases where you'd want a suggestion when an identifier is less than 5 chars long. Perhaps we can take the average length of all identifiers and use that as a threshold?
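
If the averaging idea were pursued, it could be as small as this sketch (hypothetical; the next comment goes with a fixed cut-off instead):

```fsharp
// Hypothetical adaptive cut-off based on the identifiers currently in scope.
let adaptiveMinLength (allKnownIdents: string list) =
    match allKnownIdents with
    | [] -> 0
    | idents -> idents |> List.averageBy (fun ident -> float ident.Length) |> int
```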

@forki (Contributor) commented Dec 22, 2018

Let's definitely do #6049, but I will set the threshold to 4. It just doesn't really make sense below that.
