UtilityAnalyzer: Roll back the reversal of #6576 and bring back RegisterSemanticModelAction #7368
Comments
This has proven to cause the regression by design. Instead, the other explored improvements (reducing semantic model querying) should be done. We should close this as a dead end.
Closing this ignores the fact that we do not re-use the semantic model but create a new one for each UtilityAnalyzer. We should at least have the option to re-use it and opt out only where we see fit. There might be reasons to allow an analyzer to operate on its own semantic model (SE, I'm looking at you here), but that should be a balanced decision. At the moment it isn't well-balanced; it's more of a quick patch to fix the major problems caused by the token type analyzer querying the semantic model for each identifier. RegisterSemanticModelAction is needed if we want to do more fine-grained re-use of the semantic model, as described in #4217.
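For reference, a minimal sketch of what a RegisterSemanticModelAction-based utility analyzer could look like (the analyzer name, descriptor, and callback body are illustrative, not the actual sonar-dotnet code):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class SampleUtilityAnalyzer : DiagnosticAnalyzer
{
    // Placeholder descriptor; utility analyzers don't normally report user-facing diagnostics.
    private static readonly DiagnosticDescriptor Rule = new(
        "S9999", "Utility", "Utility", "Utility", DiagnosticSeverity.Hidden, isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        // The callback receives the semantic model the compiler already produced for a tree,
        // so the analyzer can re-use it instead of calling Compilation.GetSemanticModel itself.
        context.RegisterSemanticModelAction(c =>
        {
            var model = c.SemanticModel;
            var root = model.SyntaxTree.GetRoot(c.CancellationToken);
            // ... walk `root` and query `model` as little as possible ...
        });
    }
}
```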
- How do we measure the regression?
- Merge acceptance criteria?
- How to measure with dotnet trace
Consider also making the analyzer stateless: #7288
The projects that had a notable regression in analysis time (see also the graphs here) and were also used later on for measurements are: ravendb: baae94d12fec8cbd7d4828f1ab3754eff2a1e74f. The two analyzers that had a notable impact were the …
Once we have an actual need to register for the semantic model, we can add the possibility to register for it at any time in the future, without using it for the utility analyzers, where it didn't work. The data has shown that creating a new model is possibly much faster than trying to use the nicer registration. At the same time, Roslyn might cache those models for us, or their creation might simply not be as expensive as the thread starvation.
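For comparison, the approach defended here, where a compilation-level action creates (or gets back from Roslyn's cache) a model per tree, could look roughly like this (a sketch, not the actual UtilityAnalyzerBase code):

```csharp
// Inside Initialize(AnalysisContext context), as in the sketch above.
context.RegisterCompilationAction(c =>
{
    foreach (var tree in c.Compilation.SyntaxTrees)
    {
        // GetSemanticModel either builds a new model or returns one Roslyn already has cached.
        var model = c.Compilation.GetSemanticModel(tree);
        var root = tree.GetRoot(c.CancellationToken);
        // ... query `model` only where the syntax alone is not enough ...
    }
});
```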
@martin-strecker-sonarsource I'm missing an explanation of why this was reopened. The evidence shows that this approach doesn't work.
Both kinds of registration do the same thing. I logged how the current registration uses semantic models in this branch: https://github.com/SonarSource/sonar-dotnet/tree/Martin/UtilityPerf_LogSemModelCreation
It will be interesting to see how the other registration changes the picture here. It also reuses the SemanticModel, but I expect changes in the reuse of the semantic model and in the ordering between rules and utility analyzers. I logged only a single rule's usage of the semantic model, so we may be missing some vital information. For the models that were not reused, it looks like this:
It looks like the semantic model cache is sometimes cleared and a new model is created. This can happen to a SyntaxNode rule as well (see the last case).
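A rough reconstruction of how such logging can be done (a sketch, not the code in the linked branch): give every SemanticModel instance a stable id and record it from each callback, so a repeated id shows reuse and a fresh id shows that a new model was created.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;
using Microsoft.CodeAnalysis;

internal static class SemanticModelLog
{
    private static int nextId;
    private static readonly ConditionalWeakTable<SemanticModel, StrongBox<int>> Ids = new();

    public static void Record(string source, SemanticModel model)
    {
        // GetValue is atomic, so each SemanticModel instance gets exactly one id;
        // seeing the same id again from another callback means the instance was reused.
        var id = Ids.GetValue(model, _ => new StrongBox<int>(Interlocked.Increment(ref nextId))).Value;
        Console.WriteLine($"{source}: model #{id} for {model.SyntaxTree.FilePath}");
    }
}
```

Calling SemanticModelLog.Record from both a RegisterSemanticModelAction callback and from the places that call Compilation.GetSemanticModel directly would show whether the instances are actually shared.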
#6576 added support for RegisterSemanticModelAction. The Roslyn compiler exposes several entry points into the compilation pipeline; RegisterSemanticModelAction is one of them and is currently missing. #6576 was rolled back by #7262 (the reasoning can be found there). The part of #6576 that added the registration, including the tests, should be added back to the code base. The registration of the UtilityAnalyzers should be changed to use RegisterSemanticModelAction to re-use the semantic model, but only after all UtilityAnalyzers are re-worked in a way that reduces the queries to the semantic model to the absolute minimum. If this is not possible, some UtilityAnalyzers (like e.g. the symref analyzer) might opt to create their own semantic model.
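One possible shape for the opt-out described above, purely as an illustration (the base class, property, and method names are hypothetical, not the existing UtilityAnalyzerBase API):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

public abstract class UtilityAnalyzerSketch : DiagnosticAnalyzer
{
    // Hypothetical switch: most utility analyzers would re-use the compiler-provided model,
    // while e.g. a symbol-reference analyzer could override this to build its own model per tree.
    protected virtual bool CreatesOwnSemanticModel => false;

    public override void Initialize(AnalysisContext context)
    {
        if (CreatesOwnSemanticModel)
        {
            context.RegisterCompilationAction(c =>
            {
                foreach (var tree in c.Compilation.SyntaxTrees)
                {
                    Analyze(c.Compilation.GetSemanticModel(tree), tree);
                }
            });
        }
        else
        {
            // Re-use whatever model the compiler already produced for the tree.
            context.RegisterSemanticModelAction(c => Analyze(c.SemanticModel, c.SemanticModel.SyntaxTree));
        }
    }

    protected abstract void Analyze(SemanticModel model, SyntaxTree tree);
}
```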