
Performance: UtilityAnalyzerBase calls GetSemanticModel for each SyntaxTree #8412

Merged

Conversation

martin-strecker-sonarsource (Contributor) commented Nov 27, 2023

Fixes #6558
Fixes #7368

See #7368 (comment) for the acceptance criteria.
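For context, the gist of this kind of change can be sketched as below. This is a hedged illustration, not the actual sonar-dotnet implementation: `UtilityAnalyzerSketch` and `CollectInfo` are hypothetical names. The idea is to stop requesting a fresh `SemanticModel` per tree from inside a single compilation action and instead use Roslyn's per-tree semantic-model callbacks, which hand the analyzer the model the compiler has already produced.

```csharp
// Hypothetical sketch of the registration change; names and structure
// are illustrative, not the actual sonar-dotnet code.
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

public abstract class UtilityAnalyzerSketch : DiagnosticAnalyzer
{
    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray<DiagnosticDescriptor>.Empty;

    public override void Initialize(AnalysisContext context)
    {
        // Costly pattern: one compilation action that asks the compilation
        // for a SemanticModel for every syntax tree it visits.
        //
        // context.RegisterCompilationAction(c =>
        // {
        //     foreach (var tree in c.Compilation.SyntaxTrees)
        //         CollectInfo(tree, c.Compilation.GetSemanticModel(tree));
        // });

        // Cheaper pattern: a per-tree action in which Roslyn supplies the
        // SemanticModel it has already computed for that tree.
        context.RegisterSemanticModelAction(c =>
            CollectInfo(c.SemanticModel.SyntaxTree, c.SemanticModel));
    }

    // Hypothetical helper that gathers the utility-analyzer metrics.
    protected abstract void CollectInfo(SyntaxTree tree, SemanticModel model);
}
```

Either registration style compiles against the Roslyn analyzer API; the difference is purely in who pays for semantic-model creation and how often.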

@martin-strecker-sonarsource force-pushed the Martin/UtilityAnalyzer_Registration2 branch from ef9a621 to b3801a6 on November 27, 2023 12:46

SonarCloud Quality Gate passed!

0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells
No Coverage information, No Duplication information


SonarCloud Quality Gate passed!

0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells
100.0% Coverage, 0.0% Duplication

martin-strecker-sonarsource (Contributor, Author) commented Nov 27, 2023

Overall runtime

RavenDB (baae94d)

Before: 00:02:11.92, 00:02:12.49, 00:02:36.78
After: 00:02:23.52, 00:02:14.31, 00:02:13.32

Akka (master c8facab)

Before: 00:01:24.06, 00:01:29.60, 00:01:32.86
After: 00:01:14.65, 00:01:17.13, 00:01:25.65

Akka - Hotspots

The hotspots are distributed very differently between master and this PR for Akka (red: functionality affected by this PR, blue: others not affected): (screenshot)

  • Utility Analyzer Initialize went from first place with 1.3% down to 4th place with 0.2% (down by 84%).
  • SymbolReference analyzer went down from 0.6% to 0.1% (-83%).
  • TokenType analyzer went down from 0.4% to 0.06% (-85%).

Other analyzers are either stable or perform somewhat worse than before:

  • WriteUcfg went up from 0.5% to 0.7% (+40%), or in absolute terms from 32.8 s to 46.6 s (+42%).
  • XUnit stayed stable at 0.5% (0% change), also unchanged in absolute terms at 33.9 s.
  • NodeActions stayed stable at 0.2% (down from 12.1 s to 11.7 s).
  • ShouldAnalyzeTree stayed stable at 0.1% (slightly down from 10 s to 9.3 s).
  • SymbolicExecution went up from 0.1% to 0.2% (+100%), or from 7.5 s to 11.6 s (+55%) in absolute terms. The relative figures likely suffer from rounding.

Overall, the time savings outweigh the losses in the other analyzers.

Akka - Lock contention and concurrency

Master: (screenshot)
PR: (screenshot)

Lock contention went up from 638 s to 967 s.
The flame graph for lock contention also changed, but both runs block on AnalyzerExecutor.ExecuteSyntaxNodeAction.

In user code, the picture is different: (screenshot)

Lock contention of 2.5 s in <RegisterNodeAction> went down to 0.4 s, CreateUnchangedFilesHashSet went down from 1.7 s to 0.4 s, and IsGenerated from 0.7 s to almost 0 s: (screenshot)

ShouldAnalyzeTree() shows similar results: (screenshot)

sebastien-marichal (Contributor) left a comment

LGTM!

@martin-strecker-sonarsource merged commit f39ae21 into master on Nov 30, 2023 (27 checks passed) and deleted the Martin/UtilityAnalyzer_Registration2 branch on November 30, 2023 09:31.
andrei-epure-sonarsource (Contributor) commented:

@martin-strecker-sonarsource it would be interesting to see before and after on bigger projects.

martin-strecker-sonarsource (Contributor, Author) replied:
@costin-zaharia-sonarsource and I agreed on acceptance criteria previously. They are described here:
#7368 (comment)
I think we meet those.
We also do a performance comparison for a bigger set of projects for the whole sprint. The results can be found here (internal link).
