An additional benchmark for KeyPath read performance. #61795
Merged

BradLarson merged 21 commits into swiftlang:main from fibrechannelscsi:more-keypath-benchmarks on Nov 7, 2022
                
            Conversation
  
    
    
  
  
    
- … them above 20 us.
- …. Removed benchmarks dealing with an inlining issue.
- …p and documentation.
- …enamed GetSet to Getset.
- … long setup overhead errors.
- …t() to try to reduce setup time errors.
- …ed by non-trivially-typed memory.
@swift-ci please benchmark
KeyPath.swift should now look like what is currently in main.
@swift-ci please benchmark
The reported time was 1847 us.
@swift-ci please benchmark
              
eeckstein approved these changes on Nov 7, 2022
              
              
            
            
lgtm
@swift-ci please test
    
valeriyvan pushed a commit to valeriyvan/swift that referenced this pull request on Nov 15, 2022
* Added a benchmark for KeyPaths where trivially-typed memory is preceded by non-trivially-typed memory.
* Reduces the workload of run_KeyPathClassStructs by a factor of 4. The reported time was 1847 us.
  
  
    
  
    
This adds one additional benchmark where a read is performed on a KeyPath whose root is a reference type. All subsequent elements encountered during the projected-read operation are pure struct types.
The goal is to highlight an upcoming performance optimization: during a read or write operation, we project through non-pure-struct types first, and then use a precomputed offset to jump to the final value once we encounter a stretch of trivially-typed memory only. In essence, given the following nested structure (with S and C referring to struct and non-struct types, respectively):
C1 -> C2 -> ... -> Cm -> S1 -> S2 -> ... -> Sn
we project normally from C1 to Cm; then, as soon as we reach S1, we use the precomputed offset to jump directly to Sn.
The upcoming optimization will not jump via an offset through intermediate stretches of trivially-typed memory. Namely, given the following situation:
C1 -> C2 -> ... -> Cm -> S1 -> S2 -> ... -> Sn -> Cm+1 -> Cm+2 -> ... -> Cm+x -> Sn+1 -> Sn+2 -> ... -> Sn+y
only one offset will be involved, and the jump will only be performed from Sn+1 to Sn+y.
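The shape of the benchmarked access can be sketched as follows. The type and variable names here are illustrative only, not the actual types in the benchmark suite: a reference-type root (a class, playing the role of C1) followed by a chain of pure struct components (S1 -> S2) terminating in trivially-typed storage.

```swift
// Illustrative sketch of the C -> S -> S layout described above.
// None of these names come from the actual benchmark.
struct S2 {
    var value: Int = 42  // trivially-typed leaf storage
}

struct S1 {
    var s2 = S2()
}

final class C1 {  // reference-type root
    var s1 = S1()
}

let root = C1()
let kp = \C1.s1.s2.value

// A read projects through the class component first; once the remaining
// components are all pure structs over trivially-typed memory, an
// optimized implementation can reach the final value with a single
// precomputed offset instead of projecting component by component.
let result = root[keyPath: kp]
print(result)  // prints "42"
```

A benchmark over this shape isolates the struct-tail portion of the projection, which is exactly the stretch the precomputed-offset optimization targets.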
Relates to:
#60758