[Benchmarks] Add a whole lot more benchmarks for Data #20396
Conversation
@swift-ci please smoke test

@swift-ci please smoke benchmark
```swift
    blackHole(Data([0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6]))
  }
}
```
@milseman What other converters do we want to test here? Should we test the Foundation overlay variations as well?
Up to you. Next release, we'll want to unify them as part of some upcoming string spring cleaning.
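For context, a minimal sketch of the two `String`-to-`Data` conversion paths being compared (assuming plain Foundation; the overlay method shown here is just one plausible variation, not necessarily the exact set under discussion):

```swift
import Foundation

let s = String(repeating: "abcdefg", count: 4)

// Sequence/Collection initializer: builds Data directly from the String's UTF-8 view.
let viaSequence = Data(s.utf8)

// Foundation overlay API: goes through data(using:) and returns an Optional.
let viaOverlay = s.data(using: .utf8)!

assert(viaSequence == viaOverlay)
```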
LGTM — as long as CI is happy, we should merge.
Build comment file:
- Performance: -O
- Code size: -O
- Performance: -Osize
- Code size: -Osize
- Performance: -Onone

How to read the data: the tables contain differences in performance that are larger than 8% and differences in code size that are larger than 1%. If you see any unexpected regressions, you should consider fixing them before you merge the PR.

Noise: sometimes the performance results (not code size!) contain false alarms. Unexpected regressions marked with '(?)' are probably noise. If you see regressions that you cannot explain, you can try to run the benchmarks again. If regressions still show up, please consult with the performance team (@eeckstein).
Among other things, this has also reintroduced setup overhead into some of the benchmarks I previously cleaned up. I'll clean this up, but in the future, please do run the benchmark checks before merging.

Benchmark Check Report
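One way to keep that setup out of the measured loop is the suite's `setUpFunction` hook on `BenchmarkInfo`; a rough sketch under that assumption (the benchmark name and payload here are placeholders, not code from this PR):

```swift
import Foundation
import TestsUtils

// Hypothetical input, built once before timing starts.
var iterateInput: Data? = nil

public let IterateDataExample = BenchmarkInfo(
  name: "IterateDataExample",
  runFunction: run_IterateDataExample,
  tags: [.validation, .api],
  setUpFunction: { iterateInput = Data(count: 16 * 1024) },
  tearDownFunction: { iterateInput = nil })

@inline(never)
public func run_IterateDataExample(_ N: Int) {
  let data = iterateInput!
  for _ in 0..<N {
    // Only the iteration is timed; the allocation happened in setUpFunction.
    blackHole(data.reduce(0, &+))
  }
}
```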
Large variants are useful in that they take a completely different code path and have different performance characteristics. I understand we should not do that for every test, but it still should not regress, IMHO. arc4random is being used to ensure the kernel does not just hand out a non-faulting page of memory. Merely allocating is not enough to properly exercise the use cases; the memory must be written to, and patterns like `repeated` take a completely different code path in the sequence initialization.
If we want to transition to a different random source, that is fine in my book. It does not need to be arc4; it just needs to be something portable to Linux.
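A portable alternative could lean on the standard library's `SystemRandomNumberGenerator` (available on Linux) while still touching every byte; a sketch, not the PR's code:

```swift
import Foundation

// Fill a Data of the requested size with random bytes so every page is
// actually written to (and therefore faulted in), without relying on arc4random.
func randomSampleData(size: Int) -> Data {
  var generator = SystemRandomNumberGenerator()
  var data = Data(count: size)
  data.withUnsafeMutableBytes { (buffer: UnsafeMutableRawBufferPointer) in
    for i in 0..<buffer.count {
      buffer[i] = UInt8.random(in: .min ... .max, using: &generator)
    }
  }
  return data
}
```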
Can you clarify this part?
To the best of my understanding, even the `sampleData` initialization:

```swift
func sampleData(size: Int) -> Data {
  var data = Data(count: size)
  data.withUnsafeMutableBytes { getRandomBuf(baseAddress: $0, count: size) }
  return data
}
```

How is it a different initialization pattern than this one from IterateData?

```swift
var data = Data(count: 16 * 1024)
let n = data.count
data.withUnsafeMutableBytes { (ptr: UnsafeMutablePointer<UInt8>) -> () in
  for i in 0..<n {
    ptr[i] = UInt8(i % 23)
  }
}
```

I think that it is fully equivalent from that perspective.
Oh, that differential is fine; I was just saying that `Repeated` (the sequence type) would behave differently.
This adds a number of new benchmarks for `Data` that test specific common cases: empty, small, medium, and large `Data` values. Additionally, this adds new benchmarks for creation as well as for conversion from `String` to `Data` using the Sequence/Collection initializers.
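A rough sketch of what such entries can look like in the Swift benchmark suite (names, tags, and payloads here are illustrative, not the exact benchmarks added by this PR):

```swift
import Foundation
import TestsUtils

public let DataBenchmarksExample = [
  BenchmarkInfo(name: "DataCreateSmallExample",
                runFunction: run_DataCreateSmallExample, tags: [.validation, .api]),
  BenchmarkInfo(name: "StringToDataExample",
                runFunction: run_StringToDataExample, tags: [.validation, .api]),
]

@inline(never)
public func run_DataCreateSmallExample(_ N: Int) {
  for _ in 0..<10_000 * N {
    // A small literal payload covers the small-Data creation case.
    blackHole(Data([0, 1, 2, 3, 4, 5, 6]))
  }
}

@inline(never)
public func run_StringToDataExample(_ N: Int) {
  let s = "a medium sized string that becomes a medium sized Data"
  for _ in 0..<10_000 * N {
    // Conversion via the Sequence/Collection initializer over the UTF-8 view.
    blackHole(Data(s.utf8))
  }
}
```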