
go/mysql: performance optimizations in protocol encoding #16341

Merged (1 commit) on Jul 8, 2024

Conversation

@mattrobenolt (Contributor) commented Jul 4, 2024

This employs a couple tricks that combined seemed fruitful:

  • Swapping to binary.LittleEndian.Put* on the basic calls gets us a free
    boost while removing code. The main win from this swap is the slice
    boundary check, resulting in a massive boost. I kept it inlined, but
    added my own boundary checking in writeLenEncInt, since swapping it
    out there resulted in a very minor performance regression from the
    current results. I assume that comes from the extra coercion needed to
    the uint* type, plus another reslice.
  • Reslicing the byte slice early so all future operations work on
    0-based indices rather than pos+ indexing. This seemed to be a pretty
    sizeable win: instead of doing more addition on every operation later
    to determine the index, the offsets get swapped out for constants.
  • Read path employs the same early reslicing, but already has explicit
    bounds checks.
  • Rewrite writeZeroes to utilize the Go memclr optimization.
$ benchstat {old,new}.txt
goos: darwin
goarch: arm64
pkg: vitess.io/vitess/go/mysql
                                 │    old.txt     │               new.txt                │
                                 │     sec/op     │    sec/op     vs base                │
EncWriteInt/16-bit-10               0.4685n ±  0%   0.3516n ± 0%  -24.94% (p=0.000 n=10)
EncWriteInt/16-bit-lenencoded-10     2.049n ±  0%    2.049n ± 0%        ~ (p=0.972 n=10)
EncWriteInt/24-bit-lenencoded-10     1.987n ±  0%    2.056n ± 0%   +3.45% (p=0.000 n=10)
EncWriteInt/32-bit-10               0.7819n ±  0%   0.3906n ± 0%  -50.05% (p=0.000 n=10)
EncWriteInt/64-bit-10               1.4080n ±  0%   0.4684n ± 0%  -66.73% (p=0.000 n=10)
EncWriteInt/64-bit-lenencoded-10     3.126n ±  0%    2.051n ± 0%  -34.40% (p=0.000 n=10)
EncWriteZeroes/4-bytes-10           2.5030n ±  0%   0.3123n ± 0%  -87.52% (p=0.000 n=10)
EncWriteZeroes/10-bytes-10          4.3815n ±  0%   0.3120n ± 0%  -92.88% (p=0.000 n=10)
EncWriteZeroes/23-bytes-10          8.4575n ±  0%   0.3124n ± 0%  -96.31% (p=0.000 n=10)
EncWriteZeroes/55-bytes-10         20.8750n ± 10%   0.6245n ± 0%  -97.01%
EncReadInt/16-bit-10                 2.050n ±  0%    2.068n ± 1%   +0.90% (p=0.001 n=10)
EncReadInt/24-bit-10                 2.034n ±  0%    2.050n ± 0%   +0.76% (p=0.000 n=10)
EncReadInt/64-bit-10                 2.819n ±  1%    2.187n ± 0%  -22.41% (p=0.000 n=10)
geomean                              2.500n         0.8363n       -66.55%
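To illustrate the techniques described above, here is a minimal sketch of a fixed-width write plus a length-encoded-integer write that reslices early and does one manual up-front bounds check. The helper names and thresholds follow the MySQL length-encoded integer format, but this is an assumption-laden sketch, not the exact vitess code:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// writeUint64 writes a fixed 8-byte little-endian value. The single
// reslice lets binary.LittleEndian.PutUint64 do one bounds check and
// inline the stores.
func writeUint64(data []byte, pos int, value uint64) int {
	binary.LittleEndian.PutUint64(data[pos:], value)
	return pos + 8
}

// writeLenEncInt writes a MySQL length-encoded integer. The early
// reslice means every store below uses a constant index, and the
// `_ = data[n]` hint collapses the per-store bounds checks into one.
func writeLenEncInt(data []byte, pos int, i uint64) int {
	data = data[pos:]
	switch {
	case i < 251:
		data[0] = byte(i)
		return pos + 1
	case i < 1<<16:
		_ = data[2] // single bounds check for the 3 stores below
		data[0] = 0xfc
		data[1] = byte(i)
		data[2] = byte(i >> 8)
		return pos + 3
	case i < 1<<24:
		_ = data[3]
		data[0] = 0xfd
		data[1] = byte(i)
		data[2] = byte(i >> 8)
		data[3] = byte(i >> 16)
		return pos + 4
	default:
		_ = data[8]
		data[0] = 0xfe
		data[1] = byte(i)
		data[2] = byte(i >> 8)
		data[3] = byte(i >> 16)
		data[4] = byte(i >> 24)
		data[5] = byte(i >> 32)
		data[6] = byte(i >> 40)
		data[7] = byte(i >> 48)
		data[8] = byte(i >> 56)
		return pos + 9
	}
}

func main() {
	buf := make([]byte, 16)
	pos := writeUint64(buf, 0, 0xCAFE)
	end := writeLenEncInt(buf, pos, 300)
	fmt.Println(pos, end, buf[pos:end]) // 8 11 [252 44 1]
}
```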

Related issue:

#16789

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

vitess-bot (bot) commented Jul 4, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test, enhancement and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot bot added labels NeedsBackportReason, NeedsDescriptionUpdate, NeedsIssue, NeedsWebsiteDocsUpdate on Jul 4, 2024
@github-actions github-actions bot added this to the v21.0.0 milestone Jul 4, 2024
codecov (bot) commented Jul 4, 2024

Codecov Report

Attention: Patch coverage is 98.00000% with 1 line in your changes missing coverage. Please review.

Project coverage is 68.71%. Comparing base (cb2d0df) to head (2a6a739).
Report is 1 commit behind head on main.

Files                  Patch %   Lines
go/mysql/encoding.go   98.00%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #16341      +/-   ##
==========================================
- Coverage   68.72%   68.71%   -0.02%     
==========================================
  Files        1547     1547              
  Lines      198267   198317      +50     
==========================================
+ Hits       136264   136271       +7     
- Misses      62003    62046      +43     


@mattrobenolt mattrobenolt force-pushed the speedup-mysql-encoding branch 6 times, most recently from c23d96b to c549579 Compare July 5, 2024 00:56
@dbussink (Contributor) commented Jul 5, 2024

You'll need to update the commits and force push to ensure the DCO sign off.

@dbussink (Contributor) commented Jul 5, 2024

@mattrobenolt Is clearing this way faster than using the following?

func writeZeroes(data []byte, pos int, len int) int {
	data = data[pos : pos+len]
	for i := range data {
		data[i] = 0
	}
	return pos + len
}

The Go compiler recognizes that pattern and optimizes it into a CALL runtime.memclrNoHeapPointers(SB) essentially. Is that just as fast or even faster? See also golang/go#5373 for that pattern.

@mattrobenolt (Contributor, author) commented Jul 5, 2024

@dbussink lol so

x_test.go:
package main

import "testing"

func writeZeroesVitessMain(data []byte, pos, len int) int {
	for i := 0; i < len; i++ {
		data[pos+i] = 0
	}
	return pos + len
}

func writeZeroesSpecialized23(data []byte, pos int) int {
	data = data[pos:]

	_ = data[22]
	data[0] = 0
	data[1] = 0
	data[2] = 0
	data[3] = 0
	data[4] = 0
	data[5] = 0
	data[6] = 0
	data[7] = 0
	data[8] = 0
	data[9] = 0
	data[10] = 0
	data[11] = 0
	data[12] = 0
	data[13] = 0
	data[14] = 0
	data[15] = 0
	data[16] = 0
	data[17] = 0
	data[18] = 0
	data[19] = 0
	data[20] = 0
	data[21] = 0
	data[22] = 0

	return pos + 23
}

func writeZeroesSpecialized10(data []byte, pos int) int {
	data = data[pos:]

	_ = data[9]
	data[0] = 0
	data[1] = 0
	data[2] = 0
	data[3] = 0
	data[4] = 0
	data[5] = 0
	data[6] = 0
	data[7] = 0
	data[8] = 0
	data[9] = 0

	return pos + 10
}

func writeZeroesMemclr(data []byte, pos, len int) int {
	end := pos + len
	data = data[pos:end]

	for i := range data {
		data[i] = 0
	}

	return end
}

func BenchmarkZeroes(b *testing.B) {
	buf := make([]byte, 128)

	b.Run("vitess-main/23-byte", func(b *testing.B) {
		for range b.N {
			_ = writeZeroesVitessMain(buf, 16, 23)
		}
	})

	b.Run("vitess-main/10-byte", func(b *testing.B) {
		for range b.N {
			_ = writeZeroesVitessMain(buf, 16, 10)
		}
	})

	b.Run("specialized/23-byte", func(b *testing.B) {
		for range b.N {
			_ = writeZeroesSpecialized23(buf, 16)
		}
	})

	b.Run("specialized/10-byte", func(b *testing.B) {
		for range b.N {
			_ = writeZeroesSpecialized10(buf, 16)
		}
	})

	b.Run("memclr/23-byte", func(b *testing.B) {
		for range b.N {
			_ = writeZeroesMemclr(buf, 16, 23)
		}
	})

	b.Run("memclr/10-byte", func(b *testing.B) {
		for range b.N {
			_ = writeZeroesMemclr(buf, 16, 10)
		}
	})
}

I put them all side by side and this is wild.

$ go test -v . -bench=.
goos: darwin
goarch: arm64
pkg: x
BenchmarkZeroes
BenchmarkZeroes/vitess-main/23-byte
BenchmarkZeroes/vitess-main/23-byte-10          138833698                8.634 ns/op
BenchmarkZeroes/vitess-main/10-byte
BenchmarkZeroes/vitess-main/10-byte-10          268413115                4.471 ns/op
BenchmarkZeroes/specialized/23-byte
BenchmarkZeroes/specialized/23-byte-10          570586390                2.125 ns/op
BenchmarkZeroes/specialized/10-byte
BenchmarkZeroes/specialized/10-byte-10          1000000000               0.6531 ns/op
BenchmarkZeroes/memclr/23-byte
BenchmarkZeroes/memclr/23-byte-10               1000000000               0.3263 ns/op
BenchmarkZeroes/memclr/10-byte
BenchmarkZeroes/memclr/10-byte-10               1000000000               0.3286 ns/op
PASS
ok      x       7.020s

That memclr optimization is really good.

Going to swap that out, which will let me get rid of the specialized versions. I didn't like that anyways.
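As an aside, since Go 1.21 the same zeroing can also be written with the clear builtin, which lowers to the same runtime.memclrNoHeapPointers call as the for-range loop shown earlier. A sketch with hypothetical names, not the exact code that landed in the PR:

```go
package main

import "fmt"

// writeZeroes zeroes length bytes starting at pos and returns the new
// position. clear on a byte subslice compiles to a single memclr call,
// just like the recognized for-range zeroing pattern.
func writeZeroes(data []byte, pos, length int) int {
	end := pos + length
	clear(data[pos:end]) // requires Go 1.21+
	return end
}

func main() {
	buf := []byte{1, 2, 3, 4, 5, 6}
	end := writeZeroes(buf, 1, 3)
	fmt.Println(end, buf) // 4 [1 0 0 0 5 6]
}
```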

@mattrobenolt (Contributor, author) commented:
@dbussink updated to the memclr optimization as well as benchmarks in the PR description.

@dbussink added labels Component: Query Serving, Type: Performance and removed labels NeedsDescriptionUpdate, NeedsWebsiteDocsUpdate, NeedsIssue, NeedsBackportReason on Jul 6, 2024
@systay systay merged commit d9475d8 into vitessio:main Jul 8, 2024
99 of 100 checks passed
@mattrobenolt mattrobenolt deleted the speedup-mysql-encoding branch July 8, 2024 18:00
@deepthi (Member) commented Jul 8, 2024

Nice work!
