client: Support TSO RPC Parallelizing #8432
Labels: type/development (the issue belongs to a development task)
This was referenced Jul 23, 2024
ti-chi-bot[bot] added a commit that referenced this issue on Jul 29, 2024:
client: Merge the two tsoStream types to reuse the same error handling and metrics reporting code (#8433), ref #8432. This commit merges the two `xxxTSOStream` types so that the error handling and metrics reporting logic for PD server deployment and TSO service deployment can be reused. Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com> Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com>
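The merge described in this commit can be illustrated with a minimal sketch: hide the transport-specific receive behind a small interface so that the error handling and metrics code exists in exactly one place, shared by both the PD-server and TSO-service deployments. All type and method names below are illustrative, not the pd client's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tsoStreamRecv abstracts the transport-specific part of a TSO stream.
// Each deployment mode only needs to implement this one method.
type tsoStreamRecv interface {
	recv() (ts int64, err error)
}

// tsoStream holds the error handling and metrics logic once,
// shared by every transport that implements tsoStreamRecv.
type tsoStream struct {
	transport tsoStreamRecv
}

func (s *tsoStream) Recv() (int64, error) {
	start := time.Now()
	ts, err := s.transport.recv()
	if err != nil {
		// Shared error handling: wrap and classify in one place.
		return 0, fmt.Errorf("tso stream recv failed: %w", err)
	}
	// Shared metrics reporting: one latency observation for both modes.
	_ = time.Since(start) // e.g. observe into a histogram here
	return ts, nil
}

// pdStream simulates the PD-server-deployment transport.
type pdStream struct{ next int64 }

func (p *pdStream) recv() (int64, error) { p.next++; return p.next, nil }

// failingStream simulates a broken transport.
type failingStream struct{}

func (failingStream) recv() (int64, error) { return 0, errors.New("connection reset") }

func main() {
	ok := &tsoStream{transport: &pdStream{}}
	ts, err := ok.Recv()
	fmt.Println(ts, err) // 1 <nil>

	bad := &tsoStream{transport: failingStream{}}
	_, err = bad.Recv()
	fmt.Println(err != nil) // true
}
```

With this shape, adding a third deployment mode only requires a new `recv` implementation; the wrapper's error and metrics paths are untouched.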
ti-chi-bot[bot] added a commit that referenced this issue on Sep 14, 2024:
ref #8432 client: Make tsoStream receive asynchronously. This makes it possible for the tsoDispatcher to send multiple requests and wait for their responses concurrently. Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com> Co-authored-by: ti-chi-bot[bot] <108142056+ti-chi-bot[bot]@users.noreply.github.com>
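The asynchronous-receive idea in this commit can be sketched roughly as follows: sending only enqueues a pending entry, while a dedicated goroutine matches responses to pending entries in FIFO order, so several requests can be on the wire at once. This is an illustrative sketch under those assumptions, not the pd client's actual types; `fakeRecv`-style latency stands in for a real stream `Recv`.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// pending represents one in-flight request awaiting its response.
type pending struct {
	done chan int64
}

// asyncStream decouples sending from receiving: Send only enqueues,
// and a background goroutine completes pending entries in FIFO order.
type asyncStream struct {
	mu     sync.Mutex
	queue  []*pending
	notify chan struct{}
}

func newAsyncStream() *asyncStream {
	s := &asyncStream{notify: make(chan struct{}, 1024)}
	go s.recvLoop()
	return s
}

// Send enqueues a request without waiting for earlier responses.
func (s *asyncStream) Send() *pending {
	p := &pending{done: make(chan int64, 1)}
	s.mu.Lock()
	s.queue = append(s.queue, p)
	s.mu.Unlock()
	s.notify <- struct{}{}
	return p
}

// recvLoop simulates reading responses off the wire and completing
// the oldest pending request each time one arrives.
func (s *asyncStream) recvLoop() {
	var ts int64
	for range s.notify {
		time.Sleep(time.Millisecond) // simulated network latency
		ts++
		s.mu.Lock()
		p := s.queue[0]
		s.queue = s.queue[1:]
		s.mu.Unlock()
		p.done <- ts
	}
}

func main() {
	s := newAsyncStream()
	// Issue three requests back to back without waiting in between.
	a, b, c := s.Send(), s.Send(), s.Send()
	fmt.Println(<-a.done, <-b.done, <-c.done) // 1 2 3
}
```

The key property is that the three `Send` calls return immediately; only the consumers of `done` block, which is what lets a dispatcher keep multiple batches in flight.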
This was referenced Sep 14, 2024
Development Task
We have found that in some OLTP workloads where the QPS is high and the queries are simple, the TSO wait duration usually becomes a significant portion of a query's total duration. In TiDB, TSO loading is already performed concurrently with other work such as compiling. When the queries are simple, it is hard to optimize further by overlapping TSO loading with more phases of SQL execution, but we found a practical way to optimize it from the TSO client side.

Currently, a TSO client object has a goroutine that collects `GetTS` (and `GetTSAsync`) calls (`tsoRequest`s) into a batch, sends the batch to PD, waits for the response, and dispatches the results to those `tsoRequest`s, all serially. As a result, each `GetTS` call may need to wait up to one TSO RPC round trip just to be collected into the next batch. Assuming PD's TSO allocator is not the bottleneck and can handle more TSO requests (so that the majority of a TSO RPC's cost is on the network), it is possible to start collecting the next batch and send it before receiving the response to the previous batch. Each `GetTS` call then waits less time to be batched and finishes with a shorter total duration.

This approach reduces the duration of `GetTS` and `GetTSAsync`'s `Wait` at the expense of higher TSO RPC OPS and higher pressure on PD. It is not suitable to enable by default, but we can provide it as an option for when the TSO wait duration becomes a problem.

Subtasks
Side changes:
- Merge the two `xxxTSOStream` types so that the error handling and metrics reporting logic for PD server deployment and TSO service deployment can be reused (client: Merge the two tsoStream types to reuse the same error handling and metrics reporting code #8433)
- Make `tsoStream` receive asynchronously, moving receiving into separate goroutines
- Make `tsoDispatcher` support batching according to estimated TSO RPC duration from `tsoStream`
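The parallelized batching described in this issue can be sketched roughly as follows: a dispatcher collects requests into batches and, unlike a fully serial dispatcher, starts collecting the next batch while up to `maxInflight` earlier batches are still awaiting responses. This is a minimal sketch under simplified assumptions (a fake allocator and fake network latency), not the pd client's implementation; all names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tsoRequest stands in for one GetTS call waiting to be batched.
type tsoRequest struct{ done chan int64 }

// fakeTSORPC simulates a TSO RPC: the timestamp range is allocated
// instantly under a lock, while most of the cost is simulated network
// latency that can overlap across in-flight batches.
func fakeTSORPC(batchSize int, logical *int64, mu *sync.Mutex) int64 {
	mu.Lock()
	first := *logical + 1
	*logical += int64(batchSize)
	mu.Unlock()
	time.Sleep(5 * time.Millisecond) // "network" time, overlapped
	return first
}

// dispatch collects requests into batches and sends the next batch
// before the previous response has arrived, capped at maxInflight.
func dispatch(reqCh <-chan *tsoRequest, maxInflight int) {
	inflight := make(chan struct{}, maxInflight)
	var logical int64
	var mu sync.Mutex
	for first := range reqCh {
		batch := []*tsoRequest{first}
	drain:
		for { // grab whatever else is immediately available
			select {
			case r, ok := <-reqCh:
				if !ok {
					break drain
				}
				batch = append(batch, r)
			default:
				break drain
			}
		}
		inflight <- struct{}{} // back-pressure: cap concurrent RPCs
		go func(batch []*tsoRequest) {
			ts := fakeTSORPC(len(batch), &logical, &mu)
			for i, r := range batch {
				r.done <- ts + int64(i) // consecutive timestamps
			}
			<-inflight
		}(batch)
	}
}

func main() {
	reqCh := make(chan *tsoRequest, 64)
	go dispatch(reqCh, 4)

	var wg sync.WaitGroup
	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			req := &tsoRequest{done: make(chan int64, 1)}
			reqCh <- req
			<-req.done
		}()
	}
	wg.Wait()
	close(reqCh)
	fmt.Println("served 16 requests")
}
```

The `inflight` channel is the trade-off knob the issue describes: a larger cap means each `GetTS` waits less to be batched, at the cost of more concurrent RPCs and more pressure on the TSO allocator; a cap of 1 degenerates to today's serial behavior.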