
extend buf handle size #32

Closed · wants to merge 1 commit

Conversation

@00k (Collaborator) commented Oct 22, 2015

issue #31
Main changes:

  1. tran_buf_pool's malloc function gains a factor parameter, used as a multiplier on SOFA_PBRPC_TRAN_BUF_BLOCK_SIZE.
  2. BufHandle's buffer size is no longer fixed.
  3. When WriteBuffer is extended, each newly allocated tranbuf doubles in size (1k, 2k, 4k, ...) up to a maximum (32k); a sketch of this policy follows below.
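
A minimal sketch of the growth policy described above: the factor parameter and the doubling schedule follow this PR's description, but the function names and the 1KB base size are assumptions, not the actual patch.

```cpp
#include <algorithm>
#include <cstdlib>

// Assumed values: the real base size comes from SOFA_PBRPC_TRAN_BUF_BLOCK_SIZE.
static const int kBaseBlockSize = 1024; // 1KB base block (assumption)
static const int kMaxFactor     = 32;   // caps blocks at 32KB

// Hypothetical malloc taking a 'factor' multiplier on the base block size,
// mirroring change 1 above.
void* tran_buf_malloc(int factor)
{
    return ::malloc(factor * kBaseBlockSize);
}

// On each WriteBuffer extension, double the factor: 1, 2, 4, ..., 32.
int next_factor(int last_factor)
{
    return std::min(last_factor * 2, kMaxFactor);
}
```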

@qinzuoyan (Collaborator) commented:

I think it would be more reasonable to store block_size in the block's header, right after ref_counter.
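
A sketch of the proposed hidden header, assuming the field order suggested above; the struct and field names are illustrative, not sofa-pbrpc's actual definitions.

```cpp
// Hypothetical layout of the hidden header at the start of each block:
// ref_counter already exists, block_size is the proposed addition.
struct BlockHeader
{
    volatile long ref_counter; // shared-ownership count for the block
    int           block_size;  // proposed: total size of this block
};

// User data would follow the header:
//   char* data = reinterpret_cast<char*>(header + 1);
```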

@00k (Collaborator, Author) commented Oct 22, 2015

Putting the size in the header is fine for WriteBuffer, but in ReadBuffer one block may be split across two BufHandles (when a message is sliced). In that case BufHandle::data points into the middle of the block, and the size stored in the header is no longer usable.

@qinzuoyan (Collaborator) commented:

"BufHandle::data points into the middle of the block" is a misunderstanding: BufHandle::data always points to the start of the block, and data + offset points to the start of the data.
I am also working on improvements and optimizations on my side; I will post them for review once they are done.
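
A minimal sketch of the BufHandle semantics described above; data and offset behave as stated in this thread, while the size field and its exact meaning are assumptions.

```cpp
// Illustrative only, not the actual sofa-pbrpc definition.
struct BufHandle
{
    char* data;   // always points to the start of the block
    int   size;   // assumed: number of valid bytes in this handle
    int   offset; // where this handle's bytes begin within the block
};

// The readable region is [data + offset, data + offset + size), so two
// BufHandles can share one block (message slicing in ReadBuffer) while
// each can still locate the per-block header via 'data'.
```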

@qinzuoyan (Collaborator) commented:

The optimization is done; see 0dbfbbd8acfe521736cd3.

Main optimization points:

  • A block_size field is added to the block's hidden header. When WriteBuffer is extended, each newly allocated tranbuf doubles in size (1k, 2k, 4k, ...) up to a maximum (32k), as a trade-off between "lowering latency for large payloads" and "avoiding wasted memory". ReadBuffer can share blocks across multiple buffers and so wastes no memory, which is why rpc_message_stream always receives into maximum-size (32K) blocks.
  • tcp_no_delay is set to avoid the extra delay that Nagle's Algorithm adds to the last packet; see http://www.cnblogs.com/polymorphism/archive/2012/12/10/High_Latency_for_Small_Size_Entities_in_Table_Service.html . This costs a small amount of qps, so the SOFA_PBRPC_TCP_NO_DELAY macro controls whether the feature is enabled (on by default); a sketch follows below.
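
A hedged sketch of what the tcp_no_delay switch could look like with Boost.Asio (which sofa-pbrpc builds on); the helper function and the macro wiring are illustrative, though boost::asio::ip::tcp::no_delay and set_option are real Boost.Asio APIs.

```cpp
#include <boost/asio.hpp>

// Illustrative helper: apply the compile-time SOFA_PBRPC_TCP_NO_DELAY
// switch to a connected socket.
void apply_no_delay(boost::asio::ip::tcp::socket& socket)
{
#if defined(SOFA_PBRPC_TCP_NO_DELAY)
    // Disable Nagle's algorithm: the last partial segment of a message is
    // sent immediately instead of waiting for an ACK of in-flight data.
    boost::asio::ip::tcp::no_delay option(true);
    socket.set_option(option);
#else
    (void)socket; // Nagle's algorithm stays enabled; slightly higher qps
#endif
}
```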

Results (from running test/perf_test/test_delay.sh):

| Payload per request | 0.1KB | 1KB | 10KB | 100KB | 1MB | 10MB |
| --- | --- | --- | --- | --- | --- | --- |
| Latency before (microseconds) | 56 | 74 | 79942 | 31535 | 53439 | 124989 |
| Latency after (microseconds) | 53 | 57 | 186 | 533 | 3782 | 25234 |

  • For large payloads the latency drop is substantial.
  • The dramatic improvement at 10KB comes from setting tcp_no_delay.

@qinzuoyan closed this on Oct 27, 2015