
perf: reduce overhead #93

Merged
merged 10 commits into honojs:main from perf/reduce-overhead on Nov 12, 2023

Conversation

@tsctx (Contributor) commented Nov 12, 2023

Hi @yusukebe,
I saw your post on X (Twitter) and tried to optimize it.
For performance improvement:

  1. Use writeHead instead of setHeader.
  2. Stop converting the ReadableStream to a Readable (i.e. do not call Readable.fromWeb).
  3. Do not use Readable.pipeline.
  4. Use Response#arrayBuffer instead of Response#text plus Buffer.byteLength.
  5. There is no need to use Headers#getSetCookie, since the values are already split.

In other words, this change reduces the overhead of the body-handling part of the response path (a rough sketch of the streamlined write path follows the diagram below).

  • main
    incomingMessage => Request => handling => Response => ReadableStream => Readable => outgoingMessage
  • this PR
    incomingMessage => Request => handling => Response => ReadableStream => outgoingMessage
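
A minimal sketch of what that streamlined path can look like, assuming a Node ServerResponse named outgoing and a fetch Response named res (this is illustrative only, not the exact code in this PR):

import type { ServerResponse } from 'node:http'

// Illustrative sketch only; not the exact implementation in this PR.
async function writeResponse(res: Response, outgoing: ServerResponse) {
  // (1) Write the status and all headers in a single writeHead() call
  //     instead of repeated setHeader() calls.
  const headers: Record<string, string> = {}
  res.headers.forEach((value, key) => {
    headers[key] = value
  })
  outgoing.writeHead(res.status, headers)

  if (!res.body) {
    outgoing.end()
    return
  }

  // (2)(3) Write the Web ReadableStream straight to the outgoing message,
  //        skipping Readable.fromWeb() and the stream pipeline.
  const reader = res.body.getReader()
  for (;;) {
    const { done, value } = await reader.read()
    if (done || value === undefined) break
    outgoing.write(value)
  }
  outgoing.end()
}

For fixed-size bodies, point 4 amounts to reading the bytes once with Buffer.from(await res.arrayBuffer()), which yields both the payload and its length without the extra encode/decode round trip of Response#text plus Buffer.byteLength.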

@tsctx (Contributor, Author) commented Nov 12, 2023

@yusukebe
If you don't mind, could you run a benchmark?

@yusukebe (Member) commented Nov 12, 2023

Hi @tsctx!

Super cool! The five changes you made all seem good. The tests passed, and some real-world examples with this version, including streaming responses, work fine.

The benchmark I'm running is super simple, just a "Hello World" application:

import { serve } from '@hono/node-server'
import { Hono } from 'hono'

const app = new Hono()
app.get('/', (c) => c.text('Hi'))

serve(app)

And I access it with Bombardier using this command:

bombardier -d 10s --fasthttp http://localhost:3000

The results:

[Screenshot: Bombardier benchmark results, 2023-11-12 20:39]

It really has become faster! This case does not use streaming, so we may have to test another pattern later (a possible streaming route is sketched below), but for now this simple pattern is a good reference.
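
For reference, a streaming case to benchmark later could be a hypothetical route like this, where the handler returns a standard Response backed by a Web ReadableStream (not part of this PR or the benchmark above):

app.get('/stream', () => {
  // Hypothetical streaming route for a follow-up benchmark.
  const encoder = new TextEncoder()
  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      // Enqueue a few chunks so the server has to write them incrementally.
      for (let i = 0; i < 10; i++) {
        controller.enqueue(encoder.encode(`chunk ${i}\n`))
      }
      controller.close()
    },
  })
  return new Response(body, { headers: { 'Content-Type': 'text/plain' } })
})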

I'd like to merge this into "main". Do you have any more changes?

(Review comment on src/util.ts, marked resolved)
@tsctx (Contributor, Author) commented Nov 12, 2023


You may merge them.
If you benchmark streaming, please let me know the results!

@yusukebe (Member) commented Nov 12, 2023

Thanks! Okay, let's ship it.

If you have any ideas for improvement, we'd be glad if you created an issue or PR!

@yusukebe merged commit 0c5979f into honojs:main on Nov 12, 2023
3 checks passed
@tsctx deleted the perf/reduce-overhead branch on November 12, 2023 at 12:21