This repository contains a proof-of-concept HTTP/2 implementation in Seastar based on the nghttp2 library. Some key concepts/motivations:
- Usage of nghttp2, as it is a well-known, stable and fast library. Implementing HTTP/2 from scratch would be much more time-consuming.
- Simple integration with the existing HTTP/1.1 implementation in Seastar. To keep this implementation as unintrusive as possible, all HTTP/2 handling was put into the http2_connection class. Consequently, httpd listens on two ports: 10000 for HTTP/1.1 connections and 3000 for HTTP/2 connections.
- Both h2c (over TCP) and h2 (over TLS) are supported.
Libraries and frameworks:
- Seastar framework: http://seastar.io/
- Nghttp2 library: https://nghttp2.org/
Compilers:
- A C++17-capable compiler is required
```shell
$ ./build/release/apps/httpd/httpd --node=server --debug=true --port=3000
$ curl --http2-prior-knowledge -G 127.0.0.1:3000
```
Dumps:

```shell
$ ./build/release/apps/httpd/httpd --node=server --debug=true --port=3000
Seastar HTTP/1.1 legacy server listening on port 10000 ...
Seastar HTTP/2 server listening on port 3000 ...
method: GET
path: /
scheme: http
$ curl --http2-prior-knowledge -G 127.0.0.1:3000
handle /
$ curl --http2-prior-knowledge --output out_faces.png -G 127.0.0.1:3000/http2rulez.com/public/assets/images/faces.png
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  387k  100  387k    0     0  3652k      0 --:--:-- --:--:-- --:--:-- 3652k
```
```shell
$ ./build/debug/apps/httpd/httpd --node=server --debug=true --port=3000 -c 1
WARNING: debug mode. Not for benchmarking or production
WARN 2018-08-31 15:41:05,184 seastar - Seastar compiled with default allocator, heap profiler not supported
WARN 2018-08-31 15:41:05,206 [shard 0] seastar - Unable to set SCHED_FIFO scheduling policy for timer thread; latency impact possible. Try adding CAP_SYS_NICE
Seastar HTTP/1.1 legacy server listening on port 10000 ...
Seastar HTTP/2 server listening on port 3000 ...
opened /home/yurai/seastar/http2_reload/test_http2//http2rulez.com/public/assets/images/faces.png
remaining body: 396431 chunk size: 16384
remaining body: 380047 chunk size: 16384
remaining body: 363663 chunk size: 16384
remaining body: 347279 chunk size: 16384
remaining body: 330895 chunk size: 16384
remaining body: 314511 chunk size: 16384
remaining body: 298127 chunk size: 16384
remaining body: 281743 chunk size: 16384
remaining body: 265359 chunk size: 16384
remaining body: 248975 chunk size: 16384
remaining body: 232591 chunk size: 16384
remaining body: 216207 chunk size: 16384
remaining body: 199823 chunk size: 16384
remaining body: 183439 chunk size: 16384
remaining body: 167055 chunk size: 16384
remaining body: 150671 chunk size: 16384
remaining body: 134287 chunk size: 16384
remaining body: 117903 chunk size: 16384
remaining body: 101519 chunk size: 16384
remaining body: 85135 chunk size: 16384
remaining body: 68751 chunk size: 16384
remaining body: 52367 chunk size: 16384
remaining body: 35983 chunk size: 16384
remaining body: 19599 chunk size: 16384
remaining body: 3215 chunk size: 3215
```
```
new session: 127.0.0.1:54700
state=1
[INFO] C ----------------------------> S (SETTINGS)
state=0
[INFO] C ----------------------------> S (HEADERS)
state=2
state=2
state=2
state=2
state=2
state=2
state=1
[INFO] C ----------------------------> S (HEADERS)
state=5
[INFO] C <---------------------------- S (SETTINGS)
state=5
[INFO] C <---------------------------- S (SETTINGS)
state=5
[INFO] C <---------------------------- S (HEADERS)
state=5
[INFO] C <---------------------------- S (DATA)
state=4
state=1
[INFO] C ----------------------------> S (SETTINGS)
state=1
[INFO] C ----------------------------> S (GOAWAY)
```
```shell
$ ./build/release/apps/httpd/httpd --node=server --tls=false --debug=false --port=3000
$ ./build/release/apps/httpd/httpd --node=client --tls=false --con=500 --req=4000
```
Performance tests were executed on a single shard on one machine, with a comparison against the old implementation.
HTTP/2 server

```shell
$ ./build/release/apps/httpd/httpd --node=server --tls=false --debug=false --port=3000 -c 1
$ ./build/release/apps/httpd/httpd --node=client --tls=false --con=500 --req=4000 -c 1
WARN 2018-09-24 15:53:31,968 [shard 0] seastar - Unable to set SCHED_FIFO scheduling policy for timer thread; latency impact possible. Try adding CAP_SYS_NICE
established tcp connections
Total responses: 2000000
Req/s: 263739
Avg resp time: 3.79162 us
```
Legacy HTTP/1.1 server

```shell
$ ./build/release/apps/httpd/httpd --node=server --tls=false --debug=false --port=3000 -c 1
$ ./build/release/apps/seawreck/seawreck --smp 1 --server 127.0.0.1:3000
WARN 2018-06-09 14:15:33,693 [shard 0] seastar - Unable to set SCHED_FIFO scheduling policy for timer thread; latency impact possible. Try adding CAP_SYS_NICE
========== http_client ============
Server: 127.0.0.1:3000
Connections: 100
Requests/connection: dynamic (timer based)
Requests on cpu 0: 624597
Total cpus: 1
Total requests: 624597
Total time: 10.003952
Requests/sec: 62435.024867
========== done ============
```
Results are very promising: in these synthetic benchmarks the HTTP/2 server achieves roughly 4x the throughput of the legacy HTTP/1.1 implementation (263739 vs ~62435 req/s). More details will be provided soon.
```shell
$ ./build/release/apps/httpd/httpd --node=server --tls=false --debug=false --port=3000 -c 2
$ ./build/release/apps/httpd/httpd --node=client --tls=false --con=500 --req=10000 -c 2
WARN 2018-09-24 15:58:41,125 [shard 0] seastar - Unable to set SCHED_FIFO scheduling policy for timer thread; latency impact possible. Try adding CAP_SYS_NICE
established tcp connections
established tcp connections
Total responses: 10000000
Req/s: 422908
Avg resp time: 2.36458 us
```
Doubling the number of shards increased throughput further (263739 -> 422908 req/s), which suggests the implementation scales well across shards and makes me happy.
TODO