
Use new request parser #819

Merged
merged 1 commit into mirage:master from anuragsoni:new-parser on Jan 9, 2022
Conversation

@anuragsoni (Contributor) commented Jan 7, 2022

Hello,

I'll preface this by saying that this is far from ready, and I've had to comment out/break a lot of things to get a minimal server example working.

Background

I've been working on an HTTP request (and chunked-body) parser and a prototype HTTP/1.1 server, and I decided to use the Cohttp types as the base layer for my explorations. The server experiments in shuttle have shown promise, but instead of adding yet another server library I'd like to explore whether it's possible to integrate some of that work directly into Cohttp. I'm opening this really early in the process to see if an approach like the one I'm proposing here fits the Cohttp maintainers' vision (and to hear any suggestions for improving the IO layer even further).

I am starting with cohttp-lwt-unix first, as that's the cohttp backend I use at work, and I wanted to try my work on a non-Async backend to validate that the work in shuttle can be ported to non-Async libraries as well. All of the new code in this diff is a direct port of implementations from the shuttle library.

I am using the test payload referenced in #328 to measure progress.

Disclaimer: all of these test runs were made on my laptop; if someone has a better hardware setup for this kind of benchmarking, I'd appreciate additional measurements from it.

Library versions: Cohttp (the version in the current trunk as of January 6, 2022), Lwt (5.5.0), OCaml (4.13.1)

I ran an initial set of tests with wrk2, using 1000 active connections and two different request rates (80_000 and 100_000 RPS). The results from the runs on my laptop can be seen at https://gist.github.com/anuragsoni/eb095a7f4c3aec1f4ba65b9866d9f9d3 . The summary from that gist is:

  • Cohttp_lwt_unix from the current trunk had trouble getting past 60,000 requests per second, with average latencies around 4-7 seconds and tail latencies at times exceeding 10 seconds
  • Cohttp_lwt_unix with the new request parser, still using Lwt_io via conduit, was able to comfortably serve 80_000 RPS with an average latency around 2ms and tail latencies around 7ms. When attempting to serve 100_000 RPS, average latencies degraded to 1.2 seconds with tail latencies of 2 seconds
  • As an experiment I ran a cohttp-lwt-unix server that doesn't use conduit + Lwt_io for the read path, and instead just uses Lwt_unix.file_descr paired with the buffer that's part of this PR. With that, the server was able to handle 100_000 RPS with average latencies of 2ms and tail latencies of 7.6ms - link here
  • Httpaf_lwt_unix has been used as the benchmark here, and it can easily serve 100_000 RPS with an average latency of 2.5ms and a tail of 9ms

Apologies for the wall of text, but I wanted to start a discussion even though I only have a partially working solution so far. I'd like some insight into whether the proposed implementation makes sense as a starting point, and I'm also hoping to learn how best to test/benchmark such work so we can have an accurate picture of any potential improvements.

Thanks,
Anurag

@rgrinberg (Member):

As an experiment I ran a cohttp-lwt-unix server that doesn't use conduit + Lwt_io for the read path, and instead just uses Lwt_unix.file_descr paired with the buffer that's part of this PR. With that, the server was able to handle 100_000 RPS with average latencies of 2ms and tail latencies of 7.6ms - link here

If I understand correctly, the slowdown is because Lwt_io allocates an additional bigarray?

cohttp/src/s.ml (outdated review thread, resolved)
@rgrinberg (Member):

Httpaf_lwt_unix has been used as the benchmark here, and it can easily serve 100_000 RPS with an average latency of 2.5ms and a tail of 9ms

So what was the limit for Httpaf here, for reference? In other words, how much further can it push past 100k with decent latency?

?pos:int -> ?len:int -> string -> (Http.Request.t * int, error) result

val parse_chunk_length :
?pos:int -> ?len:int -> string -> (int * int, error) result
Contributor Author:

I have used int for chunk lengths, as the initial work that led to this parser assumed we'd always run on 64-bit systems. Under that assumption, int is enough to represent the largest chunk size the parser allows.
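
To illustrate why int suffices on 64-bit, here is a minimal sketch of a hex chunk-size accumulator with an overflow guard; it is not the PR's parser, and both function names are invented (String.fold_left needs OCaml >= 4.13):

let hex_digit_value = function
  | '0' .. '9' as c -> Char.code c - Char.code '0'
  | 'a' .. 'f' as c -> Char.code c - Char.code 'a' + 10
  | 'A' .. 'F' as c -> Char.code c - Char.code 'A' + 10
  | _ -> invalid_arg "not a hex digit"

let parse_hex_chunk_length s =
  String.fold_left
    (fun acc c ->
      let d = hex_digit_value c in
      (* Reject inputs that would overflow [int] (63 bits on 64-bit systems). *)
      if acc > (max_int - d) / 16 then failwith "chunk length overflows int";
      (acc * 16) + d)
    0 s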

Comment on lines 19 to 24
val parse_chunk :
?pos:int ->
?len:int ->
string ->
chunk_kind ->
(chunk_parser_result * int, error) result
Contributor Author:

My intention was that callers wouldn't need to use parse_chunk_length manually, and would instead go via parse_chunk, which orchestrates threading the chunked-body state through successive calls.

Contributor Author:

I'll also note that I don't make use of any of the chunked parsing code in this PR; I'm limiting this diff to just request parsing (up to the headers) for now.

@@ -0,0 +1,24 @@
open Lwt.Infix

type t = { buf : Bytebuffer.t; chan : Lwt_io.input_channel }
Contributor Author:

I'm adding a buffer layer alongside Lwt_io because the core IO layer needs access to an internal view of the buffer in order to feed data to the new parser.
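
As a rough sketch of the buffered-read idea (the record below is an invented stand-in for the PR's Bytebuffer, though Lwt_io.read_into is the real Lwt API):

open Lwt.Infix

type buffered = {
  bytes : Bytes.t;                 (* backing storage the parser looks at *)
  mutable filled : int;            (* number of bytes currently valid *)
  chan : Lwt_io.input_channel;
}

(* Pull more data from the channel into the free tail of the buffer. *)
let refill t =
  Lwt_io.read_into t.chan t.bytes t.filled (Bytes.length t.bytes - t.filled)
  >|= fun n ->
  t.filled <- t.filled + n;
  if n = 0 then `Eof else `Ok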

Contributor Author:

Before this is considered ready, we might want a way to grow the buffer too, I suppose (with an upper bound beyond which it's considered an error, maybe?).

Member:

Could you expand on this a bit more? When does the buffer need to grow?

Contributor Author:

I don't think it needs to grow as such, as long as we pick a value we feel is reasonable for parsing the request + headers. But I noticed that the body-decoding implementation in cohttp says it'd like to read up to a maximum of 32K bytes at a time when reading chunked bodies:

(* read between 0 and 32Kbytes of a chunk *)

So I figured that if we start with a smaller buffer than that, we'll need to grow it if we still want to read 32K bytes once we get to bodies. Or we could just pick a large enough buffer to start with and never touch it again.
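
A hedged sketch of the grow-with-an-upper-bound idea (the names and the error constructor here are invented, not from this PR):

let grow_buffer ~max_size buf ~needed =
  let cur = Bytes.length buf in
  if needed <= cur then Ok buf
  else if needed > max_size then Error `Buffer_limit_exceeded
  else begin
    (* Geometric growth amortizes the copies; never exceed the cap. *)
    let new_len = min max_size (max needed (cur * 2)) in
    let bigger = Bytes.create new_len in
    Bytes.blit buf 0 bigger 0 cur;
    Ok bigger
  end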

Member:

32K sounds like a reasonable maximum. I suppose we should determine the default buffer size experimentally and let the user customize it if need be.

Collaborator:

For reference, I went back through the history to find the PR that first introduced the limit: #330. Looks like at that stage it was arbitrary. I think it would be nice to make it configurable, something similar to Lwt_io.set_default_buffer_size for the IO buffer.
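
A knob in the style of Lwt_io.set_default_buffer_size could be as small as this sketch (the 32K default here is just the limit referenced above, not a value from the PR):

let default_buffer_size = ref 0x8000 (* 32K *)

let set_default_buffer_size n =
  if n <= 0 then invalid_arg "set_default_buffer_size"
  else default_buffer_size := n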

@rgrinberg (Member):

Is the intention to make http-parser a public API? If that's the case, a package is appropriate. But if it's not, we need to find a new place for it to live (either in http or cohttp).

@anuragsoni (Contributor Author):

So what was the limit for Httpaf here, for reference? In other words, how much further can it push past 100k with decent latency?

On my laptop it's able to go up to 125k RPS with decent latency. With a run at 150k RPS, it degraded to an average latency of 2.4 seconds.

@anuragsoni (Contributor Author):

Is the intention to make http-parser a public API? If that's the case, a package is appropriate. But if it's not, we need to find a new place for it to live (either in http or cohttp).

I am not fully sure about this. I think I'd be fine with either a public package or something that lives within http or cohttp.

@@ -0,0 +1,43 @@
open Cohttp_lwt_unix

let text =
Contributor Author:

This is the Lwt server I'm using to benchmark cohttp.

@rgrinberg (Member):

I am not fully sure about this. I think I'd be fine with either a public package or something that lives within http or cohttp.

Do you think the current API is stable? If not, then we should probably include it in Http under a Private module. Once it stabilizes, we can promote it into a separate package, library, etc.

@@ -102,6 +102,8 @@ module IO = struct
Writer.write oc buf;
return ())

let refill _ = failwith "Not implemented yet"
let with_input_buffer _ = failwith "not implemented yet"
Member:

I'm thinking about how to implement this, and I think it would actually make things easier if Byte_buffer were Bigstring-based, or at least functorized over the buffer type. The reason is that I don't see how to implement these functions efficiently for Async's Reader, for example; I'd imagine we want to use read_one_chunk_at_a_time to implement with_input_buffer. I understand that bigstrings are at best a wash performance-wise, but they help keep compatibility with existing implementations. Perhaps we could functorize Byte_buffer to make it possible to continue experimenting in Shuttle/Fiber with normal Bytes.
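
A rough sketch of what functorizing over the backing store could look like (the signature below is invented for illustration; the real Byte_buffer API may differ):

module type BACKING = sig
  type t
  val create : int -> t
  val length : t -> int
  val blit : src:t -> src_pos:int -> dst:t -> dst_pos:int -> len:int -> unit
  val sub_string : t -> pos:int -> len:int -> string
end

module Make_buffer (B : BACKING) = struct
  type t = { mutable store : B.t; mutable pos : int; mutable len : int }

  let create size = { store = B.create size; pos = 0; len = 0 }

  (* The parser only ever sees the unconsumed window, whatever the backing is. *)
  let unconsumed_data t = B.sub_string t.store ~pos:t.pos ~len:(t.len - t.pos)
end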

Collaborator:

I agree, but if we want to keep it simple, we could check how big of a performance hit it would be to just use bigstring.

Contributor Author:

Moving to bigstrings will also mean moving the parser to bigstrings. Otherwise we pay the price of allocating more strings?

I was thinking of implementing the async backend in a similar manner to the Lwt one, where a byte buffer is paired with the reader.

That said, now that I think about it, there are going to be some breaking changes, in the sense that people using the Expert response in servers will need to adapt to the new input type in the IO backends.

Collaborator:

Moving to bigstrings will also mean moving the parser to bigstrings. Otherwise we pay the price of allocating more strings?

Yes, but that is very localized to the Source module, no? In any case, it was just a curiosity, and as you mentioned, you already have an idea that would make this a non-issue.

That said, now that I think about it, there are going to be some breaking changes, in the sense that people using the Expert response in servers will need to adapt to the new input type in the IO backends.

That is the least of the problems. We are allowed to break things in cohttp now, as long as we can justify it. IMHO the speedup and latency numbers shown here would justify the breakage.

Member:

string is the way to go in OCaml 5.0, since it's both non-moving and allocated via malloc (all large OCaml objects are malloc-allocated in Multicore).

Contributor Author:

We can probably deal with this in another manner by pushing the buffer to the core cohttp library, and then the refill becomes a way for the IO layers to fill the buffer managed by cohttp. That will let us keep the IO types for the IO layers the same as they are today, while still benefiting from the buffering we need to drive the parser.

Contributor Author:

I'm not a huge fan of moving a buffer to the core library, as that removes one benefit for libraries like shuttle, which could otherwise drive the whole cohttp server machinery with just the one buffer used by its channel implementation.

Member:

That will let us keep the IO types for the IO layers the same as they are today, while still benefiting from the buffering we need to drive the parser.

Note that we don't necessarily need to retain the same IO types. For example, if we could just create Reader.t/Writer.t on the fly when the user provides Expert, that would be good enough. Though I'm not sure if it's at all possible.

Member:

I'm not a huge fan of moving a buffer to the core library, as that removes one benefit for libraries like shuttle, which could otherwise drive the whole cohttp server machinery with just the one buffer used by its channel implementation.

Yeah, it's not worth it. If we must break the current api and make Reader/Writer impossible to use, then so be it.

Contributor Author (@anuragsoni, Jan 7, 2022):

if we could just create Reader.t/Writer.t on the fly when the user provides Expert that would be good enough.

This should be doable. There will be some awkwardness for sure, but we can definitely get something working. We only need to recreate the reader, since I haven't switched the writer in the existing implementation. Pseudo-code for how I'd do this in Async:

let reader_of_in_channel t =
  (* Data already buffered but not yet consumed by the parser. *)
  let unconsumed_payload = Bytebuffer.unconsumed_data t.buf in
  let reader_pipe = Reader.pipe t.reader in
  Unix.pipe ~info:(Info.create ...)
  >>| fun (`Reader rd, `Writer wr) ->
  let new_reader = Reader.create rd in
  let writer = Writer.create wr in
  (* Replay the unconsumed bytes first so nothing is lost. *)
  Writer.write writer unconsumed_payload;
  don't_wait_for
    (Writer.transfer writer reader_pipe
       ~stop:(Reader.close_finished new_reader)
       (fun s -> Writer.write writer s)
     >>= fun () -> Writer.close writer);
  new_reader

@@ -0,0 +1,87 @@
open Core
Contributor Author:

I need to think about how to remove some of this duplication, as this is very similar to the module proposed for the Lwt backend.

Member:

What are the differences preventing you from making this generic? It seems like you just need a function over a monad + reader.

Contributor Author:

No difference. This can be moved to something that's instantiated using the IO functor (like we do for other modules such as request/response).
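
For instance, the shared read loop could be generated from the IO signature cohttp already functorizes request/response over; a sketch (read_until_parsed and its polymorphic-variant protocol are invented here):

module Make_reader (IO : Cohttp.S.IO) = struct
  open IO

  (* Keep refilling until the pure parser has enough bytes, independent
     of the concrete IO monad (Lwt, Async, ...). *)
  let rec read_until_parsed ~refill ~parse =
    match parse () with
    | `Ok v -> return (`Ok v)
    | `Need_more -> (
        refill () >>= function
        | `Eof -> return `Eof
        | `Ok -> read_until_parsed ~refill ~parse)
end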

@anuragsoni (Contributor Author) commented Jan 7, 2022

I pushed an Async update as well, but unlike the Lwt backend, I see no difference in perf between the old implementation and the new one 😞 I'll spend more time on this to try to understand what's happening.

@anuragsoni (Contributor Author) commented Jan 7, 2022

I reworked the Async server loop, and now I can see an improvement over the existing cohttp implementation. With the older cohttp-async server, when I target 80_000 RPS with 1000 active clients, I see an average latency of 8 seconds with tails of 12 seconds. With the new implementation, the average latency drops to 2.3 seconds with tails of 3.5 seconds. This is still quite a way behind the Lwt implementation.

That said, I think on the Async backend the current limiting factor might be the conduit setup. I wrote another Async backend for cohttp using shuttle (https://github.com/anuragsoni/shuttle/blob/b296ac77f590735ea56d94ef19ae503251dcd317/shuttle_cohttp/shuttle_cohttp.ml), and with that backend the average latency when serving 80_000 RPS is 6.9ms with tails of 16ms. These numbers are closer to what the new Lwt backend gets here.

All of the Async tests were run on macOS, where Async uses the select backend (the Lwt tests used libev, so Lwt probably picks up kqueue on macOS?), and the Async backend might perform a little better on Linux, where it has access to epoll.

@mseri (Collaborator) commented Jan 7, 2022

I think you can use the libuv backend now on macOS. It is experimental, but in my tests it has been working pretty reliably.

@anuragsoni (Contributor Author):

I think you can use the libuv backend now on macOS. It is experimental, but in my tests it has been working pretty reliably.

I will try that! That said, the libev backend for Lwt seems to be working well, so I'm not expecting much difference between it and libuv, since the current state of libuv support in Lwt is mostly polling file descriptors for IO readiness (same as libev). I'd be more interested in seeing test runs of Async on a Linux system so we can try its epoll backend :)

http/src/http.ml (outdated review thread)
Source.advance source 1;
stop := true)
else (
stop := true;
Member:

I think we can move stop := true outside of the if.
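
Concretely, the suggestion amounts to something like this self-contained sketch (should_advance and advance are stand-ins for the real condition and Source.advance):

(* Both branches of the original conditional set the flag, so the
   assignment can be hoisted below the [if]. *)
let step ~should_advance ~advance (stop : bool ref) =
  if should_advance then advance 1;
  stop := true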

@anuragsoni (Contributor Author):

@rgrinberg @mseri I should've asked this before I rebased: how would you like me to sync with new changes from the main branch? I'm happy to rebase, but if merge commits will make it easier to review new changes, I can switch to that for future updates.

@rgrinberg (Member) commented Jan 7, 2022 via email

@mseri (Collaborator) commented Jan 7, 2022

I agree, go wild

@mseri (Collaborator) commented Jan 8, 2022

Is this expected?

(cd _build/default/cohttp-lwt-unix/test && ./test_sanity.exe)
[DEBUG][application]: Cohttp debugging output is active
[ERROR][cohttp.lwt.server]: Unhandled exception: Lwt_io.Channel_closed("input")
.....E.
==============================================================================
Error: Cohttp Server Test:5:expert response

(Failure "Test produced 1 log messages at level >= warn")
------------------------------------------------------------------------------
Ran: 7 tests in: 0.04 seconds.
FAILED: Cases: 7 Tried: 7 Errors: 1 Failures: 0 Skip:  0 Todo: 0 Timeouts: 0.

I think it happens when a consumer reads from a channel that has already been exhausted or has been closed for other reasons (timeouts?).

@mseri (Collaborator) commented Jan 8, 2022

When expert mode was introduced, this was handled by fee808a (see the commentary on #647 (comment)).

@anuragsoni (Contributor Author):

The CI failure on ocaml-ci is because of a test for the http parser. I was using int to represent chunk lengths, but one of the tests uses a value that is too large for int on 32-bit systems.

@rgrinberg (Member):

Do you think it's acceptable to change to int64?

@anuragsoni (Contributor Author):

Do you think it's acceptable to change to int64?

Yes. Using int64 for chunk length should be fine.
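
The int64 accumulation step would look roughly like this (illustrative only, not the PR's code):

(* Safe on 32-bit systems, where the native int has only 31 bits. *)
let add_hex_digit (acc : int64) (d : int) =
  let open Int64 in
  if compare acc (div (sub max_int (of_int d)) 16L) > 0 then
    failwith "chunk length overflows int64"
  else add (mul acc 16L) (of_int d)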

@anuragsoni (Contributor Author) commented Jan 9, 2022

I factored out the buffer implementation from cohttp-lwt-unix, and I now have the mirage backend compiling with the new parser implementation. I believe that with this last commit, unless I've missed something, I don't have anything else that's still commented out or left unfinished, so I guess this PR can be considered no longer a draft.

The mirage backend is the piece I'm least confident about, as I don't have much experience with it, and I don't see any existing tests for it within the repository.

For the Async and lwt-unix backends, I think I have an idea of how we can keep the same public API by ensuring that the callback needed for the Expert response still uses Reader.t or Lwt_io.t (see comment #819 (comment)), but I'm not sure how to achieve the same for the mirage backend.

@anuragsoni anuragsoni marked this pull request as ready for review January 9, 2022 04:52
* Hand written http request/response parser
* Far more efficient IO layer
@rgrinberg (Member):

Thanks for rewriting half of cohttp! I'll merge it now, and we can solve the remaining issues in separate PRs.

@samoht @dinosaure @hannesm could you have a look at / optimize the mirage backend? The mirage backend should also be able to reap the performance benefits.

@rgrinberg rgrinberg merged commit 06d2127 into mirage:master Jan 9, 2022
@anuragsoni (Contributor Author):

I did a quick test with cohttp-mirage today. I used the macOS backend, as I don't have easy access to a Linux machine at the moment.

wrk2 -t2 -c1000 -R 80000 --latency "http://localhost:8080" -d 30s

wrk bench against cohttp-mirage 5.0.0
Initialised 2 threads in 0 ms.
Running 30s test @ http://localhost:8080
  2 threads and 1000 connections
  Thread calibration: mean lat.: 1707.276ms, rate sampling interval: 6586ms
  Thread calibration: mean lat.: 1707.088ms, rate sampling interval: 6586ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.78s     1.79s   11.26s    58.17%
    Req/Sec    25.87k     6.93    25.88k    25.00%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    6.78s 
 75.000%    8.33s 
 90.000%    9.26s 
 99.000%    9.95s 
 99.900%   10.74s 
 99.990%   11.04s 
 99.999%   11.21s 
100.000%   11.27s 

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

    3583.999     0.000000            1         1.00
    4313.087     0.100000        91013         1.11
    4927.487     0.200000       181366         1.25
    5550.079     0.300000       271885         1.43
    6164.479     0.400000       362114         1.67
    6782.975     0.500000       452952         2.00
    7090.175     0.550000       498130         2.22
    7397.375     0.600000       543526         2.50
    7708.671     0.650000       588484         2.86
    8015.871     0.700000       633877         3.33
    8327.167     0.750000       679194         4.00
    8478.719     0.775000       701774         4.44
    8634.367     0.800000       724806         5.00
    8790.015     0.825000       747399         5.71
    8945.663     0.850000       769649         6.67
    9101.311     0.875000       793039         8.00
    9175.039     0.887500       803666         8.89
    9256.959     0.900000       815683        10.00
    9330.687     0.912500       826238        11.43
    9404.415     0.925000       837298        13.33
    9486.335     0.937500       849214        16.00
    9527.295     0.943750       855257        17.78
    9560.063     0.950000       860116        20.00
    9601.023     0.956250       866107        22.86
    9641.983     0.962500       871998        26.67
    9682.943     0.968750       877757        32.00
    9699.327     0.971875       880092        35.56
    9723.903     0.975000       883610        40.00
    9740.287     0.978125       885882        45.71
    9764.863     0.981250       889004        53.33
    9789.439     0.984375       891641        64.00
    9805.823     0.985938       892816        71.11
    9838.591     0.987500       893844        80.00
    9912.319     0.989062       895295        91.43
    9986.047     0.990625       896624       106.67
   10067.967     0.992188       898031       128.00
   10117.119     0.992969       898780       142.22
   10166.271     0.993750       899456       160.00
   10223.615     0.994531       900185       182.86
   10280.959     0.995313       900847       213.33
   10346.495     0.996094       901569       256.00
   10379.263     0.996484       901925       284.44
   10420.223     0.996875       902305       320.00
   10461.183     0.997266       902621       365.71
   10518.527     0.997656       903012       426.67
   10567.679     0.998047       903342       512.00
   10600.447     0.998242       903542       568.89
   10625.023     0.998437       903676       640.00
   10665.983     0.998633       903878       731.43
   10698.751     0.998828       904043       853.33
   10739.711     0.999023       904213      1024.00
   10764.287     0.999121       904318      1137.78
   10788.863     0.999219       904397      1280.00
   10813.439     0.999316       904487      1462.86
   10838.015     0.999414       904574      1706.67
   10862.591     0.999512       904657      2048.00
   10878.975     0.999561       904711      2275.56
   10895.359     0.999609       904756      2560.00
   10903.551     0.999658       904783      2925.71
   10919.935     0.999707       904823      3413.33
   10944.511     0.999756       904879      4096.00
   10952.703     0.999780       904895      4551.11
   10969.087     0.999805       904929      5120.00
   10977.279     0.999829       904945      5851.43
   10985.471     0.999854       904958      6826.67
   11018.239     0.999878       904981      8192.00
   11034.623     0.999890       904994      9102.22
   11051.007     0.999902       905005     10240.00
   11067.391     0.999915       905013     11702.86
   11083.775     0.999927       905027     13653.33
   11100.159     0.999939       905035     16384.00
   11116.543     0.999945       905042     18204.44
   11124.735     0.999951       905047     20480.00
   11132.927     0.999957       905050     23405.71
   11149.311     0.999963       905056     27306.67
   11165.695     0.999969       905063     32768.00
   11173.887     0.999973       905068     36408.89
   11173.887     0.999976       905068     40960.00
   11182.079     0.999979       905071     46811.43
   11190.271     0.999982       905074     54613.33
   11198.463     0.999985       905077     65536.00
   11198.463     0.999986       905077     72817.78
   11198.463     0.999988       905077     81920.00
   11206.655     0.999989       905079     93622.86
   11214.847     0.999991       905080    109226.67
   11231.231     0.999992       905082    131072.00
   11231.231     0.999993       905082    145635.56
   11239.423     0.999994       905083    163840.00
   11247.615     0.999995       905084    187245.71
   11247.615     0.999995       905084    218453.33
   11255.807     0.999996       905086    262144.00
   11255.807     0.999997       905086    291271.11
   11255.807     0.999997       905086    327680.00
   11255.807     0.999997       905086    374491.43
   11255.807     0.999998       905086    436906.67
   11263.999     0.999998       905087    524288.00
   11263.999     0.999998       905087    582542.22
   11263.999     0.999998       905087    655360.00
   11263.999     0.999999       905087    748982.86
   11263.999     0.999999       905087    873813.33
   11272.191     0.999999       905088   1048576.00
   11272.191     1.000000       905088          inf
#[Mean    =     6784.695, StdDeviation   =     1792.427]
#[Max     =    11264.000, Total count    =       905088]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  1493513 requests in 30.00s, 2.91GB read
  Socket errors: connect 0, read 4294, write 12, timeout 0
Requests/sec:  49781.00
Transfer/sec:     99.41MB
wrk bench against cohttp-mirage pinned to commit 06d2127 (the new parser)
Initialised 2 threads in 0 ms.
Running 30s test @ http://localhost:8080
  2 threads and 1000 connections
  Thread calibration: mean lat.: 841.765ms, rate sampling interval: 3358ms
  Thread calibration: mean lat.: 841.993ms, rate sampling interval: 3358ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.50s   934.19ms   5.17s    57.79%
    Req/Sec    32.60k    44.57    32.66k    40.00%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    3.50s 
 75.000%    4.31s 
 90.000%    4.80s 
 99.000%    5.10s 
 99.900%    5.14s 
 99.990%    5.16s 
 99.999%    5.17s 
100.000%    5.17s 

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

    1847.295     0.000000            1         1.00
    2217.983     0.100000       114641         1.11
    2535.423     0.200000       228475         1.25
    2854.911     0.300000       342500         1.43
    3178.495     0.400000       456837         1.67
    3500.031     0.500000       570689         2.00
    3661.823     0.550000       627566         2.22
    3823.615     0.600000       684763         2.50
    3987.455     0.650000       742185         2.86
    4151.295     0.700000       799156         3.33
    4313.087     0.750000       855709         4.00
    4395.007     0.775000       884270         4.44
    4476.927     0.800000       913599         5.00
    4558.847     0.825000       942403         5.71
    4640.767     0.850000       971157         6.67
    4722.687     0.875000       999184         8.00
    4763.647     0.887500      1013301         8.89
    4804.607     0.900000      1027709        10.00
    4845.567     0.912500      1041969        11.43
    4886.527     0.925000      1055969        13.33
    4927.487     0.937500      1070119        16.00
    4947.967     0.943750      1077293        17.78
    4968.447     0.950000      1084506        20.00
    4988.927     0.956250      1091739        22.86
    5009.407     0.962500      1098939        26.67
    5029.887     0.968750      1106206        32.00
    5038.079     0.971875      1109132        35.56
    5050.367     0.975000      1113496        40.00
    5058.559     0.978125      1116363        45.71
    5070.847     0.981250      1120601        53.33
    5079.039     0.984375      1123380        64.00
    5087.231     0.985938      1126084        71.11
    5091.327     0.987500      1127401        80.00
    5095.423     0.989062      1128677        91.43
    5103.615     0.990625      1131101       106.67
    5107.711     0.992188      1132259       128.00
    5111.807     0.992969      1133367       142.22
    5115.903     0.993750      1134435       160.00
    5119.999     0.994531      1135452       182.86
    5124.095     0.995313      1136386       213.33
    5128.191     0.996094      1137288       256.00
    5128.191     0.996484      1137288       284.44
    5132.287     0.996875      1138087       320.00
    5132.287     0.997266      1138087       365.71
    5136.383     0.997656      1138792       426.67
    5136.383     0.998047      1138792       512.00
    5140.479     0.998242      1139407       568.89
    5140.479     0.998437      1139407       640.00
    5140.479     0.998633      1139407       731.43
    5144.575     0.998828      1139916       853.33
    5144.575     0.999023      1139916      1024.00
    5144.575     0.999121      1139916      1137.78
    5148.671     0.999219      1140293      1280.00
    5148.671     0.999316      1140293      1462.86
    5148.671     0.999414      1140293      1706.67
    5152.767     0.999512      1140550      2048.00
    5152.767     0.999561      1140550      2275.56
    5152.767     0.999609      1140550      2560.00
    5152.767     0.999658      1140550      2925.71
    5152.767     0.999707      1140550      3413.33
    5156.863     0.999756      1140712      4096.00
    5156.863     0.999780      1140712      4551.11
    5156.863     0.999805      1140712      5120.00
    5156.863     0.999829      1140712      5851.43
    5156.863     0.999854      1140712      6826.67
    5160.959     0.999878      1140802      8192.00
    5160.959     0.999890      1140802      9102.22
    5160.959     0.999902      1140802     10240.00
    5160.959     0.999915      1140802     11702.86
    5160.959     0.999927      1140802     13653.33
    5165.055     0.999939      1140854     16384.00
    5165.055     0.999945      1140854     18204.44
    5165.055     0.999951      1140854     20480.00
    5165.055     0.999957      1140854     23405.71
    5165.055     0.999963      1140854     27306.67
    5165.055     0.999969      1140854     32768.00
    5165.055     0.999973      1140854     36408.89
    5165.055     0.999976      1140854     40960.00
    5169.151     0.999979      1140877     46811.43
    5169.151     0.999982      1140877     54613.33
    5169.151     0.999985      1140877     65536.00
    5169.151     0.999986      1140877     72817.78
    5169.151     0.999988      1140877     81920.00
    5169.151     0.999989      1140877     93622.86
    5169.151     0.999991      1140877    109226.67
    5169.151     0.999992      1140877    131072.00
    5169.151     0.999993      1140877    145635.56
    5169.151     0.999994      1140877    163840.00
    5169.151     0.999995      1140877    187245.71
    5169.151     0.999995      1140877    218453.33
    5169.151     0.999996      1140877    262144.00
    5169.151     0.999997      1140877    291271.11
    5169.151     0.999997      1140877    327680.00
    5169.151     0.999997      1140877    374491.43
    5169.151     0.999998      1140877    436906.67
    5169.151     0.999998      1140877    524288.00
    5173.247     0.999998      1140879    582542.22
    5173.247     1.000000      1140879          inf
#[Mean    =     3503.824, StdDeviation   =      934.188]
#[Max     =     5169.152, Total count    =      1140879]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  1890298 requests in 30.00s, 3.69GB read
Requests/sec:  63009.22
Transfer/sec:    125.83MB

@anuragsoni anuragsoni deleted the new-parser branch January 9, 2022 17:35
@anuragsoni anuragsoni mentioned this pull request Jan 9, 2022
@mseri mseri mentioned this pull request Jan 10, 2022
bikallem pushed a commit to bikallem/ocaml-cohttp that referenced this pull request Jan 24, 2022
* We only have a new parser for requests for now
* We only touched the read path with mirage#819
mseri added a commit to mseri/opam-repository that referenced this pull request Oct 24, 2022
…p-mirage, cohttp-lwt, cohttp-lwt-unix, cohttp-lwt-jsoo, cohttp-eio, cohttp-curl, cohttp-curl-lwt, cohttp-curl-async, cohttp-bench and cohttp-async (6.0.0~alpha0)

CHANGES:

- cohttp-eio: ensure "Host" header is the first header in http client requests (bikallem mirage/ocaml-cohttp#939)
- cohttp-eio: add TE header in client. Check TE header in server (bikallem mirage/ocaml-cohttp#941)
- cohttp-eio: add User-Agent header to request from Client (bikallem mirage/ocaml-cohttp#940)
- cohttp-eio: add Content-Length header to request/response (bikallem mirage/ocaml-cohttp#929)
- cohttp-eio: add cohttp-eio client api - Cohttp_eio.Client (bikallem mirage/ocaml-cohttp#879)
- http: add requires_content_length function for requests and responses (bikallem mirage/ocaml-cohttp#879)
- cohttp-eio: use Eio.Buf_write and improve server API (talex5 mirage/ocaml-cohttp#887)
- cohttp-eio: update to Eio 0.3 (talex5 mirage/ocaml-cohttp#886)
- cohttp-eio: convert to Eio.Buf_read (talex5 mirage/ocaml-cohttp#882)
- cohttp lwt client: Connection cache and explicit pipelining (madroach mirage/ocaml-cohttp#853)
- http: add Http.Request.make and simplify Http.Response.make (bikallem mseri mirage/ocaml-cohttp#878)
- http: add pretty printer functions (bikallem mirage/ocaml-cohttp#880)
- New eio based client and server on top of the http library (bikallem mirage/ocaml-cohttp#857)
- New curl based clients (rgrinberg mirage/ocaml-cohttp#813)
  + cohttp-curl-lwt for an Lwt backend
  + cohttp-curl-async for an Async backend
- Completely new Parsing layers for servers (anuragsoni mirage/ocaml-cohttp#819)
  + Cohttp now uses an optimized parser for requests.
  + The new parser produces far fewer temporary buffers during read operations
    in servers.
- Faster header comparison (gasche mirage/ocaml-cohttp#818)
- Introduce http package containing common signatures and structures useful for
  compatibility with cohttp - and no dependencies (rgrinberg mirage/ocaml-cohttp#812)
- async(server): allow reading number of active connections (anuragsoni mirage/ocaml-cohttp#809)
- Various internal refactors (rgrinberg, mseri, mirage/ocaml-cohttp#802, mirage/ocaml-cohttp#812, mirage/ocaml-cohttp#820, mirage/ocaml-cohttp#800, mirage/ocaml-cohttp#799,
  mirage/ocaml-cohttp#797)
- http (all cohttp server backends): Consider the connection header in response
  in addition to the request when deciding on whether to keep a connection
  alive (anuragsoni, mirage/ocaml-cohttp#843)
  + The user provided Response can contain a connection header. That header
    will also be considered in addition to the connection header in requests
    when deciding whether to use keep-alive. This allows a handler to decide to
    close a connection even if the client requested a keep-alive in the
    request.
- async(server): allow creating a server without using conduit (anuragsoni mirage/ocaml-cohttp#839)
  + Add `Cohttp_async.Server.Expert.create` and
    `Cohttp_async.Server.Expert.create_with_response_action` that can be used to
    create a server without going through Conduit. This allows creating an
    async TCP server using the Tcp module from `Async_unix` and lets the user
    have more control over how the `Reader.t` and `Writer.t` are created.
- http(header): faster `to_lines` and `to_frames` implementation (mseri mirage/ocaml-cohttp#847)
- cohttp(cookies): use case-insensitive comparison to check for `set-cookies` (mseri mirage/ocaml-cohttp#858)
- New lwt based server implementation: cohttp-server-lwt-unix
  + This new implementation does not depend on conduit and has a simpler and
    more flexible API
- async: Adapt cohttp-curl-async to work with core_unix.
- *Breaking changes*
  + refactor: move opam metadata to dune-project (rgrinberg mirage/ocaml-cohttp#811)
  + refactor: deprecate Cohttp_async.Io (rgrinberg mirage/ocaml-cohttp#807)
  + fix: move more internals to Private (rgrinberg mirage/ocaml-cohttp#806)
  + fix: deprecate transfer encoding field (rgrinberg mirage/ocaml-cohttp#805)
  + refactor: deprecate Cohttp_async.Body_raw (rgrinberg mirage/ocaml-cohttp#804)
  + fix: deprecate more aliases (rgrinberg mirage/ocaml-cohttp#803)
  + refactor: deprecate connection value (rgrinberg mirage/ocaml-cohttp#798)
  + refactor: deprecate using attributes (rgrinberg mirage/ocaml-cohttp#796)
  + cleanup: remove cohttp-{curl,server}-async (rgrinberg mirage/ocaml-cohttp#904)
  + cleanup: remove cohttp-{curl,server,proxy}-lwt (rgrinberg mirage/ocaml-cohttp#904)
  + fix: all parsers now follow the spec and require `\r\n` endings.
    Previously, the `\r` was optional. (rgrinberg, mirage/ocaml-cohttp#921)
- `cohttp-lwt-jsoo`: do not instantiate `XMLHttpRequest` object on boot (mefyl mirage/ocaml-cohttp#922)
mseri added a commit to mseri/opam-repository that referenced this pull request Oct 24, 2022
…p-mirage, cohttp-lwt, cohttp-lwt-unix, cohttp-lwt-jsoo, cohttp-eio, cohttp-curl, cohttp-curl-lwt, cohttp-curl-async, cohttp-bench and cohttp-async (6.0.0~alpha0)

CHANGES:

- cohttp-eio: ensure "Host" header is the first header in http client requests (bikallem mirage/ocaml-cohttp#939)
- cohttp-eio: add TE header in client. Check TE header is server (bikallem mirage/ocaml-cohttp#941)
- cohttp-eio: add User-Agent header to request from Client (bikallem mirage/ocaml-cohttp#940)
- cohttp-eio: add Content-Length header to request/response (bikallem mirage/ocaml-cohttp#929)
- cohttp-eio: add cohttp-eio client api - Cohttp_eio.Client (bikallem mirage/ocaml-cohttp#879)
- http: add requires_content_length function for requests and responses (bikallem mirage/ocaml-cohttp#879)
- cohttp-eio: use Eio.Buf_write and improve server API (talex5 mirage/ocaml-cohttp#887)
- cohttp-eio: update to Eio 0.3 (talex5 mirage/ocaml-cohttp#886)
- cohttp-eio: convert to Eio.Buf_read (talex5 mirage/ocaml-cohttp#882)
- cohttp lwt client: Connection cache and explicit pipelining (madroach mirage/ocaml-cohttp#853)
- http: add Http.Request.make and simplify Http.Response.make (bikallem mseri mirage/ocaml-cohttp#878)
- http: add pretty printer functions (bikallem mirage/ocaml-cohttp#880)
- New eio based client and server on top of the http library (bikallem mirage/ocaml-cohttp#857)
- New curl based clients (rgrinberg mirage/ocaml-cohttp#813)
  + cohttp-curl-lwt for an Lwt backend
  + cohttp-curl-async for an Async backend
- Completely new Parsing layers for servers (anuragsoni mirage/ocaml-cohttp#819)
  + Cohttp now uses an optimized parser for requests.
  + The new parser produces much less temporary buffers during read operations
    in servers.
- Faster header comparison (gasche mirage/ocaml-cohttp#818)
- Introduce http package containing common signatures and structures useful for
  compatibility with cohttp - and no dependencies (rgrinberg mirage/ocaml-cohttp#812)
- async(server): allow reading number of active connections (anuragsoni mirage/ocaml-cohttp#809)
- Various internal refactors (rgrinberg, mseri, mirage/ocaml-cohttp#802, mirage/ocaml-cohttp#812, mirage/ocaml-cohttp#820, mirage/ocaml-cohttp#800, mirage/ocaml-cohttp#799,
  mirage/ocaml-cohttp#797)
- http (all cohttp server backends): Consider the connection header in response
  in addition to the request when deciding on whether to keep a connection
  alive (anuragsoni, mirage/ocaml-cohttp#843)
  + The user provided Response can contain a connection header. That header
    will also be considered in addition to the connection header in requests
    when deciding whether to use keep-alive. This allows a handler to decide to
    close a connection even if the client requested a keep-alive in the
    request.
- async(server): allow creating a server without using conduit (anuragsoni mirage/ocaml-cohttp#839)
  + Add `Cohttp_async.Server.Expert.create` and
    `Cohttp_async.Server.Expert.create_with_response_action`that can be used to
    create a server without going through Conduit. This allows creating an
    async TCP server using the Tcp module from `Async_unix` and lets the user
    have more control over how the `Reader.t` and `Writer.t` are created.
- http(header): faster `to_lines` and `to_frames` implementation (mseri mirage/ocaml-cohttp#847)
- cohttp(cookies): use case-insensitive comparison to check for `set-cookies` (mseri mirage/ocaml-cohttp#858)
- New lwt based server implementation: cohttp-server-lwt-unix
  + This new implementation does not depend on conduit and has a simpler and
    more flexible API
- async: Adapt cohttp-curl-async to work with core_unix.
- *Breaking changes*
  + refactor: move opam metadata to dune-project (rgrinberg mirage/ocaml-cohttp#811)
  + refactor: deprecate Cohttp_async.Io (rgrinberg mirage/ocaml-cohttp#807)
  + fix: move more internals to Private (rgrinberg mirage/ocaml-cohttp#806)
  + fix: deprecate transfer encoding field (rgrinberg mirage/ocaml-cohttp#805)
  + refactor: deprecate Cohttp_async.Body_raw (rgrinberg mirage/ocaml-cohttp#804)
  + fix: deprecate more aliases (rgrinberg mirage/ocaml-cohttp#803)
  + refactor: deprecate connection value(rgrinberg mirage/ocaml-cohttp#798)
  + refactor: deprecate using attributes (rgrinberg mirage/ocaml-cohttp#796)
  + cleanup: remove cohttp-{curl,server}-async (rgrinberg mirage/ocaml-cohttp#904)
  + cleanup: remove cohttp-{curl,server,proxy}-lwt (rgrinberg mirage/ocaml-cohttp#904)
  + fix: all parsers now follow the spec and require `\r\n` endings.
    Previously, the `\r` was optional. (rgrinberg, mirage/ocaml-cohttp#921)
- `cohttp-lwt-jsoo`: do not instantiate `XMLHttpRequest` object on boot (mefyl mirage/ocaml-cohttp#922)
mseri added a commit to mseri/opam-repository that referenced this pull request Oct 24, 2022
…p-mirage, cohttp-lwt, cohttp-lwt-unix, cohttp-lwt-jsoo, cohttp-eio, cohttp-curl, cohttp-curl-lwt, cohttp-curl-async, cohttp-bench and cohttp-async (6.0.0~alpha0)

CHANGES:

- cohttp-eio: ensure "Host" header is the first header in http client requests (bikallem mirage/ocaml-cohttp#939)
- cohttp-eio: add TE header in client. Check TE header is server (bikallem mirage/ocaml-cohttp#941)
- cohttp-eio: add User-Agent header to request from Client (bikallem mirage/ocaml-cohttp#940)
- cohttp-eio: add Content-Length header to request/response (bikallem mirage/ocaml-cohttp#929)
- cohttp-eio: add cohttp-eio client api - Cohttp_eio.Client (bikallem mirage/ocaml-cohttp#879)
- http: add requires_content_length function for requests and responses (bikallem mirage/ocaml-cohttp#879)
- cohttp-eio: use Eio.Buf_write and improve server API (talex5 mirage/ocaml-cohttp#887)
- cohttp-eio: update to Eio 0.3 (talex5 mirage/ocaml-cohttp#886)
- cohttp-eio: convert to Eio.Buf_read (talex5 mirage/ocaml-cohttp#882)
- cohttp lwt client: Connection cache and explicit pipelining (madroach mirage/ocaml-cohttp#853)
- http: add Http.Request.make and simplify Http.Response.make (bikallem mseri mirage/ocaml-cohttp#878)
- http: add pretty printer functions (bikallem mirage/ocaml-cohttp#880)
- New eio based client and server on top of the http library (bikallem mirage/ocaml-cohttp#857)
- New curl based clients (rgrinberg mirage/ocaml-cohttp#813)
  + cohttp-curl-lwt for an Lwt backend
  + cohttp-curl-async for an Async backend
- Completely new Parsing layers for servers (anuragsoni mirage/ocaml-cohttp#819)
  + Cohttp now uses an optimized parser for requests.
  + The new parser produces much less temporary buffers during read operations
    in servers.
- Faster header comparison (gasche mirage/ocaml-cohttp#818)
- Introduce http package containing common signatures and structures useful for
  compatibility with cohttp - and no dependencies (rgrinberg mirage/ocaml-cohttp#812)
- async(server): allow reading number of active connections (anuragsoni mirage/ocaml-cohttp#809)
- Various internal refactors (rgrinberg, mseri, mirage/ocaml-cohttp#802, mirage/ocaml-cohttp#812, mirage/ocaml-cohttp#820, mirage/ocaml-cohttp#800, mirage/ocaml-cohttp#799,
  mirage/ocaml-cohttp#797)
- http (all cohttp server backends): Consider the connection header in response
  in addition to the request when deciding on whether to keep a connection
  alive (anuragsoni, mirage/ocaml-cohttp#843)
  + The user provided Response can contain a connection header. That header
    will also be considered in addition to the connection header in requests
    when deciding whether to use keep-alive. This allows a handler to decide to
    close a connection even if the client requested a keep-alive in the
    request.
- async(server): allow creating a server without using conduit (anuragsoni mirage/ocaml-cohttp#839)
  + Add `Cohttp_async.Server.Expert.create` and
    `Cohttp_async.Server.Expert.create_with_response_action`that can be used to
    create a server without going through Conduit. This allows creating an
    async TCP server using the Tcp module from `Async_unix` and lets the user
    have more control over how the `Reader.t` and `Writer.t` are created.
- http(header): faster `to_lines` and `to_frames` implementation (mseri mirage/ocaml-cohttp#847)
- cohttp(cookies): use case-insensitive comparison to check for `set-cookies` (mseri mirage/ocaml-cohttp#858)
- New lwt based server implementation: cohttp-server-lwt-unix
  + This new implementation does not depend on conduit and has a simpler and
    more flexible API
- async: Adapt cohttp-curl-async to work with core_unix.
- *Breaking changes*
  + refactor: move opam metadata to dune-project (rgrinberg mirage/ocaml-cohttp#811)
  + refactor: deprecate Cohttp_async.Io (rgrinberg mirage/ocaml-cohttp#807)
  + fix: move more internals to Private (rgrinberg mirage/ocaml-cohttp#806)
  + fix: deprecate transfer encoding field (rgrinberg mirage/ocaml-cohttp#805)
  + refactor: deprecate Cohttp_async.Body_raw (rgrinberg mirage/ocaml-cohttp#804)
  + fix: deprecate more aliases (rgrinberg mirage/ocaml-cohttp#803)
  + refactor: deprecate connection value(rgrinberg mirage/ocaml-cohttp#798)
  + refactor: deprecate using attributes (rgrinberg mirage/ocaml-cohttp#796)
  + cleanup: remove cohttp-{curl,server}-async (rgrinberg mirage/ocaml-cohttp#904)
  + cleanup: remove cohttp-{curl,server,proxy}-lwt (rgrinberg mirage/ocaml-cohttp#904)
  + fix: all parsers now follow the spec and require `\r\n` endings.
    Previously, the `\r` was optional. (rgrinberg, mirage/ocaml-cohttp#921)
- `cohttp-lwt-jsoo`: do not instantiate `XMLHttpRequest` object on boot (mefyl mirage/ocaml-cohttp#922)
mseri added a commit to mseri/opam-repository that referenced this pull request Oct 27, 2022
…p-mirage, cohttp-lwt, cohttp-lwt-unix, cohttp-lwt-jsoo, cohttp-eio, cohttp-curl, cohttp-curl-lwt, cohttp-curl-async, cohttp-bench and cohttp-async (6.0.0~alpha0)

CHANGES:

- cohttp-eio: ensure "Host" header is the first header in http client requests (bikallem mirage/ocaml-cohttp#939)
- cohttp-eio: add TE header in client. Check TE header is server (bikallem mirage/ocaml-cohttp#941)
- cohttp-eio: add User-Agent header to request from Client (bikallem mirage/ocaml-cohttp#940)
- cohttp-eio: add Content-Length header to request/response (bikallem mirage/ocaml-cohttp#929)
- cohttp-eio: add cohttp-eio client api - Cohttp_eio.Client (bikallem mirage/ocaml-cohttp#879)
- http: add requires_content_length function for requests and responses (bikallem mirage/ocaml-cohttp#879)
- cohttp-eio: use Eio.Buf_write and improve server API (talex5 mirage/ocaml-cohttp#887)
- cohttp-eio: update to Eio 0.3 (talex5 mirage/ocaml-cohttp#886)
- cohttp-eio: convert to Eio.Buf_read (talex5 mirage/ocaml-cohttp#882)
- cohttp lwt client: Connection cache and explicit pipelining (madroach mirage/ocaml-cohttp#853)
- http: add Http.Request.make and simplify Http.Response.make (bikallem mseri mirage/ocaml-cohttp#878)
- http: add pretty printer functions (bikallem mirage/ocaml-cohttp#880)
- New eio based client and server on top of the http library (bikallem mirage/ocaml-cohttp#857)
- New curl based clients (rgrinberg mirage/ocaml-cohttp#813)
  + cohttp-curl-lwt for an Lwt backend
  + cohttp-curl-async for an Async backend
- Completely new Parsing layers for servers (anuragsoni mirage/ocaml-cohttp#819)
  + Cohttp now uses an optimized parser for requests.
  + The new parser produces much less temporary buffers during read operations
    in servers.
- Faster header comparison (gasche mirage/ocaml-cohttp#818)
- Introduce http package containing common signatures and structures useful for
  compatibility with cohttp - and no dependencies (rgrinberg mirage/ocaml-cohttp#812)
- async(server): allow reading number of active connections (anuragsoni mirage/ocaml-cohttp#809)
- Various internal refactors (rgrinberg, mseri, mirage/ocaml-cohttp#802, mirage/ocaml-cohttp#812, mirage/ocaml-cohttp#820, mirage/ocaml-cohttp#800, mirage/ocaml-cohttp#799,
  mirage/ocaml-cohttp#797)
- http (all cohttp server backends): Consider the connection header in response
  in addition to the request when deciding on whether to keep a connection
  alive (anuragsoni, mirage/ocaml-cohttp#843)
  + The user provided Response can contain a connection header. That header
    will also be considered in addition to the connection header in requests
    when deciding whether to use keep-alive. This allows a handler to decide to
    close a connection even if the client requested a keep-alive in the
    request.
- async(server): allow creating a server without using conduit (anuragsoni mirage/ocaml-cohttp#839)
  + Add `Cohttp_async.Server.Expert.create` and
    `Cohttp_async.Server.Expert.create_with_response_action`that can be used to
    create a server without going through Conduit. This allows creating an
    async TCP server using the Tcp module from `Async_unix` and lets the user
    have more control over how the `Reader.t` and `Writer.t` are created.
- http(header): faster `to_lines` and `to_frames` implementation (mseri mirage/ocaml-cohttp#847)
- cohttp(cookies): use case-insensitive comparison to check for `set-cookies` (mseri mirage/ocaml-cohttp#858)
- New lwt based server implementation: cohttp-server-lwt-unix
  + This new implementation does not depend on conduit and has a simpler and
    more flexible API
- async: Adapt cohttp-curl-async to work with core_unix.
- *Breaking changes*
  + refactor: move opam metadata to dune-project (rgrinberg mirage/ocaml-cohttp#811)
  + refactor: deprecate Cohttp_async.Io (rgrinberg mirage/ocaml-cohttp#807)
  + fix: move more internals to Private (rgrinberg mirage/ocaml-cohttp#806)
  + fix: deprecate transfer encoding field (rgrinberg mirage/ocaml-cohttp#805)
  + refactor: deprecate Cohttp_async.Body_raw (rgrinberg mirage/ocaml-cohttp#804)
  + fix: deprecate more aliases (rgrinberg mirage/ocaml-cohttp#803)
  + refactor: deprecate connection value(rgrinberg mirage/ocaml-cohttp#798)
  + refactor: deprecate using attributes (rgrinberg mirage/ocaml-cohttp#796)
  + cleanup: remove cohttp-{curl,server}-async (rgrinberg mirage/ocaml-cohttp#904)
  + cleanup: remove cohttp-{curl,server,proxy}-lwt (rgrinberg mirage/ocaml-cohttp#904)
  + fix: all parsers now follow the spec and require `\r\n` endings.
    Previously, the `\r` was optional. (rgrinberg, mirage/ocaml-cohttp#921)
- `cohttp-lwt-jsoo`: do not instantiate `XMLHttpRequest` object on boot (mefyl mirage/ocaml-cohttp#922)