Fix failing unit tests that are commented out #10
Comments
Well now, that explains why it won't close after not reading the response: the worker won't close.
Something in worker_logic, or the Kill event not being broadcast properly or received. Or a combination of all of them...
Looks like this is caused by
@ReilySiegel it's definitely hanging in the toList or toList_ code:

    test_chunked_transfer_encoding : EitherT String IO ()
    test_chunked_transfer_encoding = map_error show $ with_client {e=()} new_client_default $ \client => do
      putStrLn "sending request stream"
      (response, content) <- request client GET (url' "https://httpbin.org/stream/3") [] ()
      putStrLn "response header received"
      printLn response
      -- (content, _) <- toList content
      -- putStrLn "println content"
      -- printLn $ utf8_pack $ content
      putStrLn "closing client"
      close client

I can close the client fine with the toList and the printing of the content commented out. But if I comment them back in, the test hangs and the worker loop waits forever for a message. I sprinkled a bunch of log statements like a crazy person around the code. This is without the toList and putStrLn for the content:
This is with the lines commented back in:
No structure on the log messages or anything, just throwing good old trusty "console logs" at the problem.
I will try to add some structure to the log messages to make this more readable, but it's definitely something related to the streaming code, especially the toList/toList_ functions, which make the test hang.
Pushed the ugly println mess here for now: #12. Edit: closed it; it's there for reference. I need to find the Idris2 way of debugging.
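In case it helps with the println mess: Idris2's base library ships Debug.Trace, whose trace : String -> a -> a attaches a log message to any expression without restructuring it into IO. A throwaway illustration (the function name is made up):

    import Debug.Trace

    -- Prints the message as a side effect when the expression is forced,
    -- then returns the value unchanged.
    addLogged : Nat -> Nat -> Nat
    addLogged x y = trace "adding \{show x} and \{show y}" (x + y)

Output ordering is as unprincipled as with raw putStrLn, but it works from pure code, so it may be closer to the Idris2 way of doing this kind of ad-hoc tracing.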
@ReilySiegel it's actually hanging in the channel_to_stream code from Scheduler:

    channel_to_stream : (HasIO m, Decompressor c) => c -> Channel (Either (HttpError e) (Maybe (List Bits8))) ->
                        Stream (Of Bits8) m (Either (HttpError e) ())
    channel_to_stream decomp channel = do
      putStrLn "channel_to_stream: 1"
      Right (Just content) <- liftIO $ channelGet channel
        | Right Nothing =>
            case done decomp of
                 Right [] => pure (Right ())
                 Right xs => pure (Left $ DecompressionError "\{show $ length xs} leftover bytes in decompression buffer")
                 Left err => pure (Left $ DecompressionError err)
        | Left err => pure (Left err)
      putStrLn "channel_to_stream: 2"
      let Right (content, decomp) = feed decomp content
        | Left err => pure (Left $ DecompressionError err)
      putStrLn "channel_to_stream: 3"
      fromList_ content *> channel_to_stream decomp channel

When I run this Test module:

    module Test
    import Data.Nat
    import Network.HTTP
    import Control.Monad.Error.Either
    import Control.Monad.Error.Interface

    ResultMonad : Type -> Type
    ResultMonad = EitherT (HttpError ()) IO

    getClient : IO (HttpClient ())
    getClient = new_client certificate_ignore_check 1 1 False False

    performRequest : HttpClient () -> ResultMonad (HttpResponse, Stream (Of Bits8) ResultMonad ())
    performRequest client =
      request client GET (url' "https://httpbin.org/stream/2") [("Authorization", "Bearer foo")] ()

    silly_me = do
      client <- getClient
      Right (result, stream) <- runEitherT $ performRequest client
        | Left e => printLn "Error from calling url"
      putStrLn $ show result
      Right (content, _) <- runEitherT $ toList stream
        | Left e => printLn "Error from reading response stream"
      putStrLn $ show content
      putStrLn "Done"
      close client
It hangs in the recursive function waiting for more data or something. So this channel_to_stream doesn't work for a chunked response: presumably the worker never puts the terminating Right Nothing on the channel for a chunked body, so the recursive call just blocks in channelGet forever.
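To make the failure mode concrete, here is a minimal, self-contained sketch of the same pattern channel_to_stream relies on, built only on base's System.Concurrency (nothing from this library, and the names are made up): the consumer recurses on channelGet until it sees a terminator, so if the producer never sends one, the consumer blocks forever.

    import System.Concurrency

    -- Consumer: keep reading chunks until the Nothing terminator arrives,
    -- mirroring how channel_to_stream recurses after every chunk.
    drain : Channel (Maybe (List Bits8)) -> IO (List Bits8)
    drain chan = do
      Just chunk <- channelGet chan
        | Nothing => pure []      -- end-of-body marker: stop recursing
      map (chunk ++) (drain chan) -- otherwise block waiting for the next message

    -- Producer standing in for the worker thread.
    producer : Channel (Maybe (List Bits8)) -> IO ()
    producer chan = do
      channelPut chan (Just [72, 105])
      channelPut chan (Just [33])
      channelPut chan Nothing     -- drop this line and drain never returns:
                                  -- it sits in channelGet, which is the hang

    main : IO ()
    main = do
      chan <- makeChannel
      _ <- fork (producer chan)
      body <- drain chan
      printLn body

If the worker only signals end-of-body when it has a Content-Length to count down, a chunked response would never produce that final message; that would explain the behaviour in the logs.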
I think this is the best debugging I have done so far on the chunked transfer encoding problem :) I have hit the nail on the head. Note: this happens when you call a streaming endpoint that returns ("Transfer-Encoding", "chunked").
So, conclusively, it has nothing to do directly with the Streaming module. It's an issue with the Scheduler module.
Example of chunked transfer encoding:
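A textbook chunked body, as defined by HTTP/1.1, looks like this on the wire: every chunk is prefixed by its size in hex followed by \r\n, and a zero-size chunk terminates the body. The payload below decodes to "MozillaDeveloperNetwork".

    HTTP/1.1 200 OK
    Content-Type: text/plain
    Transfer-Encoding: chunked

    7\r\n
    Mozilla\r\n
    9\r\n
    Developer\r\n
    7\r\n
    Network\r\n
    0\r\n
    \r\n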
Seems we need to read the first bytes up to a trailing \r\n and use that as an indicator of how many bytes we expect to receive, recursively, until we get 0\r\n. That shouldn't be too wildly difficult to implement, I guess. The only problem is that the transfer encoding can be both gzip and chunked.
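On the gzip point: when both codings are applied, chunked is the outer (final) transfer coding, so the bytes would be de-chunked first and only then fed to the decompressor. Here is a rough sketch of that de-chunking loop, written over a plain List Char rather than the library's List Bits8 and ignoring chunk extensions and trailers; all names are made up and nothing here is wired into the Scheduler:

    import Data.List

    -- Value of a single hex digit, if it is one.
    hexDigit : Char -> Maybe Nat
    hexDigit c =
      if isDigit c then Just (cast (ord c - ord '0'))
      else if 'a' <= c && c <= 'f' then Just (cast (ord c - ord 'a') + 10)
      else if 'A' <= c && c <= 'F' then Just (cast (ord c - ord 'A') + 10)
      else Nothing

    -- Read the hex chunk-size prefix up to its terminating \r\n.
    chunkSize : List Char -> Maybe (Nat, List Char)
    chunkSize = go 0
      where
        go : Nat -> List Char -> Maybe (Nat, List Char)
        go acc ('\r' :: '\n' :: rest) = Just (acc, rest)
        go acc (c :: rest) = do
          d <- hexDigit c
          go (16 * acc + d) rest
        go _ [] = Nothing

    -- Concatenate chunk payloads until the zero-size chunk is reached.
    dechunk : List Char -> Maybe (List Char)
    dechunk cs = do
      (size, rest) <- chunkSize cs
      if size == 0
        then Just []    -- terminating zero-size chunk; trailers ignored here
        else case drop size rest of
               '\r' :: '\n' :: rest' => (take size rest ++) <$> dechunk rest'
               _ => Nothing            -- payload must be followed by \r\n

In the real thing this would consume from the socket incrementally instead of a fully materialised list, but the size/payload/CRLF loop is the same.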
The example of chunked response encoding above is a red herring. See #15 instead for a work in progress to find the culprit.
I commented out two tests. Those need to be fixed. The library is a bit of a pain to use if the response must be read before the connection is closed. Also, chunked response encoding is not working, and it should work in HTTP 1.1.