Net::HTTP::Persistent::Error: too many connection resets #37

Open
treeder opened this issue Mar 12, 2013 · 26 comments

@treeder

treeder commented Mar 12, 2013

This was reported a while back, but that issue was closed, so I'm opening a new one: #19

We get this error a LOT, and from searching around the web it seems other people get it too; the reports all seem to trace back to net-http-persistent (http://goo.gl/c1Qes). We recently had a user who was hitting this pretty consistently, and easily, from a Heroku app, so we had him try changing the backing gem to rest-client and then typhoeus, and the problem disappeared with either of those. All our gems use the 'rest' gem (https://github.com/iron-io/rest), so we just had the user change the backing gem like so:

    Rest::Client.new(:gem=>:typhoeus)

Here are some examples of what we're seeing on a regular basis, all of these from today:

Net::HTTP::Persistent::Error: too many connection resets (due to end of file reached - EOFError) after 0 requests on 73380140, last used 1363039742.9996092 seconds ago

Net::HTTP::Persistent::Error: too many connection resets (due to end of file reached - EOFError) after 0 requests on 82025000, last used 1363039458.9997427 seconds ago

Net::HTTP::Persistent::Error: too many connection resets (due to end of file reached - EOFError) after 1 requests on 28986080, last used 59.305080555 seconds ago -- ["/usr/lib/ruby/gems/1.9.1/gems/net-http-persistent-2.8/lib/net/http/persistent.rb:959:in 

too many connection resets (due to end of file reached - EOFError) after 16 requests on 34995740, last used 0.059532367 seconds ago

And so on. They all appear to originate from the same line:

/usr/lib/ruby/gems/1.9.1/gems/net-http-persistent-2.8/lib/net/http/persistent.rb:959:in `rescue in request'
/usr/lib/ruby/gems/1.9.1/gems/net-http-persistent-2.8/lib/net/http/persistent.rb:968:in `request'

It would be great if we could find a fix because I really like your library; it performs almost as well as typhoeus, with no binary dependency, which is great.

@featalion

Hey guys, we found the cause. It depends on the Ruby version. So please close the issue.

Sorry to disturb you.

@mikerudolph

@featalion which version of Ruby did you update to in order to fix this issue? I am seeing a similar failure due to

too many connection resets (due to Timeout::Error - Timeout::Error) after 0 requests on 23848520, last used 1365149977.7872036 seconds ago

@featalion

@mikerudolph, the oldest version I tested is 1.9.3-p327, but it's possible that earlier ones, like p192, work too.

@rehevkor5

@featalion I just tested my code which exhibits this problem on ruby 1.9.3p392 (2013-02-22 revision 39386) [x86_64-linux] on an Amazon EC2 instance, and I still get the problem.

Do you know what specific change in Ruby you thought resolved this issue?

@featalion

As I remember, it was changes in Ruby's SSL code (non-blocking reads or something similar, in the C code). But I'm not sure it helps in every use case. I tested linux/darwin x64 MRI Rubies, and in my cases the issues disappeared at p327. I also tested p392 and it was OK.
For us the issue was that Ruby did not handle reading from/writing to the SSL socket correctly.
It may well still appear for some of your use cases.

@dekz

dekz commented May 13, 2013

I'm seeing this on linux-x86_64 1.9.3-p327; I'm typing the error out by hand (unable to copy). It occurs on net-http-persistent 2.7 and 2.8.

lib/net/http/persistent.rb:959:in 'rescue in request': too many connection resets (due to SSL_read:: wrong version number - OpenSSL::SSL::SSLError) after 4 requests on 10261120, last used 0.008279738 seconds ago (Net::HTTP::Persistent::Error)

It worked for some time in a multithreaded test case, then the error appeared after a minute. On a second run it happened much more quickly.

@featalion

I'm not sure it is thread-safe. For example, the program might open one socket and have N threads reading/writing to it at the same time.

@drbrain
Owner

drbrain commented May 13, 2013

net-http-persistent opens a new HTTP socket per thread.

Is there a good way for me to reproduce this? I think I will need both the ruby version and the OpenSSL version along with a test script to reproduce it.
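For reference, a minimal sketch of the per-thread usage described above: one Net::HTTP::Persistent instance shared across threads, with each thread served by its own underlying socket. This assumes the 2.x constructor (a single positional name argument) and is an illustration, not a reproduction script for the bug.

    require 'net/http/persistent'

    http = Net::HTTP::Persistent.new 'example_app'
    uri  = URI 'https://example.com/'

    threads = 4.times.map do
      Thread.new do
        # each thread gets its own underlying Net::HTTP connection
        response = http.request uri
        puts response.code
      end
    end
    threads.each(&:join)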

@featalion

In my case net-http-persistent did not read from/write to the socket at all (1.9.3-p0). @dekz, is it possible the remote end responded with bad data? I'm still interested in this because it may well happen again on our service. I'll help test if you can provide everything @drbrain asked for.

@mislav
Contributor

mislav commented Jul 25, 2013

I was bitten by the same issue as @mikerudolph above:

too many connection resets (due to Timeout::Error - Timeout::Error) after 0 requests

This is an issue separate from EOFError and SSLError mentioned above, and is easily reproducible. It will occur if the connection was configured with a read timeout and the server didn't reply in a timely fashion. Net::HTTP is going to raise a Timeout::Error and net-http-persistent handles this, among other errors, by resetting the connection, retrying, and ultimately aborting with the misleading message quoted above.

I fixed this for myself by monkeypatching net-http-persistent from master to not handle Timeout::Error. In the case of a server taking too long to generate a response, I don't think resetting the connection does us any good since it wasn't a connection problem to begin with. It would be much better to leave the original Timeout::Error unhandled and let the user retry the request if they will.
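For reference, a hedged sketch of the kind of monkeypatch described above (this is not the actual patch): the idea is to stop treating Timeout::Error as a retryable connection problem so the original exception propagates to the caller. The RETRIED_EXCEPTIONS constant is an assumption about the installed version, hence the guard.

    require 'net/http/persistent'

    # Assumption: the installed version keeps its retryable errors in
    # RETRIED_EXCEPTIONS; if it does not, this no-ops and another approach
    # (e.g. overriding the rescue in #request) would be needed.
    if defined?(Net::HTTP::Persistent::RETRIED_EXCEPTIONS)
      Net::HTTP::Persistent::RETRIED_EXCEPTIONS.delete Timeout::Error
    end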

@drbrain drbrain added the Support label Feb 5, 2014
@ifduyue

ifduyue commented May 19, 2014

I also encountered this today:

#<Net::HTTP::Persistent::Error: too many connection resets (due to Net::ReadTimeout - Net::ReadTimeout) after 1784 requests on 2561033320, last used 244.59479 seconds ago>

ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-darwin12.5.0]
net-http-persistent (2.9.4)

@mislav
Contributor

mislav commented May 20, 2014

@ifduyue Yours isn't the same issue; it seems you're getting valid "too many connection resets" errors due to read timeouts. You're probably not giving the remote server enough time to respond. Try increasing the timeout setting.
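If you are driving the gem directly, the timeouts are plain accessors; a sketch with illustrative values (the connection name and the numbers are placeholders):

    require 'net/http/persistent'

    http = Net::HTTP::Persistent.new 'example_app'
    http.open_timeout = 10    # seconds allowed to establish the connection
    http.read_timeout = 120   # seconds allowed for the server to respond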

@ifduyue

ifduyue commented May 20, 2014

@mislav thx, I'll try.

@hut8

hut8 commented May 27, 2014

I'm getting this issue with 2.0.0-p353, and have been trying to debug it for quite a while. I increased my timeout to 600 seconds. That should be way more than enough for the request I'm making (when doing this in the browser it takes < 10 seconds -- long, but not ten minutes). I don't think I have made 10582 requests; I believe I've only made about 8688 at this point in the program (unless the difference comes from the connection being reset before this via idle timeouts).

I'm using the same connection object because it's wrapped in a Mechanize object, but each request should be almost instantaneous (the only processing I do to the result is checking the status) -- so I'm not hitting an idle timeout, right?

What does "connection resets" even mean? Is that when a Net::HTTP::Persistent connection makes a new underlying connection? Why is EOFError appearing here? It seems like this should be throwing a timeout error, since 600 seconds is my read_timeout.

home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/openssl/buffering.rb:174:in `sysread_nonblock': too many connection resets (due to end of file reached - EOFError) after 10582 requests on 21232720, last used 600.798127325 seconds ago (Net::HTTP::Persistent::Error)
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/openssl/buffering.rb:174:in `read_nonblock'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/protocol.rb:153:in `rbuf_fill'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/protocol.rb:134:in `readuntil'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/protocol.rb:144:in `readline'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/http/response.rb:39:in `read_status_line'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/http/response.rb:28:in `read_new'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/http.rb:1406:in `block in transport_request'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/http.rb:1403:in `catch'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/http.rb:1403:in `transport_request'
        from /home/lbowen/.rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/net/http.rb:1376:in `request'
        from /home/lbowen/.rvm/gems/ruby-2.0.0-p353/gems/net-http-persistent-2.9/lib/net/http/persistent.rb:986:in `request'
        from /home/lbowen/.rvm/gems/ruby-2.0.0-p353/gems/mechanize-2.7.3/lib/mechanize/http/agent.rb:259:in `fetch'
        from /home/lbowen/.rvm/gems/ruby-2.0.0-p353/gems/mechanize-2.7.3/lib/mechanize.rb:440:in `get'
        from ./log-replay.rb:157:in `block in replay_log' # <-- that's the end of my code
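For anyone else hitting the idle-timeout question above, a hedged sketch of the knob involved when using net-http-persistent directly (Mechanize wraps this, so the exact setting there may differ; the value is illustrative):

    require 'net/http/persistent'

    http = Net::HTTP::Persistent.new 'example_app'
    # Connections idle longer than this are expired and reopened on the next
    # request; if the server drops idle connections first, that next request
    # can instead surface as an EOFError / "connection reset".
    http.idle_timeout = 30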

@developerinlondon

I'm hitting this issue quite frequently too.

/usr/share/ruby/2.0/net/protocol.rb:158 in "rescue in rbuf_fill"
/usr/share/ruby/2.0/net/protocol.rb:152 in "rbuf_fill"
/usr/share/ruby/2.0/net/protocol.rb:134 in "readuntil"
/usr/share/ruby/2.0/net/protocol.rb:144 in "readline"
/usr/share/ruby/2.0/net/http/response.rb:39 in "read_status_line"
/usr/share/ruby/2.0/net/http/response.rb:28 in "read_new"
/usr/share/ruby/2.0/net/http.rb:1406 in "block in transport_request"
/usr/share/ruby/2.0/net/http.rb:1403 in "catch"
/usr/share/ruby/2.0/net/http.rb:1403 in "transport_request"
/usr/share/ruby/2.0/net/http.rb:1376 in "request"
/gems/net-http-persistent-2.9.4/lib/net/http/persistent.rb:999 in "request"
/gems/mechanize-2.7.2/lib/mechanize/http/agent.rb:257 in "fetch"
/gems/mechanize-2.7.2/lib/mechanize/http/agent.rb:974 in "response_redirect"
/gems/mechanize-2.7.2/lib/mechanize/http/agent.rb:298 in "fetch"
/gems/mechanize-2.7.2/lib/mechanize.rb:432 in "get"
/lib/rorschach/http_client.rb:22 in "get"

Error is:

Net::HTTP::Persistent::Error: too many connection resets (due to Net::ReadTimeout - Net::ReadTimeout) after 0 requests on 70229092501140, last used 1405653321.0926156 seconds ago

Are you guys still having this issue? I can't fully understand what the actual problem is here...

@developerinlondon

I think it might be due to the note about it not working for Ruby 1.9+ here: https://github.com/drbrain/net-http-persistent/blob/master/lib/net/http/persistent.rb#L241
The MRI library doesn't seem to support a way to do real timeouts properly. Any suggestions on how to fix this?

@subvertallchris

Is that comment about trouble with 1.9+ still accurate? It was added to that file in June of 2012.

@drbrain
Owner

drbrain commented Nov 19, 2014

The concern with the comment is that newer versions of Net::HTTP reconnect automatically. I should rewrite this using TCPSocket only.

@tilo

tilo commented Sep 18, 2015

@drbrain I'm encountering the same error when using Gem in a Box:
geminabox/geminabox#211

@brucek

brucek commented Nov 16, 2016

I see this issue very reliably when using neo4j-core 6.1.5, neo4j 7.2.3 and resque-pool 0.6.0, with a worker pool of 4.

In my Rails 4.2 project, I have a resque.rake file with:

require 'resque/tasks'
require 'resque/pool/tasks'

task "resque:setup" => [:environment]

task "resque:pool:setup" do
  # close any sockets or files in pool manager
  Neo4j::Session.current.close
  cfg = Rails.application.config.neo4j

  # and re-open them in the resque worker parent
  Resque::Pool.after_prefork do |job|
    $redis.client.reconnect
    Neo4j::Session.open(cfg.session_type, cfg.session_path, cfg.session_options.merge( { name: "neo4j_#{Process.pid}",
                                                                                         default: true }))

    puts "#{__FILE__}:#{__LINE__}"
    p Neo4j::Session.current
    # Category is a neo4j Node class
    p Category.all.result.count

    # Cleanup just in case
    at_exit do
      Neo4j::Session.current.close
    end
  end
end

When I switch the neo4j Faraday adapter to Typhoeus, the problem instantly vanishes.
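For context, a generic Faraday sketch of the adapter swap being described (this is not neo4j-core's actual configuration; the URL is a placeholder):

    require 'faraday'
    require 'typhoeus/adapters/faraday'  # registers the :typhoeus adapter

    conn = Faraday.new(url: 'http://localhost:7474') do |f|
      # f.adapter :net_http_persistent   # setup that was raising the errors
      f.adapter :typhoeus                # swap after which the problem vanished
    end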

@brucek

brucek commented Nov 16, 2016

For completeness, errors I see look like this (a bunch removed - almost all are the "too many bad responses" errors):

/Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_response.rb:188:in `[]': no implicit conversion of Symbol into Integer (TypeError)
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_response.rb:188:in `set_data'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_response.rb:221:in `create_with_no_tx'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_session.rb:218:in `block in _query'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/activesupport-4.2.7.1/lib/active_support/notifications.rb:164:in `block in instrument'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/activesupport-4.2.7.1/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/activesupport-4.2.7.1/lib/active_support/notifications.rb:164:in `instrument'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_session.rb:211:in `_query'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-core/query.rb:244:in `response'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-core/query.rb:308:in `pluck'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-7.2.3/lib/neo4j/active_node/query/query_proxy_enumerable.rb:85:in `pluck'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-7.2.3/lib/neo4j/active_node/query/query_proxy_enumerable.rb:24:in `result'
    from /usr/local/share/workspace/havn/lib/tasks/resque.rake:20:in `block (2 levels) in <top (required)>'
    from (eval):12:in `block in call_after_prefork!'
    from (eval):11:in `each'
    from (eval):11:in `call_after_prefork!'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:415:in `block in spawn_worker!'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:411:in `fork'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:411:in `spawn_worker!'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:389:in `block in spawn_missing_workers_for'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:388:in `times'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:388:in `spawn_missing_workers_for'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:374:in `block in maintain_worker_count'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:372:in `each'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:372:in `maintain_worker_count'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:284:in `start'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:109:in `run'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool/tasks.rb:17:in `block (2 levels) in <top (required)>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:248:in `block in execute'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:243:in `each'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:243:in `execute'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:187:in `block in invoke_with_call_chain'
    from /Users/bruce/.rvm/rubies/ruby-2.3.1/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:180:in `invoke_with_call_chain'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:173:in `invoke'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool/cli.rb:185:in `start_pool'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool/cli.rb:21:in `run'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/bin/resque-pool:7:in `<top (required)>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/resque-pool:23:in `load'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/resque-pool:23:in `<main>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/ruby_executable_hooks:15:in `eval'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/ruby_executable_hooks:15:in `<main>'
/Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/net-http-persistent-2.9.4/lib/net/http/persistent.rb:1012:in `rescue in request': too many bad responses after 39 requests on 70214162798040, last used 1.181841 seconds ago (Net::HTTP::Persistent::Error)
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/net-http-persistent-2.9.4/lib/net/http/persistent.rb:1038:in `request'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/adapter/net_http_persistent.rb:25:in `perform_request'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:40:in `block in call'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/adapter/net_http_persistent.rb:21:in `with_net_http_connection'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:32:in `call'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday_middleware-0.10.1/lib/faraday_middleware/response_middleware.rb:30:in `call'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday_middleware-0.10.1/lib/faraday_middleware/request/encode_json.rb:23:in `call'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in `build_response'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in `run_request'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/faraday-0.9.2/lib/faraday/connection.rb:140:in `get'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_session.rb:50:in `open'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j-server/cypher_session.rb:6:in `block in <module:Server>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j/session.rb:125:in `create_session'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/neo4j-core-6.1.5/lib/neo4j/session.rb:110:in `open'
    from /usr/local/share/workspace/havn/lib/tasks/resque.rake:14:in `block (2 levels) in <top (required)>'
    from (eval):12:in `block in call_after_prefork!'
    from (eval):11:in `each'
    from (eval):11:in `call_after_prefork!'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:415:in `block in spawn_worker!'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:411:in `fork'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:411:in `spawn_worker!'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:389:in `block in spawn_missing_workers_for'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:388:in `times'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:388:in `spawn_missing_workers_for'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:374:in `block in maintain_worker_count'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:372:in `each'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:372:in `maintain_worker_count'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:307:in `block in join'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:301:in `loop'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:301:in `join'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool.rb:109:in `run'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool/tasks.rb:17:in `block (2 levels) in <top (required)>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:248:in `block in execute'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:243:in `each'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:243:in `execute'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:187:in `block in invoke_with_call_chain'
    from /Users/bruce/.rvm/rubies/ruby-2.3.1/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:180:in `invoke_with_call_chain'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/gems/rake-11.3.0/lib/rake/task.rb:173:in `invoke'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool/cli.rb:185:in `start_pool'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/lib/resque/pool/cli.rb:21:in `run'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bundler/gems/resque-pool-2eb1944741be/bin/resque-pool:7:in `<top (required)>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/resque-pool:23:in `load'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/resque-pool:23:in `<main>'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/ruby_executable_hooks:15:in `eval'
    from /Users/bruce/.rvm/gems/ruby-2.3.1@havn/bin/ruby_executable_hooks:15:in `<main>'

@MatzFan

MatzFan commented Jan 20, 2017

Just a ping to see if any progress is being made on this issue. I am seeing the following with a GET request using Mechanize via Tor, while trying to get a new identity by restarting the Tor service.

 Net::HTTP::Persistent::Error:
       too many connection resets (due to Connection reset by peer - Errno::ECONNRESET) after 1 requests on 47284442234660, last used 0.081786187 seconds ago

A flaky sleep 1 after restarting Tor seems to fix it, so I may try an alternative workaround.
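One alternative to the fixed sleep, sketched here with illustrative names and limits: retry the Mechanize GET a few times with a short backoff when the connection is reset right after the Tor restart.

    require 'mechanize'

    def get_with_retry(agent, url, attempts: 3)
      tries = 0
      begin
        agent.get(url)
      rescue Net::HTTP::Persistent::Error, Errno::ECONNRESET
        tries += 1
        raise if tries >= attempts
        sleep 2**tries   # back off before retrying
        retry
      end
    end

    # agent = Mechanize.new
    # page  = get_with_retry(agent, 'http://example.com/')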

@drbrain
Owner

drbrain commented Jan 20, 2017

@MatzFan this is always due to network issues, so you'll need to perform workarounds outside this library.

@MatzFan

MatzFan commented Jan 20, 2017

Understood - thanks @drbrain

@chewi

chewi commented Jan 30, 2018

No one has yet commented on the fact that the "last used seconds ago" value is sometimes ludicrously high. While writing this, it suddenly twigged that these values point to the UNIX epoch, 1970-01-01 00:00:00! Not a real problem but you might want to tweak that message. 😁
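A quick illustration of that observation (assumed mechanics, not the gem's actual code): if a connection's last-use time effectively defaults to the UNIX epoch, then "last used N seconds ago" is simply the current UNIX timestamp.

    last_use = Time.at(0)               # 1970-01-01 00:00:00 UTC
    elapsed  = Time.now - last_use
    puts elapsed                        # ~1.36e9 back in 2013, matching the reports above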

@grosser
Contributor

grosser commented May 2, 2019

Possibly fixed by removing the retry logic; please comment on #100 if you are interested.
