access violation on disconnect #293
I've pushed an update to master that switches all uses of raw pointers sent into asio async calls to shared pointers. Can you confirm that this fixes any issues you are seeing?
Hello Peter, thanks a lot.
Hello Peter, unfortunately we again experienced a crash. The application crashes if the client forcibly closes the connection during active reading. The reason is that both the async reading and async writing operations fail.

OS: Windows Server 2008 R2
Logs:
Exception: 0xC0000005: Access violation writing location 0x0000000000000024
Call stack: msvcr110.dll!memcpy() Line 358 Unknown

Thanks,
Can you describe your thread setup in a little more detail? There should be no way for the async read and async write handlers to be running concurrently.
An external io_service is used to share a common application thread pool. Could you please describe what should prevent both methods from running concurrently? Taras
The present asio transport isn't set up for use with a thread pool based io_service. At present I recommend multiple endpoints, each with a single threaded io_service. I do intend to ship an asio transport update or variant that works with thread pools, but I need to do some more testing before I feel comfortable publishing that one. More information on the current state of thread safety can be found at http://www.zaphoyd.com/websocketpp/manual/reference/thread-safety
Thanks for the information. Unfortunately this seriously limits the server's ability to process many connections. One more thing that I do differently from the samples is that I set callbacks directly on connections. Please consider usage of strand objects. Thanks,
I've pushed an update that should eliminate some of the issues related to using io_service thread pools. Let me know if it helps with the crashing with multithreaded io_services. Please note a few things:

While this means that it becomes safe to use multiple run methods on an io_service, it does not mean that connections themselves are thread safe. It is still absolutely not safe to work with connection_ptr objects or call methods directly on connections outside of that connection's handlers. To perform an action on a connection from another thread (for example, send a message from a worker thread, or send a message to connection B from a handler for connection A) you must use the wrapper methods that endpoints provide, such as endpoint::send. The overhead from the endpoint wrapper methods is very small. These are constant time lookups with very minimal locking.

Using an io_service thread pool is not a substitute for pushing non-network related work into its own processing thread. If you are doing substantial work in handlers, especially work that is not algorithmically constant in time, io_service thread pools will not prevent unfair scheduling and blocked servers.

Regarding setting callbacks directly on connections: this is already the default behavior of the library. When an endpoint creates a connection it initializes the connection settings with copies by value (including all handlers). Once a connection is created it has no further direct interactions with its endpoint. There are no shared data structures within the endpoint that connections compete for access to.
Thank you Peter, this was really, really helpful. The reason I use the connection directly for callbacks is less related to … Using a single io_service thread pool for the main processing routine … Thanks again,
Releases between this point and 0.7 have greatly improved the safety of Asio's thread pool mode. Going to close this now. If there are new specific problems with Asio thread pool mode, please open them as new issues.
When a TLS connection is used, there can be a case where the connection is already destructed while a read operation has not finished yet.
The root cause looks to be in the websocketpp\transport\asio\connection.hpp file, where the async_read_at_least method uses a plain this pointer to bind the handle_async_read operation. There are a lot of such binds that look potentially dangerous.