Should I ignore k_ESteamNetworkingSocketsDebugOutputType_Bug? #193
If you can reproduce it, can you debug it? Just put a breakpoint on that assert. Then, instead of breaking out of the thinker loop, use set-next-statement to let the straggler try to think one more time, and see who it is and what it is doing.
(And BTW, that assert is probably an indication of a bug that is worth tracking down and fixing. If you are able to reproduce it and can give me some info, I'd appreciate it.) |
I am using the latest release version (1.30); further updates may have fixed this. Here is the code of the chat client/server, modified to create spammer clients :/
The newest code only uses one wait object, regardless of the number of sockets, so the 63-socket limit should be fixed. (5b70535)

Also, are you perhaps simulating fake lag? There was a bug in the fake lag code, but I think it might have been introduced and then fixed (d515f45) since 1.30. If you reproduce a deadlock, can you see what the main thread is doing? I'd love to see a call stack from the main branch.

I recently implemented a P2P stress test, because we had a partner making hundreds of P2P connections (which are, in general, more expensive than UDP connections). It did something very similar to what you are doing: it made a bunch of connections at a relatively high rate and watched for any assertions, warnings about long lock times, etc. This led to some optimizations. I think most of the problems I fixed were specific to the P2P code, but there may have been some general fixes as well. I would be interested to know whether the problems you are seeing are fixed with the latest code in the main branch.
No, that's not it (I tested the latest version just to make sure...). The call to s_queueThinkers's item-remove function is only made when
OK, thanks. Well, if you are able to tell me what the other thread is doing (call stack), I'll bet we can get it figured out. Also, I can try to apply your changes and run your test. If you want to turn your change into a more explicit unit test (add some command-line parameters, or maybe even just an #ifdef that could be compiled into a separate executable) and submit a PR, then I can reproduce it and add it to the CI configuration. I understand if that's more work than you want to put into this, but if you are interested, I'd definitely appreciate the effort. In any case, I'll try to help get this fixed if you can give me some more info.
This is what the main thread is doing. The comment clearly says what it should do: "Reschedule him for 1ms in the future"; it just doesn't.
Just now getting back to this bug. I don't suppose you have a test harness or other script I can run to reproduce the problem?
It's not easy to reproduce. The best I could come up with is this MSVC project. Run the server from VS, then run clients from the batch script about 10 times; after ~600 successful connections, close all the clients. Wait a few seconds for the timeout, and then it should happen. (GNS is installed from vcpkg)
I ran the example chat program with many client connections at the same time, and when I terminated all the clients at once, the server failed asserts with debug output of type k_ESteamNetworkingSocketsDebugOutputType_Bug.
AssertMsg1( false, "Processed thinkers %d times -- probably one thinker keeps requesting an immediate wakeup call.", nIterations );
And
AssertMsg( len( m_vecPendingCallbacks ) < 100, "Callbacks backing up and not being checked. Need to check them more frequently!" );
These two asserts were failing for me. Do I need to ignore them, or should I terminate the server? Can I fix these issues?