fix: add read mutex in gettyWSConn (websocket) struct to prevent data race in ReadMessage() #123

Merged: 2 commits into apache:master on Jun 4, 2024

Conversation

@No-SilverBullet (Member) commented on Jun 3, 2024

What this PR does:

  1. rename "lock" to "writeLock" and add "readLock" in gettyWSConn
  2. use the "readLock" to wrap the websocket's ReadMessage function so it is concurrency safe (similar to WriteMessage()); see the sketch below
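A minimal sketch of what the two items above describe, in Go. It is based only on this summary, so the package, field, and helper names other than ReadMessage/WriteMessage are assumptions rather than the actual getty source:

```go
// Sketch of the change described above: the existing mutex becomes
// writeLock, and a new readLock serializes reads the same way writes
// are already serialized.
package getty // hypothetical package name, for illustration only

import (
	"sync"

	"github.com/gorilla/websocket"
)

type gettyWSConn struct {
	writeLock sync.Mutex // was "lock": guards WriteMessage and other write methods
	readLock  sync.Mutex // new: guards ReadMessage and other read methods
	conn      *websocket.Conn
}

// read wraps ReadMessage with readLock so at most one goroutine reads
// from the underlying connection at a time (hypothetical method name;
// the real getty method may differ).
func (w *gettyWSConn) read() (int, []byte, error) {
	w.readLock.Lock()
	defer w.readLock.Unlock()
	return w.conn.ReadMessage()
}

// write mirrors the existing pattern on the write side.
func (w *gettyWSConn) write(data []byte) error {
	w.writeLock.Lock()
	defer w.writeLock.Unlock()
	return w.conn.WriteMessage(websocket.BinaryMessage, data)
}
```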

Which issue(s) this PR fixes:
NONE

Special notes for your reviewer:
NONE

Does this PR introduce a user-facing change?:
NONE

@No-SilverBullet (Member, Author) commented on Jun 3, 2024

The concurrent writing problem mentioned in issue #119 was fixed in the previous version (v1.4.12); this PR fixes the concurrent reading problem. Both fixes are based on the following description:

"Connections support one concurrent reader and one concurrent writer.

Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently.

The Close and WriteControl methods can be called concurrently with all other methods."

-- from https://pkg.go.dev/github.com/gorilla/websocket#hdr-Concurrency
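As an illustration of that constraint (a hypothetical caller, not code from this repository): if several goroutines need to consume messages from the same connection, routing every call through one mutex-guarded wrapper keeps ReadMessage calls serialized, which is what the added readLock achieves inside getty.

```go
// Illustration only: two goroutines consuming from one gorilla connection.
// Calling conn.ReadMessage directly from both would violate the
// "one concurrent reader" rule; a shared mutex serializes the calls.
package main

import (
	"log"
	"sync"

	"github.com/gorilla/websocket"
)

type safeReader struct {
	mu   sync.Mutex
	conn *websocket.Conn
}

func (r *safeReader) readMessage() (int, []byte, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.conn.ReadMessage()
}

func consume(r *safeReader, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		_, msg, err := r.readMessage()
		if err != nil {
			return // connection closed or read failed
		}
		log.Printf("received %d bytes", len(msg))
	}
}

func main() {
	// Placeholder endpoint for illustration; replace with a real server URL.
	conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:8080/ws", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	r := &safeReader{conn: conn}
	var wg sync.WaitGroup
	wg.Add(2)
	go consume(r, &wg)
	go consume(r, &wg)
	wg.Wait()
}
```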

@AlexStocks merged commit a96fbf7 into apache:master on Jun 4, 2024
1 check passed