
fix: changed eventing configuration mutex to rwmutex and added missing lock #220

Merged: 2 commits merged into open-feature:main on Dec 6, 2022

Conversation

@skyerus (Contributor) commented on Nov 23, 2022

This PR

Changed eventing configuration mutex to rwmutex and added missing lock.
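As a minimal sketch of the resulting pattern (names follow the snippets quoted later in this thread; the `*http.Request` key and the `Notification` fields are stand-ins for flagd's actual types, not its real API): every mutation of the subscription map takes the write lock, while fan-out only needs the read lock.

```go
package eventing

import (
	"net/http"
	"sync"
)

// Notification is a stand-in for flagd's notification payload.
type Notification struct {
	Type string
	Data map[string]interface{}
}

// eventingConfiguration guards the per-request subscriber map with an RWMutex:
// writers take the full lock, broadcast only needs a read lock.
type eventingConfiguration struct {
	mu   sync.RWMutex
	subs map[*http.Request]chan Notification
}

func newEventingConfiguration() *eventingConfiguration {
	return &eventingConfiguration{subs: map[*http.Request]chan Notification{}}
}

// subscribe registers a stream's channel under the write lock.
func (e *eventingConfiguration) subscribe(req *http.Request, ch chan Notification) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.subs[req] = ch
}

// unsubscribe removes a stream's channel under the write lock.
func (e *eventingConfiguration) unsubscribe(req *http.Request) {
	e.mu.Lock()
	defer e.mu.Unlock()
	delete(e.subs, req)
}

// broadcast fans a notification out to all subscribers; it only reads the map,
// so the cheaper read lock suffices and broadcasts can run concurrently.
func (e *eventingConfiguration) broadcast(n Notification) {
	e.mu.RLock()
	defer e.mu.RUnlock()
	for _, ch := range e.subs {
		select {
		case ch <- n:
		default: // subscriber's buffer is full; don't block the broadcaster
		}
	}
}
```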

Related Issues

Fixes #219

Notes

Follow-up Tasks

How to test

```go
for {
	select {
	case <-time.After(20 * time.Second):
	case <-ticker.C:
```
Contributor (review comment on the hunk above):

I believe these have a similar outcome due to the wrapping for loop. The original intention was to keep this as a timeout: we don't need to send a keep-alive if either the request is closed or we send a notification (this is to prevent read/write timeouts).
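To make the trade-off concrete, a sketch of the loop shape under discussion; `ctx`, `stream`, `requestNotificationChan`, and `sendKeepAlive` are hypothetical stand-ins, not flagd's actual identifiers:

```go
for {
	select {
	case <-ctx.Done():
		// the client closed the request: stop streaming, no keep-alive needed
		return nil
	case n := <-requestNotificationChan:
		// a real notification also keeps the connection alive
		if err := stream.Send(n); err != nil {
			return err
		}
	case <-time.After(20 * time.Second):
		// nothing happened for 20 seconds: send a keep-alive so idle
		// connections are not dropped by read/write timeouts
		if err := sendKeepAlive(stream); err != nil {
			return err
		}
	}
}
```

The difference from a ticker is that `time.After` restarts on every pass through the loop, so the keep-alive only fires after 20 seconds of inactivity, whereas a ticker fires on a fixed cadence regardless of notifications.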

@toddbaert self-requested a review on November 23, 2022 at 13:35
@beeme1mr (Member) commented on Dec 1, 2022

@AlexsJones, could you please review this when you have a moment?

@AlexsJones (Member) commented:
I might be wrong here but

	requestNotificationChan := make(chan Notification, 1)
	s.eventingConfiguration.mu.Lock()
	s.eventingConfiguration.subs[req] = requestNotificationChan
	s.eventingConfiguration.mu.Unlock()

If you have two goroutines calling the eventStream function, the second will have its write queued.
I think what it's missing is a check, prior to the lock, to see if `requestNotificationChan := make(chan Notification, 1)` has been set at all.

@skyerus (Contributor, Author) commented on Dec 6, 2022

I'm not following the concern here; each invocation of EventStream() takes the lock while adding its channel to the service's state of ongoing connections.

> I think what it's missing is a check, prior to the lock, to see if `requestNotificationChan := make(chan Notification, 1)` has been set at all.

Each EventStream() invocation has its own channel; in what scenario would requestNotificationChan have failed to be set?
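For reference, a sketch of what each invocation does per this explanation, building on the snippet quoted above (the handler signature, the `context`/`net/http` usage, and the deferred cleanup are illustrative assumptions, not flagd's exact code):

```go
func (s *service) eventStream(ctx context.Context, req *http.Request /* stand-in request type */) error {
	// every invocation creates its own buffered channel, so the channel is
	// always set before anything reads from it
	requestNotificationChan := make(chan Notification, 1)

	// register under the write lock, per the snippet quoted earlier
	s.eventingConfiguration.mu.Lock()
	s.eventingConfiguration.subs[req] = requestNotificationChan
	s.eventingConfiguration.mu.Unlock()

	// deregister when the stream ends so the map doesn't grow unbounded
	defer func() {
		s.eventingConfiguration.mu.Lock()
		delete(s.eventingConfiguration.subs, req)
		s.eventingConfiguration.mu.Unlock()
	}()

	// ... stream from requestNotificationChan until the request is closed ...
	<-ctx.Done()
	return nil
}
```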

2 commits added, each Signed-off-by: Skye Gill <gill.skye95@gmail.com>
@skyerus (Contributor, Author) commented on Dec 6, 2022

Is the concern that we're overwriting an already existing channel in the map?
The key is a pointer to the request structure, which is unique (at least until it is garbage collected).

@AlexsJones (Member) commented:
I can't see any tangible benefits to this, so I defer to others.

@skyerus (Contributor, Author) commented on Dec 6, 2022

The benefit is that we're avoiding a potential race condition from writing to the same map concurrently.
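A standalone illustration of the failure mode being avoided (not flagd code): unsynchronized concurrent writes to a Go map are a data race, typically crashing with `fatal error: concurrent map writes`, and are flagged by the race detector (`go run -race` / `go test -race`); the lock serializes the writes.

```go
package main

import "sync"

func main() {
	subs := map[int]chan struct{}{}
	var mu sync.Mutex // a sync.RWMutex behaves the same for writers
	var wg sync.WaitGroup

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			mu.Lock() // removing this lock makes the writes a data race
			subs[i] = make(chan struct{}, 1)
			mu.Unlock()
		}(i)
	}
	wg.Wait()
}
```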

@beeme1mr merged commit 5bbef9e into open-feature:main on Dec 6, 2022
@beeme1mr pushed a commit that referenced this pull request on Jan 6, 2023:
🤖 I have created a release *beep* *boop*
---


## [0.3.0](v0.2.7...v0.3.0) (2023-01-06)


### ⚠ BREAKING CHANGES

* consolidated configuration change events into one event ([#241](#241))

### Features

* consolidated configuration change events into one event ([#241](#241)) ([f9684b8](f9684b8))
* support yaml evaluator ([#206](#206)) ([2dbace5](2dbace5))


### Bug Fixes

* changed eventing configuration mutex to rwmutex and added missing lock ([#220](#220)) ([5bbef9e](5bbef9e))
* omitempty targeting field in Flag structure ([#247](#247)) ([3f406b5](3f406b5))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Closes: [BUG] Missing lock in assignment of channel to an event stream request