PR #221: Closes #119 (Feature: add ability to delete apps)
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main     #221      +/-   ##
==========================================
+ Coverage   55.98%   56.07%   +0.10%
==========================================
  Files          74       74
  Lines        3053     3089      +36
==========================================
+ Hits         1709     1732      +23
- Misses       1176     1186      +10
- Partials      168      171       +3

Continue to review full report at Codecov.
Force-pushed from dfbafc9 to e19f7cb, then from e19f7cb to 8b8fe61.
pkg/storage/cache/cache.go (outdated)

@@ -76,6 +76,24 @@ func (cache *Cache) Flush() {
	<-cache.cleanupDone
}

func (cache *Cache) Delete(key string) {
	cache.Flush()
This causes the EvictionChannel to be closed. How else could we remove the entries from the cache?
Flush will clear the whole cache, if I understand it correctly. I don't see a Delete function in lfu-go, which is a bummer. It should be easy to add, though. We can add it to our fork of lfu-go (https://github.com/pyroscope-io/lfu-go). The function should just do something like this:
func (c *Cache) Delete(key string) {
	c.lock.Lock()
	defer c.lock.Unlock()
	delete(c.values, key)
}
To make the project use our fork, we'll need to change go.mod the same way we already do for the revive fork: https://github.com/pyroscope-io/pyroscope/blob/main/go.mod#L48
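For illustration, the new directive would take the same shape as the revive one; the module path and version below are placeholders, not the final values:

// go.mod sketch: point the upstream import path at the fork (placeholder version).
replace github.com/dgrijalva/lfu-go => github.com/pyroscope-io/lfu-go v1.0.0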
Alright, I have it done locally, but I need write access to lfu-go to push it. I have updated cache.go, but I expect tests to fail until the fork is updated.
Edit:
I have used my own fork for now, and it seems to work, at least in the tests.
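For context, here is a minimal sketch of the per-key delete this thread is converging on, assuming the fork exposes a Delete method. The struct below is a simplified stand-in for pyroscope's cache wrapper, not the actual code:

package cache

// lfuCache captures the one method we need; the fork's *lfu.Cache would
// satisfy it once Delete is added there.
type lfuCache interface {
	Delete(key string)
}

// Cache is a simplified stand-in for pyroscope's cache wrapper; the real
// struct also holds the badger handle, codecs, eviction channel, etc.
type Cache struct {
	lfu lfuCache
}

// Delete evicts a single key instead of calling Flush, which clears the
// whole cache and closes the EvictionChannel.
func (c *Cache) Delete(key string) {
	c.lfu.Delete(key)
}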
go.mod (outdated)

@@ -46,3 +46,5 @@ require (
)

replace github.com/mgechev/revive v1.0.3 => github.com/pyroscope-io/revive v1.0.6-0.20210330033039-4a71146f9dc1

replace github.com/dgrijalva/lfu-go => github.com/AdrK/lfu-go v1.0.0-test
To be replaced with proper fork
LGTM.
s.closingMutex.Lock()
defer s.closingMutex.Unlock()

if s.closing {
It is not related to the change itself, but I'm wondering if we really need to keep closingMutex acquired during the whole call. I think we can benefit from replacing the Mutex with an RWMutex, at least. Do other functions (apart from Close) actually compete? If so, should we rename it accordingly (just m/mu)?
Yeah, this is a very good point. The idea was to make sure we don't try to read/write after Close is called. But the way it is currently implemented, we basically have a lock on the whole db... I changed it to an RWMutex.
Good news is that this means we'll get better performance. Bad news is that this might now expose some bugs that were hidden before because we had a lock on the whole db. Hopefully unit tests + running the pyroscope server locally will expose those.
Oh, that's my biggest concern. We can mitigate the risk by using RLock in Get only.
Also, I think Close should acquire the mutex for the whole call.
If Close acquires the mutex for the whole call, then Get and Put will just block instead of returning errors... I think I'd rather have them return errors.

> We can mitigate the risk by using RLock in Get only.

I'll do some tests today. One thing I can tell already is that nothing breaks when I benchmark it. That's good, but it's not a 100% guarantee.
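As a concrete illustration of the pattern being discussed (a minimal sketch with assumed names, not the actual pyroscope storage code): Get takes only the read lock and returns an error once Close has run, while Close flips the flag under the write lock and does its teardown without holding it, so callers fail fast instead of blocking.

package storage

import (
	"errors"
	"sync"
)

var errClosed = errors.New("storage is closed")

// Storage is a simplified stand-in; the real type wraps badger and the cache layer.
type Storage struct {
	closingMutex sync.RWMutex
	closing      bool
}

// Get holds only the read lock, so concurrent reads do not serialize on the
// closing flag, and it returns an error instead of blocking after Close.
func (s *Storage) Get(key string) (string, error) {
	s.closingMutex.RLock()
	defer s.closingMutex.RUnlock()
	if s.closing {
		return "", errClosed
	}
	// ... the actual cache/disk lookup would happen here ...
	return "value-for-" + key, nil
}

// Close flips the flag under the write lock; in-flight Get/Put calls finish
// first because they hold read locks, and later calls get errClosed.
func (s *Storage) Close() {
	s.closingMutex.Lock()
	s.closing = true
	s.closingMutex.Unlock()
	// Resource teardown happens after releasing the mutex, so callers fail
	// fast with errClosed rather than blocking on a long-running Close.
}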
I was also wondering about the situation where we remove an item from the cache but not yet from disk. If someone calls a read in the middle, the cache will read from disk and reload the item that was just deleted. At the end we do remove it from disk, but we are left with a stale item in the cache.
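Building on the Storage sketch above, one way to close that window is to hold the write lock across both removals, so a reader cannot reload the key from disk between the two steps. The cache field and deleteFromDisk helper here are hypothetical placeholders, not the actual pyroscope code:

// DeleteApp removes a key from the in-memory cache and from disk under the
// same write lock; since Get holds the read lock for its whole lookup, it
// cannot observe "gone from cache, still on disk" and repopulate a deleted key.
func (s *Storage) DeleteApp(key string) error {
	s.closingMutex.Lock()
	defer s.closingMutex.Unlock()

	s.cache.Delete(key)          // hypothetical cache field (see the Cache sketch above)
	return s.deleteFromDisk(key) // hypothetical helper for the on-disk deletion
}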