
Support concurrent calls. #159

Open
crakjie opened this issue Jan 12, 2018 · 3 comments

Comments


crakjie commented Jan 12, 2018

Currently, calling the same key multiple times concurrently can cause the cached block to be computed multiple times, even for Future-based blocks.

import scala.concurrent.ExecutionContext.Implicits.global
import scalacache._
import scalacache.caffeine._
import scalacache.modes.scalaFuture._

val cpt = new java.util.concurrent.atomic.AtomicInteger()
def stringF() = scala.concurrent.Future { cpt.incrementAndGet(); "titi" }

implicit val caffeineCache: Cache[String] = CaffeineCache[String]

// the dumbest way I know to make concurrent calls
Seq.fill(1000)(0).par.map(_ => cachingF("Toto")(ttl = None)(stringF))

cpt.get // returns 1000

So there are 1000 calls to the block, which means the cache had no effect.

Some Scala implementations resolve this issue by caching the Future itself.
That way, every subsequent call returns a future that waits on the first caller's computation.
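The Future-caching approach described above can be sketched with a plain ConcurrentHashMap (a hypothetical helper, not part of the scalacache API): computeIfAbsent stores the Future itself, so concurrent callers for the same key all share a single in-flight computation.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical sketch, not scalacache code: cache the Future itself so that
// every caller for the same key shares one in-flight computation.
object FutureMemo {
  private val cache = new ConcurrentHashMap[String, Future[String]]()

  def cachingF(key: String)(f: => Future[String]): Future[String] =
    cache.computeIfAbsent(key, _ => f) // atomic: f is started at most once per key
}

val cpt = new AtomicInteger()
def stringF() = Future { cpt.incrementAndGet(); "titi" }

val calls = (1 to 1000).map(_ => FutureMemo.cachingF("Toto")(stringF()))
Await.result(Future.sequence(calls), 10.seconds)
cpt.get // 1: the block ran once, every other call reused the cached Future
```

The key difference from the failing example is that the map's value is the Future, not its result, so the second caller never sees an "empty" cache while the first computation is still running.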

I also tried with a cache of Future:

implicit val caffeineCache: Cache[scala.concurrent.Future[String]] = CaffeineCache[scala.concurrent.Future[String]]

Same result; but if the mode is sync, it works.

Edit:
I have experimented a bit; a cache with a fixed F, i.e. Cache[V, F[_]], where the mode is given when the cache is instantiated, may resolve the problem.

@aliaksandrKachurka

@crakjie there is a problem with caching the Future: failed futures will be cached too, which is not what one expects. Any ideas on how to solve this?
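One common answer, sketched here with a plain ConcurrentHashMap rather than scalacache itself (the class name and API below are hypothetical), is to cache the in-flight Future but evict it again if it fails, so the next caller retries:

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.{ExecutionContext, Future}
import scala.util.Failure

// Hypothetical sketch: failed Futures are removed from the cache on
// completion, so only successful results stay cached.
class RetryingFutureCache[K, V](implicit ec: ExecutionContext) {
  private val cache = new ConcurrentHashMap[K, Future[V]]()

  def getOrCompute(key: K)(f: => Future[V]): Future[V] = {
    val fut = cache.computeIfAbsent(key, _ => f)
    fut.onComplete {
      case Failure(_) => cache.remove(key, fut) // evict only if still this failed future
      case _          => ()
    }
    fut
  }
}
```

The two-argument remove(key, value) only evicts when the mapping is still the failed Future, so a fresh computation started by another thread is not thrown away. Note the eviction is asynchronous: a caller arriving between the failure and the eviction will still observe the failed Future.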


bzumhagen commented Jul 19, 2018

@aliaksandrKachurka, @crakjie
Hi, I've figured out a workaround for this, but ideally the solution would be implemented as part of memoize.

In Caffeine you can accomplish this by doing cache.get(key, k -> value), which atomically computes and inserts the value, avoiding races with other writes [source].

However, under the hood scalacache-caffeine uses cache.getIfPresent, which is racy.

You can work around it like this in scalacache:

import java.util.concurrent.atomic.AtomicInteger
import scalacache.Entry

val syncInt = new AtomicInteger(0)
val syncCache: CaffeineCache[String] = CaffeineCache[String]

syncCache.underlying.get("getSyncInt", _ => {
  Thread.sleep(500) // simulate a long-running operation so concurrency can rear its ugly head
  Entry(syncInt.incrementAndGet().toString, None)
}).value

This results in a single increment of our atomic integer at any concurrency.

while this

memoizeSync(None) {
      Thread.sleep(500)
      syncInt.incrementAndGet()
}

will increment the counter a number of times equal to your level of parallelism.

If scalacache-caffeine were updated to use get(key, k -> v) instead of getIfPresent(key), this issue would be resolved.
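The atomicity difference can be demonstrated with Caffeine's Java API directly (this assumes the caffeine dependency is on the classpath and, on Scala 2.13+, scala-parallel-collections for .par):

```scala
import com.github.benmanes.caffeine.cache.Caffeine
import java.util.concurrent.atomic.AtomicInteger

val computed = new AtomicInteger()
val cache = Caffeine.newBuilder().build[String, String]()

// get(key, mappingFunction) locks the entry while computing, so the
// mapping function runs at most once per key even under concurrent access.
(1 to 16).par.foreach { _ =>
  cache.get("key", _ => { computed.incrementAndGet(); "value" })
}
computed.get // 1
```

With getIfPresent followed by a separate put, several threads can all observe a miss before any of them has written, which is exactly the race described in this issue.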

