Implement a Caching solution or at least a framework for one #532
-
In the old Spring-based Cwms Mobile I used ehcache with Spring's @Cacheable annotation. The old design was Controllers -> Services -> Daos -> Database. Having the different layers worked well with Spring because we could add features at each layer: the controllers dealt with the URL parameters and response formats, the services combined data objects from different Daos and kept the interactions within the same transaction, and the Daos focused on ORM and table-specific queries. @Cacheable was used on a few of the Controller methods and most of the Dao methods. Not saying it was perfect, just giving background. I don't see any mention of caching in the Javalin docs, but Javalin is lighter weight than Spring was. Adding caching via an annotation was really clean and slick; Spring used those annotations to build caching proxy objects, which did make the stacktraces huge. EHCache had some attractive-sounding features, but we never got the memory and server config issues sorted out to really benefit from them. It also doesn't have to be that complex: a Map with SoftReferences in the right place is all it should take to fix some queries.
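The closing point, a Map of SoftReferences, can be sketched in a few lines. This is a minimal sketch, not code from any existing codebase; the class and method names are illustrative:

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal query cache: values are held through SoftReference, so the JVM
// may reclaim them under memory pressure instead of running out of heap.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    public V get(K key, Function<K, V> loader) {
        SoftReference<V> ref = map.get(key);
        V value = (ref != null) ? ref.get() : null; // null if GC'd or absent
        if (value == null) {
            value = loader.apply(key);              // e.g. run the real query
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```

On a warm hit the loader is never invoked, so a repeated query costs one map lookup; when memory gets tight the collector clears the references and the next call reloads.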
-
Yeah, for offices, parameters, etc. (the things that are quite static), a combination of a HashMap and cache-control headers would be a simple, direct addition. The other case we'll have, though, is multiple instances. It makes sense for them to share a cache, I think, which necessitates a little complexity.
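The "HashMap plus cache-control headers" idea could start as a per-endpoint policy table. The paths and max-age values below are illustrative assumptions, not anything decided; in Javalin the chosen value would be applied with `ctx.header("Cache-Control", ...)` before writing the body:

```java
import java.util.Map;

// Sketch: per-endpoint Cache-Control values for the near-static endpoints.
// Paths and lifetimes here are placeholders, not an agreed policy.
public class CachePolicy {
    private static final Map<String, String> POLICIES = Map.of(
        "/offices",    "public, max-age=86400",  // changes rarely
        "/parameters", "public, max-age=86400",
        "/timeseries", "no-cache"                // always revalidate
    );

    public static String headerFor(String path) {
        // Anything unknown is conservatively not cached at all.
        return POLICIES.getOrDefault(path, "no-store");
    }
}
```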
-
I use Redis personally for my own needs, and I'm actively trying to get our IT to install it on a RHEL box at our district so that I can use it internally for all the reasons I mention below. It also offers queuing and has many libraries to support your favorite language. I'm personally a fan of Redis because the learning curve seemed almost non-existent, which is huge in a day when caching gets forgotten. One of the other neat things you can do with a detached caching solution like Redis is have separate jobs that populate the tables. I use it to query third-party APIs, store the results to a central Redis instance, then read Redis for those values instead of the APIs directly. This keeps the source for each API in its own script and allows me to use one Redis library for multiple APIs. A few other reasons to +1 Redis for caching needs are:
- We got over a million hits in one day on our district's water control website alone in 2019. Eventually, it would be nice to pull data from RADAR for the websites directly, ruling out the need for backend scripts. Maybe I'm alone in that sentiment?
- If RADAR were not to have a cache, then I suggest we run stress tests with nginx/apache from connections with large data pipes to find the max limit, especially if the plan is to go full RADAR.
- Finally, if HashMap/cache-control does not meet the needs and we do decide to use a caching solution, I imagine this would be easy enough to set up in Docker. The trick would be adding calls to Redis for every API call of static content and setting the cache timeouts accordingly.
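The API-to-Redis flow described above is a read-through cache: check Redis, fall back to the slow source, then store the result with a TTL. A minimal sketch, with a Map standing in for Redis so it runs without a server; with Jedis, the two cache operations would be `jedis.get(key)` and `jedis.setex(key, ttlSeconds, value)` against the same small interface:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Read-through pattern: cache hit avoids the expensive call entirely;
// a miss calls the source and writes the value back with an expiry.
public class ReadThrough {
    interface Kv {
        String get(String key);
        void setex(String key, long ttlSeconds, String value);
    }

    static String fetch(Kv cache, String key, long ttlSeconds, Supplier<String> source) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached;              // hit: no API/database round trip
        }
        String fresh = source.get();    // miss: make the expensive call
        cache.setex(key, ttlSeconds, fresh);
        return fresh;
    }

    // In-memory stand-in that ignores TTL, for demonstration only.
    static class MapKv implements Kv {
        private final Map<String, String> m = new HashMap<>();
        public String get(String key) { return m.get(key); }
        public void setex(String key, long ttl, String value) { m.put(key, value); }
    }
}
```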
-
The queuing part of Redis interests me. We will need a queuing solution eventually to bridge users and the database and operations like comps in OpenDCS. Much as I hate eggs-in-one-basket solutions, it did seem lightweight enough to be useful. As for RADAR directly vs. backend scripts: I don't think that's an either-or situation, but those scripts going forward would need to use RADAR anyway, so the load issue is still there. It's sounding like a simple shim that the endpoints/daos use is in order, and we can have a simple solution for the T7s and national instances now and go full whatever later. A few extra function calls in Java are still faster than the round trip of a database query. I think the fun thing will be caching time series data. That may actually be better with an in-memory SQLite db just to be able to keep the structure... or, and there's nothing wrong with this, we're going to have some interesting keys.
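Those "interesting keys" might look like a flat, canonical composite of everything that identifies a time-series request, so any backend (a Map now, Redis later) can use the same string. The fields and delimiter below are assumptions for illustration only:

```java
// A time-series cache entry is identified by more than the ts id:
// office, unit, and the requested window all change the result.
// Joining them into one canonical string gives a usable cache key.
public class TsKeys {
    public static String tsCacheKey(String office, String tsId,
                                    String unit, String begin, String end) {
        // ISO-8601 timestamps keep keys unambiguous; "|" is assumed
        // not to appear in any field.
        return String.join("|", "ts", office, tsId, unit, begin, end);
    }
}
```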
-
While not quite as critical on our local installations, with a central instance and people generally requesting the same data quite often, it makes sense to determine reasonable caching for each endpoint and actually implement it.
Beyond actually doing something like Redis or memcached, even just having appropriate cache-control headers and ETags for things like the office and parameters endpoints would be beneficial, though it could break their use for certain checks if the cache isn't ignored.
Thus this would likely also necessitate additional database checks.
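A minimal sketch of generating a strong ETag from the response body with the JDK's MessageDigest (ETags are quoted strings per RFC 7232). Whether hashing the body or checking a database row version is cheaper is exactly the additional-database-checks trade-off noted above:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Derive a strong ETag from the response body so clients can revalidate
// with If-None-Match and receive a 304 when nothing has changed.
public class ETags {
    public static String etagFor(String body) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(body.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder("\"");
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.append('"').toString(); // strong ETags are quoted
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```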
Candidates are:
- JVM memory makes the most sense for the local instances.
- memcached or Redis makes the most sense for the cloud (there would be multiple instances running).