Migrate GitHub tokens to cloud Redis #3332
Two options from #1848 (comment):
We could ask for sponsorship, though this isn't a lot of money and we do have it in our budget.
I'd be good with either of those. I think Heroku offers Redis-as-a-service too.
Interesting! Heroku did agree to give Shields a credit, which we could use for our production hosting, though since Redis is a shareable add-on, we probably could use it for this as well. One concern is there doesn't seem to be an offline way to access the backup: https://devcenter.heroku.com/articles/heroku-redis#persistence
I think that's only applicable to the hobby (free) tier though. I haven't looked at any of these providers in detail, but I believe any would be a solid choice. I've not used Compose myself before but I've heard a lot of good things, and you know RedisLabs will do Redis well 😄 If we end up using Heroku for production hosting then there's probably something to be said for using their Redis offering as well.
It does say:
Though it also says:
I'm fine with Compose or RedisLabs. I guess for half the price ($9 instead of $18.50), may as well try Compose? If we do move to Heroku someday maybe that would be a good time to migrate. I sent @espadrine an email asking if he objects to moving forward with this.
Oops! I see what you meant now
SGTM
$9/month is a good price. Just as another data point, AWS is slightly difficult to compare because they bill by the hour, there are different tiers, and it depends what region you're running it in... stuff. IMO, this is a wise use of the budget because it helps the move to a hosting model that will reduce the project's bus-factor risk for deployment.
I am +1000 on anything that helps us with that 😄
Thought I'd revive this! Back in April Thaddée responded offline. He mentioned three things. (1) He was concerned about latency and suggested having one in Canada and the other in France. Since the goal is to synchronize data between servers, having two of them won't work. I'm not super-concerned about latency, since the initial tokens are loaded asynchronously after the server starts up, and after that asynchronously (and outside of any request) when something changes. As far as I can reason, it's not a problem if this is a little slow. Line 327 in aa135d3
shields/services/github/github-constellation.js Lines 53 to 79 in aa135d3
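The startup flow described above can be sketched roughly as follows. This is a minimal illustration, not the code in `github-constellation.js`: the class and method names (`TokenPool`, `MemoryPersistence`, `noteTokenAdded`) are hypothetical, and an in-memory store stands in for the Redis-backed persistence layer. The point it demonstrates is that neither the initial load nor the write-back sits on a request's critical path, so cloud-store latency is tolerable.

```javascript
// Hypothetical sketch of the async token-loading pattern discussed above.

class TokenPool {
  constructor(persistence) {
    this.persistence = persistence; // e.g. a Redis-backed store
    this.tokens = new Set();
  }

  // Invoked after the server is already accepting requests, so a slow
  // round trip to the store does not delay startup.
  async initialize() {
    const stored = await this.persistence.loadAll();
    stored.forEach(t => this.tokens.add(t));
  }

  // Fire-and-forget: the write happens outside of any request, so
  // latency to the cloud store never blocks a response.
  noteTokenAdded(token) {
    this.tokens.add(token);
    this.persistence.add(token).catch(e => console.error(e));
  }

  noteTokenInvalid(token) {
    this.tokens.delete(token);
    this.persistence.remove(token).catch(e => console.error(e));
  }
}

// In-memory stand-in for the Redis-backed persistence layer.
class MemoryPersistence {
  constructor() {
    this.set = new Set();
  }
  async loadAll() {
    return [...this.set];
  }
  async add(token) {
    this.set.add(token);
  }
  async remove(token) {
    this.set.delete(token);
  }
}
```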
(2) He also mentioned:
👍 It looks like Compose supports SSL and makes it possible to turn off non-SSL connections, so I will give that a shot. (3) Finally, he mentioned jsonsync, a library created to synchronize content like this using CRDTs. jsonsync isn't actively maintained and I'd prefer to move forward with the Redis solution for reasons mentioned here.
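For the SSL-only connection mentioned above, ioredis (for example) accepts a `rediss://` URL, which enables TLS directly; the helper below just makes that decision explicit by turning a URL into connection options. This is a hedged sketch: the function name is made up, the URL is illustrative, and the exact option shape assumes ioredis's `tls` option.

```javascript
// Hypothetical helper: derive connection options from a Redis URL,
// enabling TLS when the scheme is rediss:// (SSL-only providers).
function redisOptionsFromUrl(urlString) {
  const url = new URL(urlString);
  const useTls = url.protocol === 'rediss:';
  return {
    host: url.hostname,
    port: Number(url.port) || 6379,
    password: url.password || undefined,
    // With ioredis, a (possibly empty) tls object opens a TLS socket.
    ...(useTls ? { tls: {} } : {}),
  };
}
```

A plain `redis://` URL yields options with no `tls` key, so the same helper works for local development against a non-TLS server.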
- Move to private
- Fix validation
- Don't log the URL

Ref #3332
With the fix from #3707, this is now live and well on s0!
Live on all three! |
This was used for initially loading the uniquified GitHub tokens into Redis (#3332), but is no longer needed.
As discussed in #1848, let's migrate the Shields runtime data store from the server file system to cloud Redis.
This centralized / hub-and-spoke architecture is simpler and easier to reason about than today's peer-to-peer setup, which requires each server to know about all of the other servers. Bringing a server online requires reconfiguring all of them.
Switching to cloud Redis also makes it possible to move to a PaaS.
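One concrete consequence of the hub-and-spoke setup: bringing a new server online only requires pointing it at the shared store, rather than reconfiguring every peer. A hypothetical config fragment (the variable name and URL are assumptions, not the project's actual configuration):

```shell
# Each server only needs to know about the central Redis store,
# not about any of its peer servers.
REDIS_URL='rediss://:password@hostname:port'
```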
A reason to do this now is that we may need to add token rotation to support Gitlab (#541 (comment)). It would reduce the complexity of supporting Gitlab if we make the migration first.
The code is already in place for this (see #1939, #1906, and #1848), so the only thing left is to operationalize this: