Make shields runnable on a PaaS #1848

Closed
7 of 8 tasks
paulmelnikow opened this issue Aug 5, 2018 · 9 comments
Labels
blocker PRs and epics which block other work core Server, BaseService, GitHub auth, Shared helpers operations Hosting, monitoring, and reliability for the production badge servers

Comments

@paulmelnikow (Member) commented Aug 5, 2018

In #1742 (comment) I’ve proposed that our production hosting move to Zeit Now. Whether or not this happens, I’d like to make sure Shields is fully capable of running in a PaaS. We’re most of the way there already. Shields doesn't have a lot of state. Since last year we’ve run our staging server on Heroku, and @platan recently ran a copy of Shields on Zeit for #1708.

We have a few bits of Shields which expect a persistent file system, and our production server relies on deploy commits. Deploy commits are possible with a PaaS, though they violate one of the twelve-factor principles and are more difficult to manage. Now supports secure, shared environment variables, and I'd like to make it possible to use them.

There are three items that come to mind:

I’d like to set up each of these in a way that doesn’t interfere with the current usage. That way things are in good shape, however we decide to proceed.

@paulmelnikow paulmelnikow added the core Server, BaseService, GitHub auth, Shared helpers label Aug 5, 2018
@paulmelnikow (Member, Author)

Compose, which I've used before, has Redis with persistence starting at $18.50/month. RedisLabs has it for $7/month, or if we want it highly available, $9/month. We have the budget for this.

Alternatively, given the nature of this data which could be infrequently read and written, we could use something like S3 or Google Cloud Storage, which would be much cheaper. We only need to write tokens when they change, and we could write stats once every 30 or 60 seconds.
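The "write stats every 30 or 60 seconds" idea could be sketched roughly like this. This is a minimal illustration, not Shields' actual code; `createThrottledFlusher` and its `flush` callback are hypothetical names, and the real backend could be S3 or Google Cloud Storage.

```javascript
// Sketch of throttled stats persistence: flush at most once per interval,
// and only when the data has actually changed since the last write.
function createThrottledFlusher(flush, intervalMs) {
  let lastFlush = 0;
  let dirty = false;
  return {
    // Call when the in-memory stats change.
    markDirty() {
      dirty = true;
    },
    // Call on a timer or after each request; writes only when the data
    // changed and the interval has elapsed. Returns true if it flushed.
    maybeFlush(now = Date.now()) {
      if (dirty && now - lastFlush >= intervalMs) {
        flush();
        lastFlush = now;
        dirty = false;
        return true;
      }
      return false;
    },
  };
}
```

Because writes are gated on both a dirty flag and an interval, an object store that charges per request stays cheap even under heavy badge traffic.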

paulmelnikow added a commit that referenced this issue Aug 9, 2018
This creates a new convenience class which consolidates all the Github initialization. It supports dependency injection and facilitates refactoring the persistence along the lines of #1205.

Also ref #1848
@paulmelnikow (Member, Author)

I discussed this a bit with @chris48s offline: we could base some of this on a storage abstraction layer, from which self-hosters and the production servers could each choose a backend. The most widely adopted of these I know of for Node is abstract-blob-store, which is not really an abstraction layer so much as an interface. Still, I think it could be a good choice for certain kinds of caching.

For the GitHub tokens it might be preferable to use a database that supports atomic inserts and deletes, so we don't risk losing data due to the order of writes. I'll proceed with that, unless anyone else has thoughts?
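One shape such a token-persistence interface might take is sketched below. The class and method names here are illustrative, not Shields' actual API. With a Redis backend, `noteTokenAdded` and `noteTokenRemoved` would map to the atomic `SADD` and `SREM` set commands, so concurrent writers cannot clobber each other's changes; the in-memory version only demonstrates the interface shape.

```javascript
// Illustrative token-persistence interface backed by an in-memory Set.
// A Redis implementation of the same interface would use SADD / SREM,
// which are atomic per command.
class InMemoryTokenPersistence {
  constructor() {
    this.tokens = new Set();
  }
  // Load persisted tokens at process startup.
  async initialize() {
    return Array.from(this.tokens);
  }
  // SADD with a real Redis backend.
  async noteTokenAdded(token) {
    this.tokens.add(token);
  }
  // SREM with a real Redis backend.
  async noteTokenRemoved(token) {
    this.tokens.delete(token);
  }
}
```

Because each mutation is a single self-contained operation, two server processes sharing the same store never overwrite each other's token list wholesale, which is the failure mode of a "read file, modify, write file" approach.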

paulmelnikow added a commit that referenced this issue Aug 19, 2018
Instead of saving tokens on a timer, save them when they change. Use EventEmitter to keep the components loosely coupled.

This is easier to reason about, much easier to test, and better supports adapting to backends which support atomic operations.

This replaces json-autosave, which was a bit difficult to read and also hard to test, with fsos, the lower-level utility it’s built on.

Ref: #1848
paulmelnikow added a commit that referenced this issue Aug 19, 2018
This is a fairly simple addition of a Redis-backed TokenPersistence. When GithubConstellation is initialized, it creates a FsTokenPersistence or a RedisTokenPersistence based on configuration. I've added tests of the Redis backend as an integration test and ensured the server starts up correctly when a `REDIS_URL` is configured.

Ref: #1848
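The backend selection could be sketched as a simple factory keyed on the environment. This is a simplified stand-in, not the actual GithubConstellation code; `REDIS_URL` comes from the commit message above, while `GITHUB_TOKEN_PATH` and the default file name are hypothetical.

```javascript
// Simplified sketch of choosing a persistence backend at startup:
// prefer Redis when REDIS_URL is configured, otherwise fall back to the
// file system. The returned descriptors stand in for
// RedisTokenPersistence / FsTokenPersistence instances.
function selectPersistenceBackend(env) {
  if (env.REDIS_URL) {
    return { backend: 'redis', url: env.REDIS_URL };
  }
  return {
    backend: 'fs',
    path: env.GITHUB_TOKEN_PATH || '.github-tokens.json',
  };
}
```

Keeping the choice in configuration means self-hosters who never set `REDIS_URL` see no behavior change, which matches the goal of not interfering with current usage.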
@espadrine (Member)

Something I always wanted to do with tokens was use a distributed convergent in-memory store; I started https://github.com/espadrine/jsonsync with that in mind.

@paulmelnikow (Member, Author)

I like that interface a lot. It's similar to what I've done in #1939, which could easily be abstracted to an interface like this.

A centralized / hub-and-spoke setup is simple. Each server only needs to know about the hub. Existing servers don't need to be reconfigured or updated when additional servers come online.

Certainly P2P negotiation / discovery could be implemented; it would be an interesting engineering challenge, but not one this project needs to solve.

It's simpler if the config can be specified in advance, immutably, avoiding extra state about other servers that needs to be kept up to date.

@paulmelnikow (Member, Author)

See discussion in #2621 and implementation in #2626 for a config overhaul which addresses PaaS.

@paulmelnikow paulmelnikow added operations Hosting, monitoring, and reliability for the production badge servers blocker PRs and epics which block other work and removed operations Hosting, monitoring, and reliability for the production badge servers labels Jan 4, 2019
@paulmelnikow (Member, Author)

This is getting really close! Analytics looks to be the last piece of code, and migrating some of the persistence into cloud Redis is the next step on the ops side.

@paulmelnikow paulmelnikow added the operations Hosting, monitoring, and reliability for the production badge servers label Jan 8, 2019
@paulmelnikow (Member, Author)

Maybe we can drop the old analytics code after #3093 is done.

@timothyis

@paulmelnikow if you have any questions about moving to Now, please don't hesitate to contact me or our team. 🙌

paulmelnikow added a commit that referenced this issue Mar 8, 2019
We're getting good results from #3093, so there's no reason to keep maintaining this code.

Ref #1848 #2068
@paulmelnikow (Member, Author)

Looks like this is done!
