
ref: Stop building local images for Sentry services #834

Merged — 19 commits from byk/ref/no-local-sentry-builds into master on Feb 4, 2021

Conversation

@BYK (Member) commented Jan 20, 2021

We used to build local images for Sentry services to be able to
include required plugins in the image. With this change we instead
do this in a custom entrypoint script and use the volume /data
to store the plugins permanently.

This should resolve many issues people have around building local
images and pushing them to places like private repositories or swarm
clusters.

This is not 100% compatible with the old way but it should still be
a mostly transparent change to many folks.

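For illustration, here is a minimal sketch of what such an entrypoint could look like. The file locations (`/etc/sentry/requirements.txt`, `/data/custom-packages`) and environment variable defaults are assumptions made for this sketch, not necessarily the exact paths used by the PR.

```bash
#!/usr/bin/env bash
# Hypothetical entrypoint sketch (paths are assumptions, not the exact ones in
# this PR): install plugins listed in a mounted requirements file into the
# persistent /data volume, then hand control to the real service command.
set -euo pipefail

REQUIREMENTS_FILE="${REQUIREMENTS_FILE:-/etc/sentry/requirements.txt}"

# Point pip's --user installs at the shared volume so installed plugins
# survive container restarts and are visible to every Sentry service
# mounting the same volume.
export PYTHONUSERBASE="${PYTHONUSERBASE:-/data/custom-packages}"

if [ -s "$REQUIREMENTS_FILE" ]; then
  echo "Installing additional requirements from $REQUIREMENTS_FILE ..."
  pip install --user -r "$REQUIREMENTS_FILE"
fi

# Run whatever command the container was started with (e.g. `sentry run web`).
exec "$@"
```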
@BYK (Member, Author) commented Jan 20, 2021

I'm planning to add the sentry-auth-oidc plugin to requirements.txt in the tests and ensure it is installed properly, to test this functionality.
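As a rough illustration of that test (the file path and service name here are assumptions, not the actual test code), the setup could look something like:

```bash
# Hypothetical test steps: add the plugin to the (assumed) requirements file,
# re-run the install, then confirm the package is visible inside the container.
echo "sentry-auth-oidc" >> sentry/requirements.txt
./install.sh
docker-compose run --rm web pip show sentry-auth-oidc
```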

@mattrobenolt (Contributor):

So on every start of every container, we're gonna pip install stuff? Sounds like a far-from-ideal solution for folks who want to install plugins.

Why don't we use ONBUILD instructions like I used to, or something similar?

Is there precedent anywhere in the community to do this? I know I, as a user, would be very unhappy with this.

@chadwhitacre (Member):

Whichever route we choose, should we document it under https://develop.sentry.dev/self-hosted/? We could even use that to drive the conversation with the community: write a doc for the current situation, put up a proposal for a change, and get feedback.

@BYK (Member, Author) commented Jan 20, 2021

@mattrobenolt

So on every start of every container, we're gonna pip install stuff? Sounds like a far-from-ideal solution for folks who want to install plugins.

It is not, as pip skips installation if the plugins are already installed. We can persist the plugins through the volume, so the startup overhead is minimal ATM. I haven't thought about the need to potentially ping the PyPI servers though; I'd probably need to find a solution to that too.
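A generic way to see that behaviour (not this PR's exact script; the `PYTHONUSERBASE` path is an assumed location on the persistent volume): repeated runs of the same install against an already-populated volume report the requirements as already satisfied instead of reinstalling them.

```bash
# Illustration only: with the install target persisted on the volume, repeated
# runs of the same pip command are close to a no-op.
export PYTHONUSERBASE=/data/custom-packages   # assumed location on the volume
pip install --user -r requirements.txt        # first run: downloads and installs
pip install --user -r requirements.txt        # later runs: "Requirement already satisfied: ..."
```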

Why don't we use ONBUILD instructions like I used to, or something similar?

The aim here is to remove the need to build and maintain custom local images; ONBUILD does not give us that.

Is there precedent anywhere in the community to do this? I know I, as a user, would be very unhappy with this.

Why would you be unhappy? Which part is the problem for you as a user?

@mattrobenolt (Contributor):

If I were administering this in any production environment, this would be extremely cumbersome and slow even with volumes. I'd just want to build an image.

So I'm a strong -1, but y'all do what you want. I don't even understand what problem this solves. I don't see how it solves anything except making it objectively a worse experience.

@BYK (Member, Author) commented Jan 20, 2021

this would be extremely cumbersome and slow even with volumes. I'd just want to build an image.

Have you actually tested this and seen the results?

I don't see how it solves anything except making it objectively a worse experience.

You'd need to define "objectively" for this to apply. I added you to get your feedback, and I understand you are against this change, but I'm not sure whether you are really looking into it or categorically rejecting it.

@mattrobenolt (Contributor):

It's slower because, objectively, you're doing more work on boot-up for every single container; even if the goal is mostly a no-op, pip isn't the fastest thing.

Managing a volume for each container is a mess in any automated environment. For example, in Kubernetes you can no longer use a Deployment; you now need a StatefulSet, which comes with a bunch of other caveats. Managing volumes is not trivial just for something like this.

@BYK (Member, Author) commented Jan 20, 2021

It's slower because, objectively, you're doing more work on boot-up for every single container; even if the goal is mostly a no-op, pip isn't the fastest thing.

I'm going to skip running pip based on a file hash.
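A minimal sketch of that idea (the marker path and requirements file location are assumptions): store a hash of the requirements file on the volume and only invoke pip when it changes.

```bash
# Hypothetical hash-based skip (paths are assumptions): only run pip when the
# requirements file has changed since the last successful install.
REQUIREMENTS_FILE=/etc/sentry/requirements.txt
HASH_MARKER=/data/custom-packages/.requirements-hash

current_hash="$(sha1sum "$REQUIREMENTS_FILE" | cut -d' ' -f1)"
if [ ! -f "$HASH_MARKER" ] || [ "$(cat "$HASH_MARKER")" != "$current_hash" ]; then
  pip install --user -r "$REQUIREMENTS_FILE"
  mkdir -p "$(dirname "$HASH_MARKER")"
  echo "$current_hash" > "$HASH_MARKER"
fi
```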

Managing a volume for each container is a mess in any automated environment. For example, in Kubernetes you can no longer use a Deployment; you now need a StatefulSet, which comes with a bunch of other caveats. Managing volumes is not trivial just for something like this.

This is using an already required and mounted volume, so it should not make anything harder. Also, our intended audience is not Kubernetes users.

@mattrobenolt (Contributor):

Ok.

@chadwhitacre (Member) left a comment

Tested locally, seems to work. 👍

arusa and others added 5 commits (February 1, 2021 22:48):

* On machines with 4 GB, the available memory is often a little lower than 4000 MB.
* Fixes the issue where we set an invalid option, `github-app.extended-permissions`, instead of the correct one, `github-login.extended-permissions`. Some people mentioned this warning earlier, but never clearly enough to point out that it was coming from our default settings suggestions.
* ref(requirements): Add min CPU requirement, relax soft RAM. Adds a minimum requirement of 4 CPU cores, as anything below will perform quite poorly even under lower loads. Relaxes the soft RAM requirement from 8000 MB to 7800 MB because, even with 8 GB of RAM installed, the system reserves some of it for itself and under-reports the amount.
* Pass on CI with soft limit.
@BYK enabled auto-merge (squash) on February 4, 2021 12:09
@BYK merged commit a1c0c1f into master on Feb 4, 2021
@BYK deleted the byk/ref/no-local-sentry-builds branch on February 4, 2021 12:16
@djmaze commented Feb 4, 2021

(Deleted my last post because it was not fully correct.)

Thanks for this change. Now the cron images for snuba and symbolicator could also be prebuilt and put on Docker Hub, right? Then we would no longer need any self-built images.

@BYK (Member, Author) commented Feb 4, 2021

Thanks for this change. Now the cron images for snuba and symbolicator could also be prebuilt and put on Docker Hub, right? Then we would no longer need any self-built images.

Exactly, and that's the plan. No promises for a timeline though.

@olexiyb commented Feb 15, 2021

I'm planning to add the sentry-auth-oidc plugin to requirements.txt in the tests and ensure it is installed properly, to test this functionality.

Try testing with sentry-ldap-auth added. You will notice that you need to install gcc and more packages to install this plugin.
As a result of your change, I see gcc being installed for many services, just because all of them use the same entrypoint.

@BYK (Member, Author) commented Feb 15, 2021

As a result of your change, I see gcc being installed for many services, just because all of them use the same entrypoint.

@olexiyb Hmm, I'll see what we can do about this. Thanks for raising the issue. Would you mind filing a new ticket with a bit more detail for easier reproduction? (Also for tracking purposes, so other people who may be experiencing the same issue can find it.)

@github-actions bot locked and limited conversation to collaborators on Mar 3, 2021