Memory Constrained environments #343
Yes, I understand... Do you have recurring tasks configured at all? You could skip the scheduler if not. Another question: are you starting Solid Queue via `bin/jobs`? I'll see if I get to bring back the async mode.
I do have recurring tasks.
For what it's worth, this was a very good call. That being said, per the readme, the dispatcher really isn't doing that much anymore. Unless you got big plans for the dispatcher, it seems the Supervisor has the maintenance task thread (or a second thread) that could be doing what's left of the dispatchers job. |
I think it depends on your setup 🤔 If you use delayed jobs, then the dispatcher will be making sure they get dispatched. This also applies to jobs automatically retried via Active Job with a delay (the default). We use them heavily and run several dispatchers separate from workers, but perhaps you don't? In that case, would it help to just not run the dispatcher? You can achieve that by not configuring it at all, but I imagine you do need it in some cases 🤔 And I imagine you need the concurrency maintenance task 😬 I think the async mode was a good idea for this case, TBH.
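To make that concrete, here's a minimal sketch of a `config/queue.yml` along the lines of Solid Queue's documented examples; the values are illustrative. Leaving out the `dispatchers` section entirely means the supervisor starts no dispatcher process:

```yml
# config/queue.yml (sketch; values are illustrative)
# No `dispatchers:` section, so the supervisor only forks workers.
production:
  workers:
    - queues: "*"
      threads: 3
      polling_interval: 0.1
```

The caveat, as noted above, is that without a dispatcher nothing moves delayed or retried jobs from scheduled to ready, so this only suits setups that never enqueue jobs with a delay.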
I'm starting to worry this last exchange is falling further and further into the "I didn't fully understand" side of things, again :-( I have to admit to being a bit flummoxed by the tension between the complexity required for a competent async job subsystem and the desire to fit things into the itty-bitty box of small-scale, affordable cloud deployments. Given where Solid Queue currently sits with memory utilization, I'm going to have a good think on the trade-offs between running just one worker and assuming it's going to recycle (on OOM) for almost every execution vs. just facing the fact that I have to push that memory dial to the right and eat the bill 😢
So sorry, this is my fault. I call it the async mode. Are you starting Solid Queue via `bin/jobs`?
That one I actually understood. Adding complexity to the code so it can be configured to run both ways seems like a big lift. I know you had it before, but every line of code in Solid Queue is a line that has to be supported, tested, and will eventually be used in an unexpected way. I would guess threads are OK for very light / IO-intensive workloads, but given the GVL I simply don't understand where the trade-offs are between threads and processes. I shouldn't have started this conversation without a better understanding.
I've switched to `bin/jobs`. I can't thank you enough for being willing to engage, and put up with my learning curve on some of these issues.
Oh, no, no, please, it's me who should thank you for your patience and help to make Solid Queue better! 🙏 ❤️ The reason I asked about `bin/jobs` is the eager vs. lazy loading of the app there.
I'll look into the eager vs. lazy loading trade-offs. Thanks for that. Once I get worker recycling finished, I'll have more suggestions to share. For example, `SolidQueue.on_start { GC.auto_compact = true }` helps, and the setting is shared between forks.
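As a sketch, that hook can live in an initializer (file name assumed); since the supervisor forks its workers, a setting applied in `on_start` is inherited by every forked process:

```ruby
# config/initializers/solid_queue.rb (hypothetical location)
SolidQueue.on_start do
  # Enable heap compaction during major GCs; the forked workers
  # inherit this setting from the supervisor process.
  GC.auto_compact = true
end
```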
There's also …
Oh, that looks interesting! Thank you.
What's the easiest way to do this? Would an empty `config/recurring.yml` work?
@hms If you're running in a super constrained environment, you could just use the Puma plugin that we use in development. Then everything is running off that single Rails process. Just make sure you keep `WEB_CONCURRENCY=1`. Another option is to stop getting fleeced by cloud providers charging ridiculous prices for tiny hosts 😄. Rails 8 is actually about answering that question in the broad sense.
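For reference, a sketch of that single-process setup, assuming the plugin name from Solid Queue's development setup and an otherwise standard `config/puma.rb`:

```ruby
# config/puma.rb (sketch)
# Run Solid Queue's supervisor inside the Puma process, so one
# Rails process serves web requests and processes jobs.
plugin :solid_queue

# Keep a single Puma worker so the app isn't forked multiple times.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 1))
```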
@majkelcc yes! If you have no recurring tasks defined at all (the default), then the scheduler will be automatically disabled. Alternatively, if you're running jobs in more than one place and want to disable it in just one of them (this is what we do in HEY), then you can pass an option to skip recurring tasks when starting Solid Queue there.
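A sketch of what `config/recurring.yml` looks like, with a made-up task for illustration; removing every entry for an environment (or the file itself) leaves no recurring tasks defined, so the scheduler never starts:

```yml
# config/recurring.yml (sketch; task name and job class are made up)
production:
  periodic_cleanup:
    class: CleanupJob
    schedule: every hour
# With no tasks defined for the environment, the scheduler is
# automatically disabled.
```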
Oh, how I hate Heroku and the games they play with radically inappropriate "starter" resource sizing (you think Apple is bad), all in an effort to prop up their already insanely high prices and force me into upgrades (does that make them "insanely high²" prices?). And yes, I'm very jealous of your new monster Dells and the fact you got off the treadmill (want to rent me a small slice for something I can afford...). But as a solo developer who is extremely grateful for the technical compression the Rails community and you have delivered over the years, I cannot put a price on the value of A) not having to worry about anything DevOps, and B) the comfort of 10+ years of using a system and feeling like you know all of its corners. I'm very much crossing my fingers that Rails 8 reduces the moving parts enough that the learning curve of a new deployment strategy becomes within reach.
@hms You're the ideal target for the progress we're bringing to the deployment story in Rails 8. Stay tuned for Rails World! But in the meantime, I'd try the Puma plugin approach.
@dhh In my case, I have at least split the web server from the Solid Queue environments, so I'm living large with 512MB ×2. I'm just bristling at the fact that I have to go from $9 to $50 a month to double that memory.
Highway robbery. Selling 512MB instances in 2024 is something. |
@rosa
At the risk of you crafting a voodoo doll of me and using it every time I reach out... Without knowing your design criteria and objectives, I'm at risk of asking poor questions or making bad suggestions, but here goes anyway... I'm going to apologize in advance for being "That Guy".
With the new v0.9 release (no tasks, nothing run), a fresh startup of Solid Queue shows the following memory footprint (macOS):
Once the jobs actually do something of value, the worker reliably grows to 200MB plus (I'm looking at you, Active Record...). For those of us running on cloud services and a shoestring budget, that's already tight. In my case, I run a second worker to isolate high-memory jobs so I can "recycle on OOM" while still servicing everything else via the other worker.
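For anyone wanting to replicate that split, here's a sketch of a `config/queue.yml` with made-up queue names: the second, sacrificial worker takes the memory-hungry queue, so an OOM kill only recycles it, not the worker serving everything else:

```yml
# config/queue.yml (sketch; queue names are hypothetical)
production:
  workers:
    - queues: [ default, mailers ]   # everything else
      threads: 3
    - queues: heavy_memory           # isolated high-memory jobs;
      threads: 1                     # an OOM kill recycles only this worker
```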
I can purchase my way into additional memory resources at a cost of 10x (literally) what I'm paying now, and it only goes up from there. So this issue is real and painful for me, and I would guess for a bunch of other folks running on shoestring budgets.
I'm sure there are use cases where larger deployments would want a dispatcher without a supervisor, so I think I understand the rationale for the current design. But it would be nice if there were a way, via configuration, to have a "SuperDispatcherVisor": have the supervisor take on the dispatcher's responsibilities and allow us to reclaim 110MB+.