Replies: 3 comments 8 replies
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
The only thing I have seen that is close to what you're looking for is "distributed transcoding". It's not talked about much, but nonetheless people are still working on such things, for example distributing Plex Server transcoding tasks to a Raspberry Pi 4 cluster network using a master/slave structure. You may be able to find something by researching that.
-
Is there an example of a multi-server deployment, i.e. one producer and N workers? I can't seem to find an example to get an idea of what that looks like.
-
I am having a tough time finding any guidance on running multiple app instances all connected to the same distributed Redis database. Below is how I was planning to set up BullJS to run a transcoding service, and I simply want to know whether my understanding is correct. Forgive me in advance if this question is stupid; I don't have a lot of experience in backend architecture. The only examples I really found seem to demonstrate a scenario with one consumer instance running at a time.
Thought process / Scenario:
1.) The front end is notified that the video uploaded successfully and calls a microservice that adds the video transcoding job to the Redis queue (e.g. a BullJS producer) and sends back the job object containing the job ID, so the status can be tracked on the front end.
2.) I have five Node.js consumer instances (e.g. Docker containers) up and listening on the same Redis database for jobs. Assume all five instances aren't processing anything when the job is queued, concurrency is set to 1, and the rate limit is 10. The first instance to respond to Redis is awarded the job and begins processing, but keeps listening since its rate limit is 10 and it only has one job. Is that accurate? My big concern is that the same job will be processed by all five instances instead of one. Moreover, if 11 more jobs come in and the same instance keeps "beating" the others to the jobs, will that instance stop listening after the 10th job (one currently processing and 10 in that instance's queue), thus allowing another instance to pick up the job? <---- any more clarification on this step would be super helpful.
3.) My front end polls Redis using the job ID to update the loading indicator in the UI.
4.) The job finishes and the consumer instance resumes listening for more jobs. The front end sees this and continues on...
Any help would be greatly appreciated!!!