Describe the feature you'd like to request
I'm planning to move my Nextcloud instance from my does-it-all old x86-64 home server to a dedicated ARMHF platform.
The ML runs are very expensive in terms of memory, and when the main NC server runs on an ARM device, it would be beneficial to be able to run the model in a container on a different device, e.g. another ARMHF device with a Rockchip SoC with an integrated NPU. For home users that would mean the main NC could run e.g. on an RPi5, and if the user wants ML features, they could buy such a Rockchip SBC and run recognize on it without overloading the main NC SBC.
I believe that would also have advantages in SMB or enterprise infrastructures, where the model could then run on a different machine with accelerator or GPU compute cards, or in some cloud ML service. Servers with lots of drives often don't have additional space for GPUs.
Additionally, there would be all the advantages of not having all your eggs in one basket, e.g. no NC downtime when the ML box needs maintenance, etc.
Since the model needs to be downloaded anyway, how about a config option "Use external ML" which requires a network address and tells recognize to use that external ML instance instead of looking for a locally downloaded model? That way the user can choose between the current behaviour and an external ML backend.
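To make the idea a bit more concrete, here is a rough sketch (in TypeScript, since the ML side of recognize runs on Node) of how the classifier side could pick between the local model and an external instance. The config value, the /classify endpoint and the response shape are all invented for illustration; none of this is existing recognize code:

```typescript
// Sketch only: "externalMlUrl", the /classify endpoint and the result shape are hypothetical.
import { readFile } from 'node:fs/promises';

interface ClassifyResult {
  labels: string[];       // e.g. ["cat", "outdoor"]
  confidences: number[];  // one score per label
}

// If an external ML URL is configured, POST the image there;
// otherwise fall back to the locally downloaded model.
async function classifyImage(
  imagePath: string,
  externalMlUrl: string | null,
  classifyLocally: (bytes: Buffer) => Promise<ClassifyResult>,
): Promise<ClassifyResult> {
  const bytes = await readFile(imagePath);
  if (externalMlUrl) {
    const response = await fetch(`${externalMlUrl}/classify`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/octet-stream' },
      body: bytes,
    });
    if (!response.ok) {
      throw new Error(`External ML instance answered with ${response.status}`);
    }
    return (await response.json()) as ClassifyResult;
  }
  return classifyLocally(bytes);
}
```

The point is simply that the switch could boil down to a single configured URL, with the current local behaviour as the default.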
Describe the solution you'd like
The solution would be to separate the ML from the recognize app logically and optionally run the ML in a separate container.
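Again just to illustrate (the path, port and payload format are arbitrary examples, not an existing API), the container side could be little more than a thin HTTP wrapper around whatever model backend the box provides:

```typescript
// Hypothetical sketch of the external ML container: a tiny HTTP service wrapping the model.
import { createServer } from 'node:http';

// Placeholder for the actual model backend (e.g. TensorFlow.js using the host's NPU/GPU).
async function runModel(image: Buffer): Promise<{ labels: string[]; confidences: number[] }> {
  // ...load the model once at startup and run inference here...
  return { labels: [], confidences: [] };
}

createServer(async (req, res) => {
  if (req.method !== 'POST' || req.url !== '/classify') {
    res.writeHead(404).end();
    return;
  }
  // Collect the raw image bytes from the request body.
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const result = await runModel(Buffer.concat(chunks));
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(result));
}).listen(8343); // example port, would be whatever the container exposes
```

Such a container could then be deployed on the Rockchip SBC or GPU machine independently of the main NC instance.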
Describe alternatives you've considered
The alternative is to leave things as they are, which keeps the simplicity but loses the advantages mentioned above.
Thank you for taking the time to open this issue with recognize. I know it's frustrating when software
causes problems. You have made the right choice to come here and open an issue to make sure your problem gets looked at
and, if possible, solved.
I try to answer all issues and if possible fix all bugs here, but it sometimes takes a while until I get to it.
Until then, please be patient.
Note also that GitHub is a place where people meet to make software better together. Nobody here is under any obligation
to help you, solve your problems or deliver on any expectations or demands you may have, but if enough people come together we can
collaborate to make this software better. For everyone.
Thus, if you can, you could also look at other issues to see whether you can help other people with your knowledge
and experience. If you have coding experience it would also be awesome if you could step up to dive into the code and
try to fix the odd bug yourself. Everyone will be thankful for extra helping hands!
One last word: If you feel, at any point, like you need to vent, this is not the place for it; you can go to the forum,
to Twitter or somewhere else. But this is a technical issue tracker, so please make sure to
focus on the tech and keep your opinions to yourself. (Also see our Code of Conduct. Really.)
I look forward to working with you on this issue.
Cheers 💙
Hello @sgofferj
Thank you for your feedback!
This is already being considered as part of #73.
It will take some time until I can get to this as part of my job at Nextcloud GmbH, however.