Set up dedicated devices / Tensorflow serving / Use an ExApp as backend #73
Comments
This is a good idea, but it will take some time until I can get to it.
I love this project. It is very fun, and people start clicking on "cats" (as this is the most accurate label) to see how their cats looked two years ago.
One advantage is that you don't have to overload a Nextcloud container with extra packages.
Maybe I can do a little beta: perhaps a bash script that lets the classifier work over SSH (rsync).
I'm actually thinking about this, because I don't want to use a bash script. How do you use node/TensorFlow? I see that you pass - instead of a file path. Is it a huge performance decrease to spawn a TensorFlow process for every file? I should look into the code...
The classifier scripts accept input either via CLI args or as JSON via stdin.
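As an illustration, here is a minimal sketch of driving such a classifier script from Node.js, assuming a hypothetical script name `classifier_imagenet.js` and assuming it reads a JSON array of file paths from stdin when `-` is passed (the actual script names and input format in recognize may differ):

```js
// Minimal sketch: spawn a classifier script and feed it file paths as JSON via stdin.
// The script name "classifier_imagenet.js" and the exact input format are assumptions.
const { spawn } = require('node:child_process');

const files = ['/photos/cat1.jpg', '/photos/cat2.jpg'];

// Passing "-" tells the script to read its input from stdin instead of CLI args.
const child = spawn('node', ['classifier_imagenet.js', '-']);

// The classifier is assumed to print one JSON result per line on stdout.
child.stdout.pipe(process.stdout);

child.stdin.write(JSON.stringify(files));
child.stdin.end();
```

This is also why the per-file question above matters: spawning one process per file would pay the model-loading cost every time, while batching many paths into a single invocation amortizes it.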
Ideally we would have a TensorFlow Serving container and allow people to connect to it with recognize.
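For context, TensorFlow Serving ships as a Docker image and exposes a REST predict endpoint per model. A sketch of what a request from recognize could look like, assuming a model deployed under the name `imagenet` on the default REST port 8501 (the real models in recognize may expect differently preprocessed input tensors):

```js
// Sketch of a request against TensorFlow Serving's REST API.
// The model name "imagenet" and the input encoding are assumptions for illustration.
const payload = {
  instances: [
    [/* the image as a tensor, e.g. normalized pixel values */],
  ],
};

// TF Serving's REST predict endpoint follows the pattern /v1/models/<name>:predict.
fetch('http://localhost:8501/v1/models/imagenet:predict', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
})
  .then((res) => res.json())
  .then((result) => console.log(result.predictions));
```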
Any updates on this? I would really appreciate this feature being implemented, or maybe some pointers on how to start.
I can run the rest of Nextcloud on a 1GB VM.
Great idea. Upvote.
I think this would also be a good idea, e.g. for AIO, since libtensorflow does not seem to run in an Alpine container but could then run in another container that could use Debian as its base.
Setting up GPU access for a container is complicated. Anyone running Nextcloud in a container would probably want to send image analysis requests to a service running on the native OS.
It's not that hard, I believe.
Have you tried it? The documented steps are expert-admin-level stuff.
I would love for this to become a reality. I'm currently running NC in an Ubuntu VM for the sole purpose of using my Nvidia GPU with Recognize and Memories, and I absolutely hate managing it.
Oooh, I would love this. A decentralized recognize service would allow for a great deal of flexibility. I use a Nextcloud container (linuxserver) that is Alpine-based, and recognize stopped working relatively recently due to changes in libtensorflow; prior to that I had it working with some container customization scripts, but now that isn't working, which is frustrating. @marcelklehr I see that you are looking at TensorFlow Serving; does that mean you're thinking of having a Nextcloud recognize-proxy app that would interact with this new instance (probably a container...)?
I don't know what you mean by "expert admin level stuff". It's pretty simple...
@marcelklehr
IMO, a Node.js socket server could be run on the dedicated device, with the images sent to it tagged with an ID; preferably 128 to 512 at once, because that's the batch size at which modern graphics cards run at 100% utilisation rather than something like 7% when the images are not supplied in time or do not have enough pixels. To set up the socket, it would also be very handy to have a configuration file. The easiest would be to export the variables from a JavaScript file and import them in the actual program; better, though, would be CSV files or something like them. Remember, this is a simple draft (see the sketch below); perhaps someone would like to implement it. There are many solutions to the problem, and it would always be great if we could get an answer from marcelklehr so we know what he actually likes to have.
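To make the draft above concrete, a rough sketch of such a batching socket server in Node.js, assuming an invented newline-delimited JSON protocol where each message carries an `id` and an `image` field (the protocol, port, and batch size here are all illustrative, not an existing recognize interface):

```js
// Rough sketch of the batching socket server described above. The protocol
// (newline-delimited JSON with {id, image}), the port, and the batch size
// are illustrative assumptions.
const net = require('node:net');

const BATCH_SIZE = 128; // lower bound of the 128-512 range suggested above
let batch = [];

function runInference(items) {
  // Placeholder: hand the whole batch to the GPU-backed model here, then
  // write one {id, labels} result line back to each item's socket.
}

const server = net.createServer((socket) => {
  let buffer = '';
  socket.on('data', (chunk) => {
    buffer += chunk.toString();
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const { id, image } = JSON.parse(buffer.slice(0, newline));
      buffer = buffer.slice(newline + 1);
      batch.push({ id, image, socket });
      if (batch.length >= BATCH_SIZE) {
        runInference(batch);
        batch = [];
      }
    }
  });
});

server.listen(8787);
```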
Why would a Node.js socket server be needed? I think you mean the training device (running TF Serving) should fetch the jobs from the server, but that's not how TF Serving is meant to be used.
@Leptopoda I fully agree with you. BTW
I meant not to use TF Serving, but to build it on my own. But in case you want to use it, yes, that may be better.
Any update on this?
(Commenting to add myself to notifications.)
Same situation here. It looks to me like a huge pain to get TensorFlow running in the nextcloud-aio Alpine image. For me there would also be the benefit of using the power of my gaming PC.
Nextcloud GmbH is planning to move the classifiers in recognize to docker containers as part of the External Apps Ecosystem in the coming months.
Sounds great! As soon as it is available via the External Apps Ecosystem, it will also automatically be available in AIO after one enables the docker socket proxy in the AIO interface and installs the app from the Nextcloud apps page :)
Sorry to say, the plans have been scrapped due to lack of engineering time so far. It's still on our list of things that would be nice to have, but it's not scheduled any time soon :/ As mentioned in #1061, I'd be open to community contributions on this. My rough plan would be not to deviate too much from how the models are run right now: instead of the Classifier class executing node.js directly, there would be an option in the settings to call out to the recognize External App instead, or perhaps the external app could be auto-detected. The external app would do the same thing as the Classifier class, i.e. execute node.js and return the JSON line results, so they can be processed in the original recognize app. These are the current docs on how App API / External Apps work: cloud-py-api.github.io/app_api/index.html
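Purely as an illustration of that rough plan, a sketch of what the External App side could look like. The `/classify` route, the script name, and the omission of App API registration and authentication are all simplifying assumptions, not the real App API integration:

```js
// Illustrative sketch of an External App backend along the lines described
// above: it does the same thing as the Classifier class, i.e. executes
// node.js and streams the JSON line results back to recognize.
const http = require('node:http');
const { spawn } = require('node:child_process');

http.createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/classify') {
    res.writeHead(404);
    res.end();
    return;
  }
  // Assumed classifier script; "-" makes it read its JSON input from stdin.
  const child = spawn('node', ['classifier_imagenet.js', '-']);
  req.pipe(child.stdin);   // forward the request body (file list) to the classifier
  res.writeHead(200, { 'Content-Type': 'application/x-ndjson' });
  child.stdout.pipe(res);  // stream JSON line results back as they are produced
}).listen(3000);
```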
Note that you do have a little influence over what Nextcloud GmbH works on: we don't promise anything, but every release cycle we try to work on enhancements that get a lot of upvotes, so you may express your support by giving this issue an upvote.
Why do we not have a Debian/non-Alpine-based AIO container for more capable systems? I launched one locally for my system, but before I spend more time, I want to check whether there is even an appetite to launch something for users with more capable systems.
Thank you! So likely there's no appetite to move to Debian-based images anytime soon. Did I understand that right?
What base image would the team prefer for a container that can use the GPU to run the classifiers? For context: I am currently working on and testing out this feature.
I'd say
We've been using nvidia/cuda:12.2.2-cudnn8-devel-ubuntu22.04 for our AI ExApps so far.
Make an app that can be installed on other devices, plus a setting so that a device which has this app installed, is configured, and is online receives the model & pictures and can process them by itself. My server doesn't have a good GPU, so I would like to run it on my faster computer with an NVIDIA GeForce GTX 1660 Super (or, if a Raspberry Pi is online, send it some data so it is a bit faster, etc.).