Use waifu2x Docker image without Nvidia #444
This is Zhang Zexin's mailbox; I have received your email.
This repo does not support CPU inference. The pytorch version supports a CPU inference mode, but without a GPU it may be hundreds of times slower.
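For reference, a minimal sketch of what a CPU-mode CLI invocation might look like with the pytorch (nunif) version; the module path, option names and the `--gpu -1` convention are assumptions, so check `python3 -m waifu2x.cli -h` in the repo for the real interface:

```sh
# Hypothetical example: run the nunif waifu2x CLI on the CPU (--gpu -1 assumed).
python3 -m waifu2x.cli -i input.png -o output.png --gpu -1
```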
Many thanks for responding. What are my options? I'm very keen to upscale some old photos. My home set-up is an iPad and a NAS (with no GPU), where I run a number of Docker containers. Speed would be nice, but it's not crucial; is there a guide for idiots I can follow? The other options I have are an old laptop running Windows and an old desktop PC running Linux, but they both have ATI graphics cards by the looks of it. Can they be used?
I just added a Dockerfile to the nunif repo. Build the Docker image:
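A rough sketch of the build step, assuming you clone the nunif repo and build from its root; the image tag `nunif` is just an illustrative name, not the repo's documented one:

```sh
# Clone the repo and build the image from its Dockerfile.
# The tag "nunif" is an arbitrary name chosen for this example.
git clone https://github.com/nagadomi/nunif.git
cd nunif
docker build -t nunif .
```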
Run the web server in CPU mode:
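What running the container in CPU mode could look like, as an assumption-laden sketch: the port number (8812), the `waifu2x.web` module path and the `--gpu -1` flag are guesses to verify against the nunif README:

```sh
# Assumed: the web UI module is waifu2x.web, it listens on port 8812,
# and --gpu -1 selects CPU-only inference. Verify against the nunif README.
docker run --rm -p 8812:8812 nunif python3 -m waifu2x.web --gpu -1
```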
CLI command examples are described in the following link. For Docker, you need to mount the host volume where the images are stored (see the sketch below).
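For illustration, a hedged example of mounting a host directory into the container and converting one file from it; the paths, image tag and CLI options here are placeholders, not confirmed names:

```sh
# Mount the host folder containing the photos at /images inside the container,
# then run the (assumed) CLI against a file in that folder on the CPU.
docker run --rm -v /path/to/photos:/images nunif \
    python3 -m waifu2x.cli -i /images/input.png -o /images/output.png --gpu -1
```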
Thanks so much @nagadomi. I've not built an image from a Dockerfile before, so this is new ground for me, but I'm excited to try it.
Hi @nagadomi, a quick update to say I've created the image and the container too (thank you!!). FYI on the logs:
The first photo I tried was 1.2 MB and it gave me an error saying it was too large. The next one was 780 KB and looked like it was imported, but after that, for a long time, it just gave a blank page, with the tab saying
That CPU may not support AVX2.
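In case it helps, on Linux the CPU's instruction-set flags can be listed with standard tools (nothing project-specific assumed here); empty output means the CPU advertises no AVX/AVX2 support:

```sh
# Print any AVX-family flags the CPU advertises (avx, avx2, avx512*, ...).
grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u
```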
Hi, sure, here is the CPU information:
The problem is that PyTorch's prebuilt library uses AVX/AVX2 instructions, so it cannot run on devices that do not support AVX. However, it is probably even slower than normal CPU processing, so if you have a CPU that includes
Thanks so much, good to know. Sadly I don't have another device to use, but I'm happy to help you test out a non-AVX build. I'd love to give life to some of my old family photos. While I have you: it seems there's likely to be a clear performance versus image size (MB) correlation?
Conversion time depends on the resolution (pixel size) of the input image. If you do not have that many photos you wish to convert, I recommend using the web service I have made available. Also, I am currently developing a new photo model and will release it this month.
Thanks @nagadomi. More than happy to wait for your new release, as my focus is on finding tools to restore/enhance all the old family photos I have and will be scanning in over the coming weeks. It looks like I will eventually need to invest in a device that has a GPU. Can I confirm that the GPU has to be Nvidia, not another brand/make, e.g. Intel, ATI, AMD etc.? Also, what specification of machine should be used (CPU type/speed, memory, etc.)? I assume a Raspberry Pi is not viable?
PyTorch supports AMD GPUs (ROCm) and macOS (MPS), so it will work if the device is not too old. If you scan a photo, it may be enough to scan it as large as possible and then downsize it.
Hi @nagadomi. While I still want to honour the focus of this post and see how I can work with a non-GPU device (like my NAS), I'm looking at Nvidia cards on eBay to potentially build an entry-level PC suitable for waifu2x. Other than at least 4 GB of memory, is there anything else I should look for? Do I need a certain type of Nvidia chipset or version, or can I just use anything Nvidia-branded?
I am not a hardware consultant, so I cannot make a responsible recommendation. cuDNN (the GPU-accelerated library) requires a GPU with Compute Capability 5.0 or later. Note that GPUs are large and consume a lot of power, so there may not be enough space to fit one, or the power supply unit may not be sufficient.
I have created a Dockerfile that builds PyTorch from source code. Build the image:
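A hedged sketch of the build step; the Dockerfile name `Dockerfile.cpu` and the image tag `nunif-cpu` are hypothetical placeholders, so substitute the actual file name from the repo:

```sh
# Build the image from the source-build Dockerfile. File name and tag below
# are placeholders for this example, not the repo's confirmed names.
docker build -t nunif-cpu -f Dockerfile.cpu .
```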
Run the web server in CPU mode:
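Again only as a sketch, with the same caveats as above about the module name, port and `--gpu -1` flag:

```sh
# Start the web UI from the source-built image, CPU only.
docker run --rm -p 8812:8812 nunif-cpu python3 -m waifu2x.web --gpu -1
```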
When I tried it, it took 1 minute to convert one image with style=photo (old model) and 10 minutes with style=Artwork (new model).
Thanks. I'm still new to using Docker from the command line; I tried the above command, but it gave me an error message:
If I try it in another location (not tmp), it progresses a bit more, but still ends with an error (see below):
Sorry,
Hi,
Reading the instructions, it says the Docker image (https://hub.docker.com/r/nagadomi/waifu2x) requires [nvidia-docker](https://github.com/NVIDIA/nvidia-docker).
I appreciate it will not be as quick without GPU support, but I'd like to use waifu2x on my QNAP NAS, which does not have Nvidia/graphics support.