Hi, it looked like there was a memory optimization applicable to 8 GB laptops (no swap requirement).
Is this also applicable to Android, specifically?
You mentioned a while ago that the fp16 512x512 version works on your 12 GB RAM Android phone.
Also, is the 256x256 model a fixed shape, and is it a quantized model?
Some of the metrics have not been updated for a while, so I suggest you actually test them yourself. All the models I have used are fp16, not quantized ones.
Thanks. My bad, the models were clearly labeled.
I used your diffusers conversion repository successfully. May I ask whether the VAE decoder currently provided via the cloud drive is the separate NAI VAE, or the one built into regular SD?
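For context, this is roughly how the VAE decoder can be pulled out of a diffusers pipeline and exported on its own. It is only a sketch: the model ID, latent shape, and output file name are placeholders, not necessarily what the conversion repository actually does.

```python
# Sketch: export only the VAE decoder of a diffusers pipeline to ONNX.
# "runwayml/stable-diffusion-v1-5" and the 64x64 latent shape are assumptions;
# a separate VAE (e.g. the NAI one) could be loaded by pointing from_pretrained at it instead.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.eval()

class VAEDecoder(torch.nn.Module):
    """Wrap decode() so torch.onnx.export sees a plain tensor-in/tensor-out module."""
    def __init__(self, vae):
        super().__init__()
        self.vae = vae

    def forward(self, latents):
        return self.vae.decode(latents).sample

dummy_latents = torch.randn(1, 4, 64, 64)  # 512x512 output image -> 64x64 latent grid
torch.onnx.export(
    VAEDecoder(vae),
    dummy_latents,
    "vae_decoder.onnx",
    input_names=["latents"],
    output_names=["image"],
    opset_version=14,
)
```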
The int8 and/or uint8 quantization process is easy enough with ONNX, but I don't know how to do this with .pt (pnnx).
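The ONNX path I mean is just onnxruntime's dynamic (weight-only) quantization helper; the file names below are placeholders:

```python
# Sketch: weight-only (dynamic) int8/uint8 quantization of an exported ONNX model.
# This shrinks the weights on disk; whether an int8 graph survives conversion to
# ncnn with good quality is a separate question.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="vae_decoder.onnx",
    model_output="vae_decoder_uint8.onnx",
    weight_type=QuantType.QUInt8,  # or QuantType.QInt8 for signed int8 weights
)
```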
Your ncnn implementation is currently as fast as the OpenVINO implementation, and it supports multiple sizes.
I am interested in quantization because your current APK release supports 6 GB devices, and 4 GB might become possible with a quantized model, or maybe some low-RAM optimization; I'm not sure (rough sketch of some runtime options below).
https://github.com/fengwang/Stable-Diffusion-NCNN
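Separately from weight quantization, ncnn also exposes runtime options that trade some speed for memory. A rough, purely illustrative sketch via the Python binding; the option names mirror ncnn::Option in the C++ build, and the .param/.bin file names are placeholders:

```python
# Sketch: memory-leaning ncnn runtime options (Python binding shown for brevity;
# the same fields exist on ncnn::Option in C++). File names are placeholders.
import ncnn

net = ncnn.Net()
net.opt.use_vulkan_compute = False        # CPU path; GPU staging buffers cost extra RAM
net.opt.use_fp16_packed = True            # keep packed data in fp16 where possible
net.opt.use_fp16_storage = True           # store weights/activations as fp16
net.opt.use_winograd_convolution = False  # Winograd is faster but needs more memory
net.opt.num_threads = 4

net.load_param("unet.param")
net.load_model("unet.bin")
```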