This isn't an easy feature to implement, if it's possible at all. The problem is that 16-bit floats are part of the quantization structures GGML uses, so the shaders need to be able to read them.
There might be a way to read the 16-bit floats as integers and reinterpret the bits manually (the cleanest way would be 8-bit ints, but that needs another extension, VK_KHR_8bit_storage). GLSL provides uintBitsToFloat for this kind of reinterpretation, but it only covers 32-bit floats; for 16-bit floats you'd need uint16BitsToHalf, which in turn requires the VK_KHR_shader_float16_int8 extension. I doubt that hardware which doesn't support 16-bit storage supports 16-bit shader arithmetic.
Assuming you find a way around this, you'd still have to implement fallbacks for every shader that has 16-bit inputs.
Prerequisites
Feature Description
Currently, only Vulkan devices that support 16-bit storage are supported. Add a fallback path for devices without it.
Motivation
To be able to run Vulkan-accelerated llama.cpp on devices that don't support 16-bit storage.
Possible Implementation
I can try to implement this. Could you please hint at where to start, what would have to be modified, and what the possible caveats are?