Is Mamba already supported in the version of llama.cpp that this library uses? (https://github.com/ggerganov/llama.cpp/pull/5328)