The following models are intended to be evaluated in bfloat16 precision instead of float16, according to their model cards on Hugging Face. We should change the default precision setting in their model handlers. Since V100 GPUs (pre-Ampere) lack native bfloat16 support, this also means these models cannot be evaluated on V100s.
deepseek-ai/deepseek-coder-6.7b-instruct
google/gemma-7b-it
meetkai/functionary-small-v2.2-FC
meetkai/functionary-medium-v2.2-FC
meetkai/functionary-small-v2.4-FC
meetkai/functionary-medium-v2.4-FC
NousResearch/Hermes-2-Pro-Llama-3-70B
NousResearch/Hermes-2-Pro-Mistral-7B
NousResearch/Hermes-2-Theta-Llama-3-8B
NousResearch/Hermes-2-Theta-Llama-3-70B
meta-llama/Meta-Llama-3-8B-Instruct
meta-llama/Meta-Llama-3-70B-Instruct
ibm-granite/granite-20b-functioncalling
THUDM/glm-4-9b-chat
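One way to implement the change is to keep a set of model IDs whose model cards specify bfloat16 and have the handlers look up their default dtype there. The names below (`BFLOAT16_MODELS`, `default_dtype`) are hypothetical, a minimal sketch of the idea rather than the repo's actual handler API:

```python
# Hypothetical sketch: model IDs whose Hugging Face model cards specify bfloat16.
# In a real handler, the returned string would be mapped to torch.bfloat16 /
# torch.float16 when loading the model (e.g. via the torch_dtype argument).
BFLOAT16_MODELS = {
    "deepseek-ai/deepseek-coder-6.7b-instruct",
    "google/gemma-7b-it",
    "meetkai/functionary-small-v2.2-FC",
    "meetkai/functionary-medium-v2.2-FC",
    "meetkai/functionary-small-v2.4-FC",
    "meetkai/functionary-medium-v2.4-FC",
    "NousResearch/Hermes-2-Pro-Llama-3-70B",
    "NousResearch/Hermes-2-Pro-Mistral-7B",
    "NousResearch/Hermes-2-Theta-Llama-3-8B",
    "NousResearch/Hermes-2-Theta-Llama-3-70B",
    "meta-llama/Meta-Llama-3-8B-Instruct",
    "meta-llama/Meta-Llama-3-70B-Instruct",
    "ibm-granite/granite-20b-functioncalling",
    "THUDM/glm-4-9b-chat",
}


def default_dtype(model_name: str) -> str:
    """Return the default precision for a model handler."""
    return "bfloat16" if model_name in BFLOAT16_MODELS else "float16"
```

A handler could then refuse to run a bfloat16 model on hardware without native support (bfloat16 requires Ampere-class GPUs or newer, so V100s would be rejected).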