In convert_hf_to_gguf.py, when converting a MiniCPM model, the class below overrides modify_tensors and only transforms q_proj.weight and k_proj.weight. Why is this transformation needed? Or, as the comment says, "HF models permute some of the tensors, so we need to undo that" — where exactly does the HF model apply that permute? I can't quite piece together the full story. Any explanation would be appreciated.
@Model.register("MiniCPMForCausalLM")
class MiniCPMModel(Model):
    model_arch = gguf.MODEL_ARCH.MINICPM

    def set_gguf_parameters(self):
        block_count = self.hparams["num_hidden_layers"]
        self.gguf_writer.add_context_length(self.hparams["max_position_embeddings"])
        self.gguf_writer.add_embedding_length(self.hparams["hidden_size"])
        self.gguf_writer.add_block_count(block_count)
        self.gguf_writer.add_feed_forward_length(self.hparams["intermediate_size"])
        self.gguf_writer.add_rope_dimension_count(self.hparams["hidden_size"] // self.hparams["num_attention_heads"])
        self.gguf_writer.add_head_count(self.hparams["num_attention_heads"])
        self.gguf_writer.add_head_count_kv(self.hparams["num_key_value_heads"])
        self.gguf_writer.add_layer_norm_rms_eps(self.hparams["rms_norm_eps"])
        self.gguf_writer.add_file_type(self.ftype)

    def set_vocab(self):
        self._set_vocab_llama_hf()

    def _reverse_hf_permute(self, weights: Tensor, n_head: int, n_kv_head: int | None = None) -> Tensor:
        if n_kv_head is not None and n_head != n_kv_head:
            n_head //= n_kv_head
        return (
            weights.reshape(n_head, 2, weights.shape[0] // n_head // 2, *weights.shape[1:])
            .swapaxes(1, 2)
            .reshape(weights.shape)
        )

    # Why is this permute needed here???
    def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None) -> Iterable[tuple[str, Tensor]]:
        del bid  # unused
        n_head = self.hparams["num_attention_heads"]
        n_kv_head = self.hparams.get("num_key_value_heads")

        # HF models permute some of the tensors, so we need to undo that
        if name.endswith(("q_proj.weight")):
            data_torch = self._reverse_hf_permute(data_torch, n_head, n_head)
        if name.endswith(("k_proj.weight")):
            data_torch = self._reverse_hf_permute(data_torch, n_head, n_kv_head)

        return [(self.map_tensor_name(name), data_torch)]
I also looked through the original minicpm_modeling.py and couldn't spot anything different there either.
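To see concretely what _reverse_hf_permute does, I traced it on a toy tensor (my own illustration, not code from the repo; assuming n_head=2 and head_dim=8, so 16 rows). Within each head it interleaves the rows of the first half with the rows of the second half, turning the per-head row order [0,1,2,3,4,5,6,7] into [0,4,1,5,2,6,3,7]:

    import torch

    # Standalone copy of the reshape logic from _reverse_hf_permute (same math, no class needed).
    def reverse_hf_permute(weights: torch.Tensor, n_head: int, n_kv_head: int | None = None) -> torch.Tensor:
        if n_kv_head is not None and n_head != n_kv_head:
            n_head //= n_kv_head
        return (
            weights.reshape(n_head, 2, weights.shape[0] // n_head // 2, *weights.shape[1:])
            .swapaxes(1, 2)
            .reshape(weights.shape)
        )

    # Toy stand-in for q_proj.weight with 16 rows and a single column:
    # row i holds the value i, so the output directly shows where each original row ends up.
    w = torch.arange(16).reshape(16, 1)

    print(reverse_hf_permute(w, n_head=2).flatten().tolist())
    # [0, 4, 1, 5, 2, 6, 3, 7, 8, 12, 9, 13, 10, 14, 11, 15]
    # -> within each head, rows from the first and second half are interleaved.

So the function clearly regroups the rows of each attention head, presumably undoing the grouping the quoted comment refers to — but I still can't find where that permutation is applied on the HF side in the first place.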