huggingface_bert_convert.py can't convert some key #152

Open
SeungjaeLim opened this issue Jul 3, 2023 · 0 comments
Labels: bug (Something isn't working)

Comments


SeungjaeLim commented Jul 3, 2023

Description

branch: v1.4
docker version: 22.12
huggingface_bert_convert.py cannot convert some keys:

python3 FasterTransformer/examples/pytorch/bert/utils/huggingface_bert_convert.py \
        -in_file bert-base-uncased/ \
        -saved_dir ${WORKSPACE}/all_models/bert/fastertransformer/1/ \
        -infer_tensor_para_size 1

Response:

=============== Argument ===============
saved_dir: /home/{my_name}/fastertransformer_backend/all_models/bert/fastertransformer/1/
in_file: bert-base-uncased/
training_tensor_para_size: 1
infer_tensor_para_size: 2
processes: 4
weight_data_type: fp32
========================================
Some weights of the model checkpoint at bert-base-uncased/ were not used when initializing BertModel: ['cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING] cannot convert key 'embeddings.word_embeddings.weight'
[WARNING] cannot convert key 'embeddings.position_embeddings.weight'
[WARNING] cannot convert key 'embeddings.token_type_embeddings.weight'
[WARNING] cannot convert key 'embeddings.LayerNorm.weight'
[WARNING] cannot convert key 'embeddings.LayerNorm.bias'
[WARNING] cannot convert key 'pooler.dense.weight'
[WARNING] cannot convert key 'pooler.dense.bias'
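
A minimal sketch (not part of the original report) to see which checkpoint keys fall outside the encoder layers. The key names come from the warnings above; the assumption that the converter only maps encoder.layer.* weights is mine, for illustration only.

# Sketch: group the BERT state_dict keys into encoder-layer keys and the rest.
# The "other" group matches the keys the converter warns about above.
# Assumes the transformers package and the local bert-base-uncased/ checkout.
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased/")
for name in model.state_dict():
    group = "encoder layer" if name.startswith("encoder.layer.") else "other (warned above)"
    print(f"{group}: {name}")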

Reproduced Steps

1.

git clone https://github.com/triton-inference-server/fastertransformer_backend.git
cd fastertransformer_backend
git checkout v1.4
export WORKSPACE=$(pwd)
export CONTAINER_VERSION=22.12
export TRITON_DOCKER_IMAGE=triton_with_ft:${CONTAINER_VERSION}
docker run -it --rm --gpus='device=1' --shm-size=1g --ulimit memlock=-1 -v ${WORKSPACE}:${WORKSPACE} -w ${WORKSPACE} ${TRITON_DOCKER_IMAGE} bash
# in docker
export WORKSPACE=$(pwd)
sudo apt-get install git-lfs
git lfs install
git lfs clone https://huggingface.co/bert-base-uncased # Download model from huggingface
git clone https://github.com/NVIDIA/FasterTransformer.git # To convert checkpoint
export PYTHONPATH=${WORKSPACE}/FasterTransformer:${PYTHONPATH}
python3 FasterTransformer/examples/pytorch/bert/utils/huggingface_bert_convert.py \
        -in_file bert-base-uncased/ \
        -saved_dir ${WORKSPACE}/all_models/bert/fastertransformer/1/ \
        -infer_tensor_para_size 1
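
A minimal sketch (not part of the original report) to check what the conversion actually wrote to -saved_dir despite the warnings. The path mirrors the -saved_dir argument above and assumes WORKSPACE is still exported as in the steps.

# Sketch: list the files produced under the converted model directory.
import os
import pathlib

saved_dir = pathlib.Path(os.environ["WORKSPACE"]) / "all_models/bert/fastertransformer/1"
for p in sorted(saved_dir.rglob("*")):
    if p.is_file():
        print(p.relative_to(saved_dir), p.stat().st_size, "bytes")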
SeungjaeLim added the bug (Something isn't working) label on Jul 3, 2023