fix bug in tokenize_dataset_rows.py and infer.ipynb #125

Open · wants to merge 1 commit into base: master
Binary file added .DS_Store
Binary file not shown.
27 changes: 13 additions & 14 deletions infer.ipynb
```diff
@@ -134,26 +134,25 @@
 "    for idx, item in enumerate(instructions[:3]):\n",
 "        feature = format_example(item)\n",
 "        input_text = feature['context']\n",
-"        ids = tokenizer.encode(input_text)\n",
-"        input_ids = torch.LongTensor([ids])\n",
-"        out = model.generate(\n",
-"            input_ids=input_ids,\n",
-"            max_length=150,\n",
-"            do_sample=False,\n",
-"            temperature=0\n",
-"        )\n",
-"        out_text = tokenizer.decode(out[0])\n",
-"        answer = out_text.replace(input_text, \"\").replace(\"\\nEND\", \"\").strip()\n",
+"        input_ids = tokenizer.encode(input_text, return_tensors=\"pt\")\n",
+"        inputs = model.prepare_inputs_for_generation(input_ids)\n",
+"        for k,v in inputs.items():\n",
+"            if v is not None:\n",
+"                inputs[k] = v.to(\"cuda\")\n",
+"        outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)\n",
+"        out = outputs[0].tolist()[input_ids.size()[-1]:]\n",
+"        answer = tokenizer.decode(out)\n",
 "        item['infer_answer'] = answer\n",
-"        print(out_text)\n",
+"        print(input_text)\n",
+"        print(answer)\n",
 "        print(f\"### {idx+1}.Answer:\\n\", item.get('output'), '\\n\\n')\n",
 "        answers.append({'index': idx, **item})"
 ]
 },
 ],
 "metadata": {
  "kernelspec": {
-  "display_name": "venv",
+  "display_name": "Python 3.9.6 64-bit",
   "language": "python",
   "name": "python3"
  },
@@ -167,12 +166,12 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.8.12"
+  "version": "3.9.6"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
-   "hash": "25273a2a68c96ebac13d7fb9e0db516f9be0772777a0507fe06d682a441a3ba7"
+   "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
   }
  }
 },
```
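Since the notebook diff is shown as escaped JSON source strings, the new inference flow is easier to read as plain Python. A minimal sketch, assuming the THUDM/chatglm-6b checkpoint this repo targets; the `instructions` list and `format_example` stub are hypothetical stand-ins for objects defined in earlier notebook cells:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: THUDM/chatglm-6b, whose custom tokenizer exposes eop_token_id.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

# Hypothetical stand-ins for objects the notebook defines in earlier cells.
instructions = [{"instruction": "Write a short poem about spring.", "output": ""}]

def format_example(item):
    return {"context": f"Instruction: {item['instruction']}\nAnswer: "}

answers = []
with torch.no_grad():
    for idx, item in enumerate(instructions[:3]):
        feature = format_example(item)
        input_text = feature["context"]
        input_ids = tokenizer.encode(input_text, return_tensors="pt")
        # Let the model assemble its own attention/position inputs, then move them to the GPU.
        inputs = model.prepare_inputs_for_generation(input_ids)
        for k, v in inputs.items():
            if v is not None:
                inputs[k] = v.to("cuda")
        outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
        # Decode only the newly generated tokens, dropping the prompt prefix.
        out = outputs[0].tolist()[input_ids.size()[-1]:]
        item["infer_answer"] = tokenizer.decode(out)
        answers.append({"index": idx, **item})
```

Slicing `outputs[0]` by the prompt length replaces the old string-level `out_text.replace(input_text, "")` cleanup, which could fail whenever decoding did not reproduce the prompt verbatim.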
6 changes: 4 additions & 2 deletions tokenize_dataset_rows.py
```diff
@@ -9,13 +9,15 @@
 def preprocess(tokenizer, config, example, max_seq_length):
     prompt = example["context"]
     target = example["target"]
-    prompt_ids = tokenizer.encode(prompt, max_length=max_seq_length, truncation=True)
+    prompt_ids = tokenizer.encode(prompt, max_length=max_seq_length, truncation=True, return_attention_mask=False,
+                                  add_special_tokens=False)
     target_ids = tokenizer.encode(
         target,
         max_length=max_seq_length,
         truncation=True,
+        return_attention_mask=False,
         add_special_tokens=False)
-    input_ids = prompt_ids + target_ids + [config.eos_token_id]
+    input_ids = prompt_ids + [150001, 150004] + target_ids + [150005]
     return {"input_ids": input_ids, "seq_len": len(prompt_ids)}
```
Owner:

This change feels like it's equivalent to the original, isn't it? That is:

prompt_ids = tokenizer.encode(..., add_special_tokens=True) == tokenizer.encode(..., add_special_tokens=False) + [150001, 150004]
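A quick empirical check of this claim (a sketch; assumes the THUDM/chatglm-6b tokenizer at the revision this PR targets, where 150001/150004 are its gMASK and begin-of-sequence ids):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

prompt = "你好"
with_special = tokenizer.encode(prompt, add_special_tokens=True)
without_special = tokenizer.encode(prompt, add_special_tokens=False)

# True iff encoding with special tokens just appends the [gMASK]/BOS pair.
print(with_special == without_special + [150001, 150004])
```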

Owner:

Wouldn't magic numbers be a bit fragile here? I see the official repo has just updated the token ids again 🤣

Owner (@mymusise, Apr 7, 2023):

I looked at the official preprocessing code; it seems better to use the build_inputs_with_special_tokens it provides:

prompt_ids = tokenizer.encode(prompt, add_special_tokens=False)
target_ids = tokenizer.encode(target, add_special_tokens=False)
input_ids = tokenizer.build_inputs_with_special_tokens(prompt_ids, target_ids)



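Putting that last suggestion together, preprocess could delegate the special tokens to the tokenizer entirely. A minimal sketch, assuming the ChatGLM-6B tokenizer, whose build_inputs_with_special_tokens joins a prompt/target pair with its own gMASK, BOS, and end markers:

```python
def preprocess(tokenizer, config, example, max_seq_length):
    # config is kept for signature compatibility; the special ids now come from the tokenizer.
    prompt = example["context"]
    target = example["target"]
    prompt_ids = tokenizer.encode(prompt, max_length=max_seq_length,
                                  truncation=True, add_special_tokens=False)
    target_ids = tokenizer.encode(target, max_length=max_seq_length,
                                  truncation=True, add_special_tokens=False)
    # No hard-coded 150001/150004/150005: the tokenizer inserts its own markers,
    # so this keeps working if the upstream checkpoint renumbers its tokens.
    input_ids = tokenizer.build_inputs_with_special_tokens(prompt_ids, target_ids)
    return {"input_ids": input_ids, "seq_len": len(prompt_ids)}
```

As in the PR version, seq_len counts only the prompt's content tokens, so downstream masking code must still account for the special tokens the tokenizer inserts after the prompt.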