
add IPEX-XPU support for Llama2 model Inference #703

Draft: wants to merge 38 commits into main
Conversation

@faaany (Contributor) commented May 8, 2024

What does this PR do?

This PR enables Intel GPU support for Llama2 model inference in optimum-intel. Below is a code example:

import torch 
from transformers import AutoTokenizer, pipeline
from optimum.intel import IPEXModelForCausalLM

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, export=True)
pipe = pipeline("text-generation", model=model, device="xpu", tokenizer=tokenizer, do_sample=False, num_beams=1, use_cache=True)
results = pipe("He's a dreadful magician and")
print(results)
# [{'generated_text': "He's a dreadful magician and he's always getting things wrong. But he's got a heart of gold and he's always trying his best.\n\nThe other magicians in the circus are not very nice to him. They make fun of him and call him names. But Mr. Higglebottom doesn't let it get him down. He just keeps on trying and practicing his magic tricks.\n\nOne day, the circus is in town and Mr. Higglebottom is given the chance to perform in front of a big audience. He's nervous but he's determined to do his best. And to everyone's surprise, he actually manages to pull off a few good tricks! The audience cheers and claps for him and he feels proud of himself.\n\nFrom that day on, Mr. Higglebottom is no longer the laughing stock of the circus. He's respected and admired by all the other performers and he's finally found his place in the circus. He's learned that it's okay to make mistakes and that with hard work and determination, anything is possible."}]
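The example above hardcodes `device="xpu"`, which fails on machines without an Intel GPU. A defensive variant could detect the device first. The sketch below is not part of this PR; the helper name `pick_device` is hypothetical, and it assumes only that importing `intel_extension_for_pytorch` registers the XPU backend and that `torch.xpu.is_available()` reports device availability:

```python
# Hedged sketch: pick "xpu" when an Intel GPU backend is usable,
# otherwise fall back to "cpu". Helper name is illustrative only.
def pick_device() -> str:
    try:
        import torch
        try:
            # Importing IPEX registers the XPU backend on older torch versions.
            import intel_extension_for_pytorch  # noqa: F401
        except ImportError:
            pass
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return "xpu"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```

The returned string could then be passed as the `device` argument of `pipeline(...)` in place of the hardcoded `"xpu"`.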

Review threads (outdated, resolved): optimum/exporters/ipex/model_patcher.py, optimum/intel/ipex/modeling_base.py
@faaany faaany marked this pull request as ready for review May 9, 2024 01:42
@faaany (Contributor, Author) commented May 9, 2024

Hi @echarlaix, this PR is a joint effort by @jiqing-feng, @ganyi1996ppo, and me. Could you please review it? Thanks a lot!

@faaany (Contributor, Author) commented May 9, 2024

@yao-matrix

@faaany faaany closed this May 15, 2024
@faaany faaany reopened this May 26, 2024
@HuggingFaceDocBuilderDev commented

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@faaany faaany changed the title add IPEX-XPU support for Llama2 model Inference (greedy search) add IPEX-XPU support for Llama2 model Inference May 26, 2024
@faaany faaany closed this May 28, 2024
@faaany faaany reopened this May 28, 2024
@faaany faaany marked this pull request as draft May 28, 2024 08:54
6 participants