Split text samples to separate folders
Showing 29 changed files with 532 additions and 98 deletions.
File renamed without changes.
@@ -0,0 +1,14 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

find_package(OpenVINOGenAI REQUIRED PATHS
    "${CMAKE_BINARY_DIR}"  # Reuse the package from the build.
    ${OpenVINO_DIR}  # GenAI may be installed alongside OpenVINO.
)
add_executable(beam_search_causal_lm beam_search_causal_lm.cpp)
target_link_libraries(beam_search_causal_lm PRIVATE openvino::genai)
target_compile_features(beam_search_causal_lm PRIVATE cxx_std_17)
install(TARGETS beam_search_causal_lm
    RUNTIME DESTINATION samples_bin/
    COMPONENT samples_bin
    EXCLUDE_FROM_ALL)
@@ -0,0 +1,50 @@
# Text generation C++ sample that supports most popular models like LLaMA 2

This example showcases inference of text-generation Large Language Models (LLMs): `chatglm`, `LLaMA`, `Qwen` and other models with the same signature. The application doesn't have many configuration options, to encourage the reader to explore and modify the source code. From the command line interface it's only possible to change the inference device, to GPU for example. The sample features `ov::genai::LLMPipeline` and configures it to use multiple beam groups. There is also a Jupyter [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot) which provides an example of an LLM-powered chatbot in Python.
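
At its core the sample boils down to something like the sketch below. It is an abridged illustration of the beam-search configuration, not the exact `beam_search_causal_lm.cpp` source; error handling and the real defaults are omitted:

```cpp
#include <iostream>
#include <string>

#include "openvino/genai/llm_pipeline.hpp"

int main(int argc, char* argv[]) {
    std::string models_path = argv[1];                // <MODEL_DIR>
    ov::genai::LLMPipeline pipe(models_path, "CPU");  // change "CPU" to "GPU" to run on a GPU

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 100;
    config.num_beams = 15;            // total number of beams kept per step
    config.num_beam_groups = 3;       // beams are split into groups to encourage diversity
    config.diversity_penalty = 1.0f;  // penalizes tokens already chosen by other groups
    config.num_return_sequences = config.num_beams;

    std::cout << pipe.generate(argv[2], config) << '\n';  // "<PROMPT>"
}
```

Because `num_return_sequences` is set to the beam count, the pipeline returns several diverse completions per prompt rather than a single one.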

## Install OpenVINO

Install [OpenVINO Archives >= 2024.2](docs.openvino.ai/install). `master` and possibly the latest `releases/*` branch correspond to not-yet-released OpenVINO versions. https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/ can be used for early testing of these branches. `<INSTALL_DIR>` below refers to the extraction location.

## Install OpenVINOGenAI

Follow [../../../src/README.md](../../../src/README.md).

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

#### Linux/macOS

```sh
source <INSTALL_DIR>/setupvars.sh
python3 -m pip install --upgrade-strategy eager -r ../../requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

#### Windows

```bat
<INSTALL_DIR>\setupvars.bat
python -m pip install --upgrade-strategy eager -r requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

## Run

### Usage:
`beam_search_causal_lm <MODEL_DIR> "<PROMPT>"`

### Examples:

#### Linux/MacOS:
`./build/samples/cpp/beam_search_causal_lm/beam_search_causal_lm ./TinyLlama-1.1B-Chat-v1.0/ "Why is the Sun yellow?"`

#### Windows:
`.\build\samples\cpp\beam_search_causal_lm\Release\beam_search_causal_lm .\TinyLlama-1.1B-Chat-v1.0\ "Why is the Sun yellow?"`

To enable Unicode characters in the Windows cmd console, open `Region` settings from `Control panel`: `Administrative` -> `Change system locale` -> `Beta: Use Unicode UTF-8 for worldwide language support` -> `OK`, then reboot.

Discrete GPUs (dGPUs) usually provide better performance than CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.

See [../../../src/README.md#supported-models](../../../src/README.md#supported-models) for the list of supported models.
File renamed without changes.
@@ -0,0 +1,14 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

find_package(OpenVINOGenAI REQUIRED PATHS
    "${CMAKE_BINARY_DIR}"  # Reuse the package from the build.
    ${OpenVINO_DIR}  # GenAI may be installed alongside OpenVINO.
)
add_executable(chat_sample chat_sample.cpp)
target_link_libraries(chat_sample PRIVATE openvino::genai)
target_compile_features(chat_sample PRIVATE cxx_std_17)
install(TARGETS chat_sample
    RUNTIME DESTINATION samples_bin/
    COMPONENT samples_bin
    EXCLUDE_FROM_ALL)
@@ -0,0 +1,50 @@ | ||
# C++ chat_sample that supports most popular models like LLaMA 2 | ||
|
||
This example showcases inference of text-generation Large Language Models (LLMs): `chatglm`, `LLaMA`, `Qwen` and other models with the same signature. The application don't have many configuration options to encourage the reader to explore and modify the source code. For example, change the device for inference to GPU. The sample fearures `ov::genai::LLMPipeline` and configures it for the chat scenario. There is also a Jupyter [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot) which provides an example of LLM-powered Chatbot in Python. | ||
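
The chat scenario reduces to a loop around the pipeline between `start_chat()` and `finish_chat()` calls. The sketch below is an illustration of that structure, not the exact `chat_sample.cpp` source:

```cpp
#include <iostream>
#include <string>

#include "openvino/genai/llm_pipeline.hpp"

int main(int argc, char* argv[]) {
    ov::genai::LLMPipeline pipe(argv[1], "CPU");  // <MODEL_DIR>; "GPU" also works

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 100;

    pipe.start_chat();  // the pipeline accumulates the conversation history
    std::string prompt;
    while (std::getline(std::cin, prompt)) {
        std::cout << pipe.generate(prompt, config) << '\n';
    }
    pipe.finish_chat();  // drops the accumulated history
}
```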

## Install OpenVINO

Install [OpenVINO Archives >= 2024.2](docs.openvino.ai/install). `master` and possibly the latest `releases/*` branch correspond to not-yet-released OpenVINO versions. https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/ can be used for early testing of these branches. `<INSTALL_DIR>` below refers to the extraction location.

## Install OpenVINOGenAI

Follow [../../../src/README.md](../../../src/README.md).

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

#### Linux/macOS

```sh
source <INSTALL_DIR>/setupvars.sh
python3 -m pip install --upgrade-strategy eager -r ../../requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

#### Windows

```bat
<INSTALL_DIR>\setupvars.bat
python -m pip install --upgrade-strategy eager -r requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

## Run

### Usage:
`chat_sample <MODEL_DIR>`

### Examples:

#### Linux/MacOS:
`./build/samples/cpp/chat_sample/chat_sample ./TinyLlama-1.1B-Chat-v1.0/`

#### Windows:
`.\build\samples\cpp\chat_sample\Release\chat_sample .\TinyLlama-1.1B-Chat-v1.0\`

To enable Unicode characters in the Windows cmd console, open `Region` settings from `Control panel`: `Administrative` -> `Change system locale` -> `Beta: Use Unicode UTF-8 for worldwide language support` -> `OK`, then reboot.

Discrete GPUs (dGPUs) usually provide better performance than CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.

See [../../../src/README.md#supported-models](../../../src/README.md#supported-models) for the list of supported models.
File renamed without changes.
@@ -0,0 +1,14 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

find_package(OpenVINOGenAI REQUIRED PATHS
    "${CMAKE_BINARY_DIR}"  # Reuse the package from the build.
    ${OpenVINO_DIR}  # GenAI may be installed alongside OpenVINO.
)
add_executable(greedy_causal_lm greedy_causal_lm.cpp)
target_link_libraries(greedy_causal_lm PRIVATE openvino::genai)
target_compile_features(greedy_causal_lm PRIVATE cxx_std_17)
install(TARGETS greedy_causal_lm
    RUNTIME DESTINATION samples_bin/
    COMPONENT samples_bin
    EXCLUDE_FROM_ALL)
@@ -0,0 +1,50 @@
# Text generation C++ greedy_causal_lm that supports most popular models like LLaMA 2

This example showcases inference of text-generation Large Language Models (LLMs): `chatglm`, `LLaMA`, `Qwen` and other models with the same signature. The application doesn't have many configuration options, to encourage the reader to explore and modify the source code. From the command line interface it's only possible to change the inference device, to GPU for example. The sample features `ov::genai::LLMPipeline` and configures it to run the simplest deterministic greedy decoding algorithm. There is also a Jupyter [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot) which provides an example of an LLM-powered chatbot in Python.
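
A minimal illustration of the greedy setup (not the exact `greedy_causal_lm.cpp` source):

```cpp
#include <iostream>

#include "openvino/genai/llm_pipeline.hpp"

int main(int argc, char* argv[]) {
    ov::genai::LLMPipeline pipe(argv[1], "CPU");  // <MODEL_DIR>

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 100;
    // do_sample defaults to false, so the pipeline picks the single most
    // probable token at every step: deterministic greedy decoding.

    std::cout << pipe.generate(argv[2], config) << '\n';  // "<PROMPT>"
}
```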

## Install OpenVINO

Install [OpenVINO Archives >= 2024.2](docs.openvino.ai/install). `master` and possibly the latest `releases/*` branch correspond to not-yet-released OpenVINO versions. https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/ can be used for early testing of these branches. `<INSTALL_DIR>` below refers to the extraction location.

## Install OpenVINOGenAI

Follow [../../../src/README.md](../../../src/README.md).

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

#### Linux/macOS

```sh
source <INSTALL_DIR>/setupvars.sh
python3 -m pip install --upgrade-strategy eager -r ../../requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

#### Windows

```bat
<INSTALL_DIR>\setupvars.bat
python -m pip install --upgrade-strategy eager -r requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

## Run

### Usage:
`greedy_causal_lm <MODEL_DIR> "<PROMPT>"`

### Examples:

#### Linux/MacOS:
`./build/samples/cpp/greedy_causal_lm/greedy_causal_lm ./TinyLlama-1.1B-Chat-v1.0/ "Why is the Sun yellow?"`

#### Windows:
`.\build\samples\cpp\greedy_causal_lm\Release\greedy_causal_lm .\TinyLlama-1.1B-Chat-v1.0\ "Why is the Sun yellow?"`

To enable Unicode characters in the Windows cmd console, open `Region` settings from `Control panel`: `Administrative` -> `Change system locale` -> `Beta: Use Unicode UTF-8 for worldwide language support` -> `OK`, then reboot.

Discrete GPUs (dGPUs) usually provide better performance than CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.

See [../../../src/README.md#supported-models](../../../src/README.md#supported-models) for the list of supported models.
File renamed without changes.
@@ -0,0 +1,14 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

find_package(OpenVINOGenAI REQUIRED PATHS
    "${CMAKE_BINARY_DIR}"  # Reuse the package from the build.
    ${OpenVINO_DIR}  # GenAI may be installed alongside OpenVINO.
)
add_executable(multinomial_causal_lm multinomial_causal_lm.cpp)
target_link_libraries(multinomial_causal_lm PRIVATE openvino::genai)
target_compile_features(multinomial_causal_lm PRIVATE cxx_std_17)
install(TARGETS multinomial_causal_lm
    RUNTIME DESTINATION samples_bin/
    COMPONENT samples_bin
    EXCLUDE_FROM_ALL)
@@ -0,0 +1,50 @@
# Text generation C++ multinomial_causal_lm that supports most popular models like LLaMA 2

This example showcases inference of text-generation Large Language Models (LLMs): `chatglm`, `LLaMA`, `Qwen` and other models with the same signature. The application doesn't have many configuration options, to encourage the reader to explore and modify the source code. From the command line interface it's only possible to change the inference device, to GPU for example. The sample features `ov::genai::LLMPipeline` and configures it to run a random (multinomial) sampling algorithm. There is also a Jupyter [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot) which provides an example of an LLM-powered chatbot in Python.
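
A minimal illustration of the random-sampling setup; the parameter values here are placeholders, not the ones used in `multinomial_causal_lm.cpp`:

```cpp
#include <iostream>

#include "openvino/genai/llm_pipeline.hpp"

int main(int argc, char* argv[]) {
    ov::genai::LLMPipeline pipe(argv[1], "CPU");  // <MODEL_DIR>

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 100;
    config.do_sample = true;    // sample the next token instead of taking the argmax
    config.top_k = 30;          // keep only the 30 most probable tokens
    config.top_p = 0.9f;        // nucleus sampling: smallest set with cumulative probability 0.9
    config.temperature = 0.7f;  // values < 1 sharpen the distribution

    std::cout << pipe.generate(argv[2], config) << '\n';  // "<PROMPT>"
}
```

Lowering `temperature`, `top_k`, or `top_p` makes the output more deterministic; raising them increases variety.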

## Install OpenVINO

Install [OpenVINO Archives >= 2024.2](docs.openvino.ai/install). `master` and possibly the latest `releases/*` branch correspond to not-yet-released OpenVINO versions. https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/ can be used for early testing of these branches. `<INSTALL_DIR>` below refers to the extraction location.

## Install OpenVINOGenAI

Follow [../../../src/README.md](../../../src/README.md).

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

#### Linux/macOS

```sh
source <INSTALL_DIR>/setupvars.sh
python3 -m pip install --upgrade-strategy eager -r ../../requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

#### Windows

```bat
<INSTALL_DIR>\setupvars.bat
python -m pip install --upgrade-strategy eager -r requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

## Run

### Usage:
`multinomial_causal_lm <MODEL_DIR> "<PROMPT>"`

### Examples:

#### Linux/MacOS:
`./build/samples/cpp/multinomial_causal_lm/multinomial_causal_lm ./TinyLlama-1.1B-Chat-v1.0/ "Why is the Sun yellow?"`

#### Windows:
`.\build\samples\cpp\multinomial_causal_lm\Release\multinomial_causal_lm .\TinyLlama-1.1B-Chat-v1.0\ "Why is the Sun yellow?"`

To enable Unicode characters in the Windows cmd console, open `Region` settings from `Control panel`: `Administrative` -> `Change system locale` -> `Beta: Use Unicode UTF-8 for worldwide language support` -> `OK`, then reboot.

Discrete GPUs (dGPUs) usually provide better performance than CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.

See [../../../src/README.md#supported-models](../../../src/README.md#supported-models) for the list of supported models.
File renamed without changes.
@@ -0,0 +1,18 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

if(TARGET openvino_tokenizers)
    set(OPENVINO_TOKENIZERS_PATH $<TARGET_FILE:openvino_tokenizers>)
else()
    message(FATAL_ERROR "prompt_lookup_decoding_lm must be compiled as part of OpenVINOGenAI to have the path to openvino_tokenizers hardcoded.")
endif()
find_package(OpenVINO REQUIRED COMPONENTS Runtime)
find_package(TBB REQUIRED COMPONENTS tbb)
add_executable(prompt_lookup_decoding_lm prompt_lookup_decoding_lm.cpp)
target_link_libraries(prompt_lookup_decoding_lm PRIVATE openvino::runtime TBB::tbb)
target_compile_definitions(prompt_lookup_decoding_lm PRIVATE OPENVINO_TOKENIZERS_PATH="${OPENVINO_TOKENIZERS_PATH}")
target_compile_features(prompt_lookup_decoding_lm PRIVATE cxx_std_17)
install(TARGETS prompt_lookup_decoding_lm
    RUNTIME DESTINATION samples_bin/
    COMPONENT samples_bin
    EXCLUDE_FROM_ALL)
@@ -0,0 +1,52 @@
# prompt_lookup_decoding_lm C++ sample that supports most popular models like LLaMA 2

[Prompt Lookup decoding](https://github.com/apoorvumang/prompt-lookup-decoding) is an [assisted-generation](https://huggingface.co/blog/assisted-generation#understanding-text-generation-latency) technique where the draft model is replaced with simple string matching over the prompt to generate candidate token sequences. This method is highly effective for input-grounded generation (summarization, document QA, multi-turn chat, code editing), where there is high n-gram overlap between the LLM input (prompt) and the LLM output. The overlap could be entity names, phrases, or code chunks that the LLM directly copies from the input while generating the output. Prompt lookup exploits this pattern to speed up autoregressive decoding, yielding significant speedups with no effect on output quality.
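
The core idea fits in a few lines. The helper below is a hypothetical illustration (its name and signature are ours, not the sample's): it matches the trailing n-gram of the context against earlier positions and copies the continuation as draft tokens.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Find an earlier occurrence of the trailing ngram_size tokens and propose
// the tokens that followed it as draft candidates for the main model to verify.
std::vector<int64_t> propose_candidates(const std::vector<int64_t>& tokens,
                                        size_t ngram_size,
                                        size_t num_pred_tokens) {
    if (tokens.size() <= ngram_size)
        return {};
    const auto pattern_begin = tokens.end() - ngram_size;  // trailing n-gram
    for (auto it = tokens.begin(); it + ngram_size <= pattern_begin; ++it) {
        if (std::equal(pattern_begin, tokens.end(), it)) {
            const auto candidates_begin = it + ngram_size;
            const size_t available = static_cast<size_t>(tokens.end() - candidates_begin);
            const size_t count = std::min(num_pred_tokens, available);
            return std::vector<int64_t>(candidates_begin, candidates_begin + count);
        }
    }
    return {};  // no match: fall back to ordinary autoregressive decoding
}
```

The main model then scores all draft tokens in a single forward pass and keeps only the verified prefix, which is why the speedup comes with no change in output quality.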

This example showcases inference of text-generation Large Language Models (LLMs): `chatglm`, `LLaMA`, `Qwen` and other models with the same signature. The application doesn't have many configuration options, to encourage the reader to explore and modify the source code. Loading `openvino_tokenizers` into `ov::Core` enables tokenization. Run `optimum-cli` to generate IRs for the samples. There is also a Jupyter [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot) which provides an example of an LLM-powered chatbot in Python.
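
A rough sketch of how that loading step typically looks; this is an illustration rather than the exact sample code, and the IR file names assume the layout produced by `optimum-cli`:

```cpp
#include <string>

#include "openvino/openvino.hpp"

int main(int argc, char* argv[]) {
    const std::string models_path = argv[1];  // <MODEL_DIR>

    ov::Core core;
    // OPENVINO_TOKENIZERS_PATH is hardcoded as a compile definition by the
    // CMakeLists.txt above; the extension registers the tokenizer operations.
    core.add_extension(OPENVINO_TOKENIZERS_PATH);

    // With the extension loaded, the tokenizer and detokenizer IRs compile
    // like any other OpenVINO model.
    ov::InferRequest tokenizer = core.compile_model(
        models_path + "/openvino_tokenizer.xml", "CPU").create_infer_request();
    ov::InferRequest detokenizer = core.compile_model(
        models_path + "/openvino_detokenizer.xml", "CPU").create_infer_request();
}
```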

## Install OpenVINO

Install [OpenVINO Archives >= 2024.2](docs.openvino.ai/install). `master` and possibly the latest `releases/*` branch correspond to not-yet-released OpenVINO versions. https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/ can be used for early testing of these branches. `<INSTALL_DIR>` below refers to the extraction location.

## Install OpenVINOGenAI

Follow [../../../src/README.md](../../../src/README.md).

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

#### Linux/macOS

```sh
source <INSTALL_DIR>/setupvars.sh
python3 -m pip install --upgrade-strategy eager -r ../../requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

#### Windows

```bat
<INSTALL_DIR>\setupvars.bat
python -m pip install --upgrade-strategy eager -r requirements.txt
optimum-cli export openvino --trust-remote-code --weight-format fp16 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

## Run

### Usage:
`prompt_lookup_decoding_lm <MODEL_DIR> "<PROMPT>"`

### Examples:

#### Linux/MacOS:
`./build/samples/cpp/prompt_lookup_decoding_lm/prompt_lookup_decoding_lm ./TinyLlama-1.1B-Chat-v1.0/ "return 0;"`

#### Windows:
`.\build\samples\cpp\prompt_lookup_decoding_lm\Release\prompt_lookup_decoding_lm .\TinyLlama-1.1B-Chat-v1.0\ "return 0;"`

To enable Unicode characters in the Windows cmd console, open `Region` settings from `Control panel`: `Administrative` -> `Change system locale` -> `Beta: Use Unicode UTF-8 for worldwide language support` -> `OK`, then reboot.

Discrete GPUs (dGPUs) usually provide better performance than CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.

See [../../../src/README.md#supported-models](../../../src/README.md#supported-models) for the list of supported models.
1 change: 1 addition & 0 deletions
...usal_lm/cpp/prompt_lookup_decoding_lm.cpp → ...decoding_lm/prompt_lookup_decoding_lm.cpp
File renamed without changes.