Port fix for symbol encode error
yatarkan committed Jul 18, 2024
1 parent fcc309e commit 1d35466
Showing 4 changed files with 7 additions and 0 deletions.
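
The fix exports `PYTHONIOENCODING: "utf8"` in the Windows jobs so the Python interpreter opens stdout/stderr as UTF-8 instead of the console code page. A minimal sketch of the failure mode this guards against (an assumed reproduction, not code from this repository):

```python
# Assumed reproduction of the symbol encode error: on Windows, Python's
# stdout defaults to the console code page (cp1252 here), which cannot
# encode many characters an LLM sample may print.
text = "\u03a3 \u263a"  # Greek sigma and a smiley, both outside cp1252

try:
    text.encode("cp1252")  # what print() effectively does on a cp1252 console
except UnicodeEncodeError as err:
    print("cp1252 console:", err)

# UTF-8 can represent any code point; with PYTHONIOENCODING="utf8" in the
# job environment, the interpreter uses it for stdout/stderr instead.
print("utf8:", text.encode("utf8"))
```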
4 changes: 4 additions & 0 deletions .github/workflows/causal_lm_cpp.yml
@@ -191,6 +191,8 @@ jobs:
   cpp-greedy_causal_lm-windows:
     runs-on: windows-latest
+    env:
+      PYTHONIOENCODING: "utf8"
     defaults:
       run:
         shell: cmd
@@ -626,6 +628,8 @@ jobs:
   cpp-continuous-batching-windows:
     runs-on: windows-latest
+    env:
+      PYTHONIOENCODING: "utf8"
     defaults:
       run:
         shell: cmd
1 change: 1 addition & 0 deletions .github/workflows/genai_package.yml
@@ -80,6 +80,7 @@ jobs:
     runs-on: windows-latest
     env:
       CMAKE_BUILD_PARALLEL_LEVEL: null
+      PYTHONIOENCODING: "utf8"
     defaults:
       run:
         shell: cmd
1 change: 1 addition & 0 deletions .github/workflows/genai_python_lib.yml
@@ -62,6 +62,7 @@ jobs:
     runs-on: windows-latest
     env:
       CMAKE_BUILD_PARALLEL_LEVEL: null
+      PYTHONIOENCODING: "utf8"
     defaults:
       run:
         shell: cmd
1 change: 1 addition & 0 deletions samples/cpp/beam_search_causal_lm/README.md
@@ -18,6 +18,7 @@ optimum-cli export openvino --trust-remote-code --model TinyLlama/TinyLlama-1.1B
 `beam_search_causal_lm TinyLlama-1.1B-Chat-v1.0 "Why is the Sun yellow?"`
 
 To enable Unicode characters for Windows cmd, open `Region` settings from `Control panel`: `Administrative`->`Change system locale`->`Beta: Use Unicode UTF-8 for worldwide language support`->`OK`, then reboot.
+Also, you can enable UTF-8 mode by setting the environment variable `PYTHONIOENCODING="utf8"`.
 
 Discrete GPUs (dGPUs) usually provide better performance than CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM; for example, meta-llama/Llama-2-13b-chat-hf can benefit from running on a dGPU. Modify the source code to change the inference device to the GPU.

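As a quick check of the README guidance above (a hedged sketch: it assumes `PYTHONIOENCODING` was set before Python started, e.g. with `set PYTHONIOENCODING=utf8` in cmd, since the interpreter reads the variable at startup):

```python
# Hedged verification sketch: run after setting PYTHONIOENCODING=utf8 in
# the shell; the variable takes effect only for newly started interpreters.
import os
import sys

print("PYTHONIOENCODING =", os.environ.get("PYTHONIOENCODING"))
print("stdout encoding  =", sys.stdout.encoding)
# Succeeds under UTF-8; may raise UnicodeEncodeError on a legacy code page.
print("Why is the Sun yellow? \u2600")
```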
