
Add language identifiers to code blocks
khipp committed Feb 10, 2024
1 parent 58e3d23 commit f5e09cf
Showing 66 changed files with 137 additions and 137 deletions.
2 changes: 1 addition & 1 deletion docs/source/en/chat_templating.md
@@ -390,7 +390,7 @@ If your model expects those, they won't be added automatically by `apply_chat_template`
text will be tokenized with `add_special_tokens=False`. This is to avoid potential conflicts between the template and
the `add_special_tokens` logic. If your model expects special tokens, make sure to add them to the template!

```
```python
tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
```
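
As a quick aside, a template set this way can be sanity-checked by rendering a short conversation with `apply_chat_template` — a minimal sketch (the checkpoint name is only an example; any tokenizer works once `chat_template` is assigned):

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; any tokenizer works once chat_template is set.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there, how can I help?"},
]

# Render the conversation as plain text, appending the assistant prompt header.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```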

2 changes: 1 addition & 1 deletion docs/source/en/custom_models.md
@@ -310,7 +310,7 @@ Use `register_for_auto_class()` if you want the code files to be copied. If you
you don't need to call it. In cases where there's more than one auto class, you can modify the `config.json` directly using the
following structure:

```
```json
"auto_map": {
"AutoConfig": "<your-repo-name>--<config-name>",
"AutoModel": "<your-repo-name>--<config-name>",
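
Conversely, if you do call `register_for_auto_class()`, this mapping is written for you when the model is pushed. A minimal sketch, reusing the `Resnet*` classes defined earlier in that guide (the class names here are illustrative):

```python
# Registering the classes records them in "auto_map" and copies their code
# files to the Hub repository when you call save_pretrained / push_to_hub.
ResnetConfig.register_for_auto_class()
ResnetModel.register_for_auto_class("AutoModel")
ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification")
```
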
2 changes: 1 addition & 1 deletion docs/source/en/custom_tools.md
@@ -405,7 +405,7 @@ Assistant:
Therefore it is important that the examples of the custom `chat` prompt template also make use of this format.
You can overwrite the `chat` template at instantiation as follows.

```
```python
template = """ [...] """

agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
2 changes: 1 addition & 1 deletion docs/source/en/installation.md
@@ -72,7 +72,7 @@ pip install 'transformers[tf-cpu]'
M1 / ARM Users

You will need to install the following before installing TensorFlow 2.0
```
```bash
brew install cmake
brew install pkg-config
```
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/fastspeech2_conformer.md
@@ -41,7 +41,7 @@ You can run FastSpeech2Conformer locally with the 🤗 Transformers library.

1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and g2p-en:

```
```bash
pip install --upgrade pip
pip install --upgrade transformers g2p-en
```
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/layoutlmv2.md
@@ -50,7 +50,7 @@ this https URL.*

LayoutLMv2 depends on `detectron2`, `torchvision` and `tesseract`. Run the
following to install them:
```
```bash
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
python -m pip install torchvision tesseract
```
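
With those dependencies installed, a minimal sketch of preparing a document image for the model looks like the following (the checkpoint is the standard one on the Hub; `document.png` is a placeholder path):

```python
from PIL import Image
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")

# The processor calls Tesseract OCR to extract words and bounding boxes,
# then tokenizes everything into model-ready tensors.
image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
print(encoding.keys())
```
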
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/lilt.md
@@ -39,7 +39,7 @@ The original code can be found [here](https://github.com/jpwang/lilt).
- To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the [hub](https://huggingface.co/models?search=roberta), refer to [this guide](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional).
The script will result in `config.json` and `pytorch_model.bin` files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account):

```
```python
from transformers import LiltModel

model = LiltModel.from_pretrained("path_to_your_files")
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/musicgen.md
@@ -136,7 +136,7 @@ The same [`MusicgenProcessor`] can be used to pre-process an audio prompt that i
following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command
below:

```
```bash
pip install --upgrade pip
pip install datasets[audio]
```
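
Once 🤗 Datasets is available, a sketch of what that pre-processing can look like is shown below (the dataset, checkpoint, and text prompt are illustrative, not taken from this page):

```python
from datasets import load_dataset
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")

# Stream a single audio example instead of downloading the whole dataset.
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

# Pass the audio prompt (and an optional text description) through the processor.
inputs = processor(
    audio=sample["array"],
    sampling_rate=sample["sampling_rate"],
    text=["80s blues track with groovy saxophone"],
    padding=True,
    return_tensors="pt",
)
```
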
2 changes: 1 addition & 1 deletion docs/source/en/model_doc/pop2piano.md
@@ -54,7 +54,7 @@ The original code can be found [here](https://github.com/sweetcocoa/pop2piano).
## Usage tips

* To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third party modules:
```
```bash
pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy
```
Please note that you may need to restart your runtime after installation.
2 changes: 1 addition & 1 deletion docs/source/en/perf_hardware.md
@@ -64,7 +64,7 @@ Next let's have a look at one of the most important aspects when having multiple

If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:

```
```bash
nvidia-smi topo -m
```

2 changes: 1 addition & 1 deletion docs/source/en/perf_train_cpu.md
@@ -38,7 +38,7 @@ IPEX releases follow PyTorch; to install via pip:
| 1.12 | 1.12.300+cpu |

Please run `pip list | grep torch` to get your `pytorch_version`, so you can pick the matching IPEX `version_name`.
```
```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
You can check the latest versions in [ipex-whl-stable-cpu](https://developer.intel.com/ipex-whl-stable-cpu) if needed.
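
After installation, IPEX can be switched on through the Trainer without touching the training loop — a minimal sketch, assuming you otherwise configure `TrainingArguments` as usual:

```python
from transformers import TrainingArguments

# use_ipex applies Intel Extension for PyTorch optimizations,
# bf16 enables mixed precision on CPUs that support it,
# and no_cuda keeps training on the CPU.
training_args = TrainingArguments(
    output_dir="./outputs",
    use_ipex=True,
    bf16=True,
    no_cuda=True,
)
```
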
12 changes: 6 additions & 6 deletions docs/source/en/perf_train_cpu_many.md
@@ -39,7 +39,7 @@ Wheel files are available for the following Python versions:
| 1.12.0 | |||||

Please run `pip list | grep torch` to get your `pytorch_version`.
```
```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```
where `{pytorch_version}` should be your PyTorch version, for instance 2.1.0.
@@ -59,13 +59,13 @@ Use this standards-based MPI implementation to deliver flexible, efficient, scal
oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.

for Intel® oneCCL >= 1.12.0
```
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

for Intel® oneCCL whose version < 1.12.0
```
```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```
@@ -154,7 +154,7 @@ This example assumes that you have:

The snippet below is an example of a Dockerfile that uses a base image that supports distributed CPU training and then
extracts a Transformers release to the `/workspace` directory, so that the example scripts are included in the image:
```
```dockerfile
FROM intel/ai-workflows:torch-2.0.1-huggingface-multinode-py3.9

WORKDIR /workspace
@@ -286,7 +286,7 @@ set the same CPU and memory amounts for both the resource limits and requests.
After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed
to the cluster using:
```
```bash
kubectl create -f pytorchjob.yaml
```

@@ -304,7 +304,7 @@ transformers-pytorchjob-worker-3 1/1 Running
```

The logs for a worker can be viewed using `kubectl logs -n kubeflow <pod name>`. Add `-f` to stream the logs, for example:
```
```bash
kubectl logs -n kubeflow transformers-pytorchjob-worker-0 -f
```

6 changes: 3 additions & 3 deletions docs/source/en/perf_train_gpu_many.md
@@ -140,7 +140,7 @@ Here is the benchmarking code and outputs:

**DP**

```
```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
@@ -151,7 +151,7 @@ python examples/pytorch/language-modeling/run_clm.py \

**DDP w/ NVlink**

```
```bash
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
@@ -162,7 +162,7 @@ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \

**DDP w/o NVlink**

```
```bash
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
2 changes: 1 addition & 1 deletion docs/source/en/perf_train_gpu_one.md
@@ -201,7 +201,7 @@ of 23 bits precision it has only 10 bits (same as fp16) and uses only 19 bits in
you can use the normal fp32 training and/or inference code and by enabling tf32 support you can get up to 3x throughput
improvement. All you need to do is to add the following to your code:

```
```python
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
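
If you train through the `Trainer` API rather than a raw PyTorch loop, the same switch is exposed as a `TrainingArguments` flag — a minimal sketch, assuming Ampere or newer hardware:

```python
from transformers import TrainingArguments

# tf32=True sets torch.backends.cuda.matmul.allow_tf32 and
# torch.backends.cudnn.allow_tf32 for you before training starts.
training_args = TrainingArguments(output_dir="./outputs", tf32=True)
```
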
2 changes: 1 addition & 1 deletion docs/source/en/tasks/video_classification.md
@@ -483,7 +483,7 @@ You can also manually replicate the results of the `pipeline` if you'd like.
Now, pass your input to the model and return the `logits`:
```
```py
>>> logits = run_inference(trained_model, sample_test_video["video"])
```
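
From there, reading the prediction off the logits follows the usual pattern — a short sketch that assumes `trained_model` exposes the standard `config.id2label` mapping:

```py
>>> # Pick the highest-scoring class and map it back to a readable label.
>>> predicted_class_idx = logits.argmax(-1).item()
>>> print("Predicted class:", trained_model.config.id2label[predicted_class_idx])
```
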
2 changes: 1 addition & 1 deletion docs/source/fr/installation.md
@@ -74,7 +74,7 @@ For Mac M1 / ARM architectures

You must install the following tools before installing TensorFlow 2.0

```
```bash
brew install cmake
brew install pkg-config
```
2 changes: 1 addition & 1 deletion docs/source/it/perf_hardware.md
@@ -63,7 +63,7 @@ Let's now take a look at one of the most important aspects when you have multiple

If you use multiple GPUs, the way the cards are interconnected can have a huge impact on total training time. If the GPUs are on the same physical node, you can run:

```
```bash
nvidia-smi topo -m
```

2 changes: 1 addition & 1 deletion docs/source/ja/chat_templating.md
@@ -215,7 +215,7 @@ LLMs (Language Models) are smart enough to handle a variety of input formats

If you like this one, here it is in one-liner form, ready to copy into your code:

```
```python
tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"
```

2 changes: 1 addition & 1 deletion docs/source/ja/custom_tools.md
@@ -385,7 +385,7 @@ Assistant:

Therefore, it is important that the examples in your custom `chat` prompt template also use this format. You can overwrite the `chat` template at instantiation as follows.

```
```python
template = """ [...] """

agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
6 changes: 3 additions & 3 deletions docs/source/ja/main_classes/deepspeed.md
@@ -2202,7 +2202,7 @@ print(f"rank{rank}:\n in={text_in}\n out={text_out}")

Save it as `t0.py` and run it.

```
```bash
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
@@ -2226,13 +2226,13 @@ When submitting a PR that includes the DeepSpeed integration, note that the CircleCI PR CI setup

To run the DeepSpeed tests, run at least the following:

```
```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```

If you changed any of the modeling or PyTorch example code, also run the Model Zoo tests. The following runs all of the DeepSpeed tests:

```
```bash
RUN_SLOW=1 pytest tests/deepspeed
```

2 changes: 1 addition & 1 deletion docs/source/ja/perf_hardware.md
@@ -64,7 +64,7 @@ It is hard to say exactly what temperature a GPU should aim for under heavy load
When using multiple GPUs, the way the cards are interconnected can have a big impact on total training time. If the GPUs are on the same physical node, you can run:


```
```bash
nvidia-smi topo -m
```

2 changes: 1 addition & 1 deletion docs/source/ja/perf_torch_compile.md
@@ -42,7 +42,7 @@ model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")

### Image Classification with ViT

```
```python
from PIL import Image
import requests
import numpy as np
2 changes: 1 addition & 1 deletion docs/source/ja/perf_train_cpu.md
@@ -36,7 +36,7 @@ IPEX releases follow PyTorch; to install via pip
| 1.11 | 1.11.200+cpu |
| 1.10 | 1.10.100+cpu |

```
```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

6 changes: 3 additions & 3 deletions docs/source/ja/perf_train_cpu_many.md
@@ -38,7 +38,7 @@ Wheel files are available for the following Python versions:
| 1.11.0 | |||||
| 1.10.0 ||||| |

```
```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```

@@ -70,13 +70,13 @@ oneccl_bindings_for_pytorch is installed together with the MPI tool set


for Intel® oneCCL >= 1.12.0
```
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

for Intel® oneCCL whose version < 1.12.0
```
```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```
2 changes: 1 addition & 1 deletion docs/source/ja/perf_train_gpu_many.md
@@ -131,7 +131,7 @@ There are other differences between DP and DDP, but they are not relevant to this discussion
We used `NCCL_P2P_DISABLE=1` to disable the NVLink feature in the corresponding benchmark.


```
```bash

# DP
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
2 changes: 1 addition & 1 deletion docs/source/ja/perf_train_gpu_one.md
@@ -151,7 +151,7 @@ training_args = TrainingArguments(bf16=True, **default_args)

Ampere hardware uses a special data type called tf32. It has the same numeric range as fp32 (8 bits), but instead of 23 bits of precision it has only 10 bits (the same as fp16), and uses only 19 bits in total. It is "magical" in the sense that you can use your normal fp32 training and/or inference code, and by enabling tf32 support you can get up to a 3x throughput improvement. All you need to do is add the following code:

```
```python
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
2 changes: 1 addition & 1 deletion docs/source/ja/tasks/video_classification.md
@@ -490,7 +490,7 @@ def compute_metrics(eval_pred):
Next, pass your input to the model and return the `logits`:
```
```py
>>> logits = run_inference(trained_model, sample_test_video["video"])
```
2 changes: 1 addition & 1 deletion docs/source/ko/custom_tools.md
@@ -373,7 +373,7 @@ Assistant:
Therefore, it is important that the examples in your custom `chat` prompt template also use this format.
You can overwrite the `chat` template at instantiation as follows.

```
```python
template = """ [...] """

agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
2 changes: 1 addition & 1 deletion docs/source/ko/perf_hardware.md
@@ -64,7 +64,7 @@ It is hard to know the exact right temperature when a GPU overheats, but it is probably +

When using multiple GPUs, the way the GPUs are interconnected can have a big impact on total training time. If the GPUs are on the same physical node, you can check it as follows:

```
```bash
nvidia-smi topo -m
```

2 changes: 1 addition & 1 deletion docs/source/ko/perf_train_cpu.md
@@ -36,7 +36,7 @@ IPEX releases follow PyTorch. To install via pip:
| 1.11 | 1.11.200+cpu |
| 1.10 | 1.10.100+cpu |

```
```bash
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

6 changes: 3 additions & 3 deletions docs/source/ko/perf_train_cpu_many.md
@@ -37,7 +37,7 @@ rendered properly in your Markdown viewer.
| 1.11.0 | |||||
| 1.10.0 ||||| |

```
```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```
`{pytorch_version}` denotes your PyTorch version, for example 1.13.0.
@@ -57,13 +57,13 @@ PyTorch 1.12.1 should be used with oneccl_bindings_for_pytorch version 1.12.10
oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.

For Intel® oneCCL version 1.12.0 or later
```
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

For Intel® oneCCL versions below 1.12.0
```
```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```
2 changes: 1 addition & 1 deletion docs/source/ko/perf_train_gpu_many.md
@@ -133,7 +133,7 @@ There are other differences between DP and DDP, but they are not relevant to this discussion

We used `NCCL_P2P_DISABLE=1` in that benchmark to disable the NVLink feature.

```
```bash

# DP
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \