Commit 2d74c93

julien-c and mishig25 authored

No more magic comments (huggingface#1554)

* no more magic comments
* Also replace h1 by actual markdown
* nit: remove extra space
* Fix remaining <h1>s
* handling complex h1 heading
* Update README.md cc @mishig25

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>

1 parent ef9aed0 commit 2d74c93

File tree: 384 files changed, +115 −879 lines

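The sweep above touches 384 files with the same two mechanical changes: delete the `<!-- {blog_metadata} -->` / `<!-- {authors} -->` marker lines, and turn any literal `<h1>…</h1>` into a markdown `# …` heading. The actual tooling used for the commit is not shown; a minimal sketch of such a migration script (all names hypothetical) could look like this:

```python
import re
from pathlib import Path

# Matches a whole line holding either magic comment, so the line is dropped entirely.
MAGIC_COMMENTS = re.compile(
    r"^[ \t]*<!--\s*\{(?:blog_metadata|authors)\}\s*-->[ \t]*\n",
    re.MULTILINE,
)

# Matches a literal <h1>...</h1>, possibly spanning several lines ("complex h1 heading").
H1_TAG = re.compile(r"<h1>(.*?)</h1>", re.DOTALL)

def strip_magic_comments(text: str) -> str:
    """Remove the <!-- {blog_metadata} --> and <!-- {authors} --> marker lines."""
    return MAGIC_COMMENTS.sub("", text)

def h1_to_markdown(text: str) -> str:
    """Replace an HTML <h1> with an ATX '# ' heading, collapsing inner whitespace."""
    return H1_TAG.sub(lambda m: "# " + " ".join(m.group(1).split()), text)

def migrate(path: Path) -> None:
    """Apply both rewrites in place to one markdown file."""
    path.write_text(h1_to_markdown(strip_magic_comments(path.read_text())))

if __name__ == "__main__":
    for md in Path(".").glob("*.md"):
        migrate(md)
```

This sketch ignores edge cases the real commit had to handle (e.g. `<h1>` tags carrying attributes, or headings containing inline markup), so treat it as an illustration of the change, not a reproduction of it.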

1b-sentence-embeddings.md

Lines changed: 0 additions & 2 deletions

@@ -7,8 +7,6 @@ authors:
 
 # Train a Sentence Embedding Model with 1 Billion Training Pairs
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 **Sentence embedding** is a method that maps sentences to vectors of real numbers. Ideally, these vectors would capture the semantic of a sentence and be highly generic. Such representations could then be used for many downstream applications such as clustering, text mining, or question answering.
 

3d-assets.md

Lines changed: 0 additions & 2 deletions

@@ -7,8 +7,6 @@ authors:
 
 # Practical 3D Asset Generation: A Step-by-Step Guide
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 ## Introduction
 

4bit-transformers-bitsandbytes.md

Lines changed: 0 additions & 2 deletions

@@ -13,8 +13,6 @@ authors:
 
 # Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 LLMs are known to be large, and running or training them in consumer hardware is a huge challenge for users and accessibility.
 Our [LLM.int8 blogpost](https://huggingface.co/blog/hf-bitsandbytes-integration) showed how the techniques in the [LLM.int8 paper](https://arxiv.org/abs/2208.07339) were integrated in transformers using the `bitsandbytes` library.

Llama2-for-non-engineers.md

Lines changed: 0 additions & 2 deletions

@@ -6,8 +6,6 @@ authors:
 - user: abhishek
 ---
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 # Non-engineers guide: Train a LLaMA 2 chatbot
 

README.md

Lines changed: 2 additions & 6 deletions

@@ -29,19 +29,15 @@ authors:
 
 # Train your first Decision Transformer
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
-
 Your content here [...]
 ```
 
-The blog_metadata and authors HTML comments are meant to mark where in the file will be inserted the following UI elements:
+When published, the Hub will insert the following UI elements right after the blogpost's main header (i.e. the line that starts with a single `#`, aka. the `<h1>`):
+
 - "Published on [date]"
 - "Update on GitHub" button
 - avatars of the authors that were listed in authors.
 
-⚠️ Please keep the blog_metadata and authors comments exactly equal to those strings otherwise they won't be replaced.
-
 5️⃣ Then, you can add your content. It's markdown system so if you wrote your text on notion just control shift v to copy/paste as markdown.
 
 6️⃣ Modify `_blog.yml` to add your blogpost.
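Under the convention the README diff describes, a post needs only its YAML front matter followed by a plain markdown H1; the Hub injects the publication date, "Update on GitHub" button, and author avatars after that heading. An illustrative skeleton (title, thumbnail path, and username are placeholder values, not taken from the commit):

```markdown
---
title: "My Blogpost"
thumbnail: /blog/assets/my-blogpost/thumbnail.png
authors:
- user: my-username
---

# My Blogpost

Your content here [...]
```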

accelerate-deepspeed.md

Lines changed: 1 addition & 3 deletions

@@ -6,10 +6,8 @@ authors:
 - user: sgugger
 ---
 
-<h1>Accelerate Large Model Training using DeepSpeed</h1>
+# Accelerate Large Model Training using DeepSpeed
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 In this post we will look at how we can leverage the **[Accelerate](https://github.com/huggingface/accelerate)** library for training large models which enables users to leverage the ZeRO features of **[DeeSpeed](https://www.deepspeed.ai)**.
 
accelerate-large-models.md

Lines changed: 0 additions & 2 deletions

@@ -7,8 +7,6 @@ authors:
 
 # How 🤗 Accelerate runs very large models thanks to PyTorch
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 ## Load and run large models
 

accelerate-library.md

Lines changed: 0 additions & 2 deletions

@@ -7,8 +7,6 @@ authors:
 
 # Introducing 🤗 Accelerate
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 ## 🤗 Accelerate
 

accelerate-transformers-with-inferentia2.md

Lines changed: 0 additions & 2 deletions

@@ -8,8 +8,6 @@ authors:
 
 # Accelerating Hugging Face Transformers with AWS Inferentia2
 
-<!-- {blog_metadata} -->
-<!-- {authors} -->
 
 <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script>
 

accelerated-inference.md

Lines changed: 1 addition & 2 deletions

@@ -3,9 +3,8 @@ title: "How we sped up transformer inference 100x for 🤗 API customers"
 thumbnail: /blog/assets/09_accelerated_inference/thumbnail.png
 ---
 
-<h1>How we sped up transformer inference 100x for 🤗 API customers</h1>
+# How we sped up transformer inference 100x for 🤗 API customers
 
-<!-- {blog_metadata} -->
 
 🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich playground, easily accessible whichever framework you are working in.
 
