How to generate text: using different decoding methods for language generation with Transformers #715
Labels
ai-platform
model hosts and APIs
base-model
llm base models not finetuned for chat
chat-templates
llm prompt templates for chat models
llm
Large Language Models
llm-completions
large language models for completion tasks, e.g. copilot
llm-experiments
experiments with large language models
llm-function-calling
Function Calling with Large Language Models
llm-inference-engines
Software to run inference on large language models
llm-quantization
All about Quantized LLM models and serving
llm-serving-optimisations
Tips, tricks and tools to speedup inference of large language models
New-Label
Choose this option if the existing labels are insufficient to describe the content accurately
How to generate text: using different decoding methods for language generation with Transformers
Description:
Published March 1, 2020 by Patrick von Platen (patrickvonplaten)
Note: Edited in July 2023 with up-to-date references and examples.
Introduction
In recent years, there has been an increasing interest in open-ended language generation thanks to the rise of large transformer-based language models trained on millions of webpages, including OpenAI's ChatGPT and Meta's LLaMA. The results on conditioned open-ended language generation are impressive: these models have been shown to generalize to new tasks, handle code, and take non-text data as input. Besides the improved transformer architecture and massive unsupervised training data, better decoding methods have also played an important role.
This blog post gives a brief overview of different decoding strategies and more importantly shows how you can implement them with very little effort using the popular transformers library!
All of the following functionalities can be used for auto-regressive language generation (here a refresher). In short, auto-regressive language generation is based on the assumption that the probability distribution of a word sequence can be decomposed into the product of conditional next word distributions:
P(w_{1:T} | W_0) = ∏_{t=1}^{T} P(w_t | w_{1:t-1}, W_0), with w_{1:0} = ∅,
and W_0 being the initial context word sequence. The length T of the word sequence is usually determined on-the-fly and corresponds to the timestep t=T at which the EOS token is generated from P(w_t | w_{1:t-1}, W_0).
We will give a tour of the currently most prominent decoding methods, mainly Greedy search, Beam search, and Sampling.
Let's quickly install transformers and load the model. We will use GPT2 in PyTorch for demonstration, but the API is 1-to-1 the same for TensorFlow and JAX.
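The exact setup snippet is not reproduced in this copy of the post; a minimal sketch along these lines should work (the later snippets below reuse the tokenizer, model, torch_device, and model_inputs defined here, and max_new_tokens=40 is an assumed generation length, not a value taken from the post):

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch_device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Setting pad_token_id to the EOS token id avoids a warning during generation.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2", pad_token_id=tokenizer.eos_token_id
).to(torch_device)
```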
Greedy Search
Greedy search is the simplest decoding method. It selects the word with the highest probability as its next word: w_t = argmax_w P(w | w_{1:t-1}) at each timestep t.
The following sketch shows greedy search.
Starting from the word "The", the algorithm greedily chooses the next word of highest probability "nice" and so on, so that the final generated word sequence is ("The", "nice", "woman") having an overall probability of 0.5 × 0.4 = 0.2.
In the following, we will generate word sequences using GPT2 on the context ("I", "enjoy", "walking", "with", "my", "cute", "dog"). Let's see how greedy search can be used in transformers:
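A sketch of such a call (greedy decoding is the default behaviour of generate when num_beams=1 and do_sample=False):

```python
# Encode the context that conditions generation.
model_inputs = tokenizer("I enjoy walking with my cute dog", return_tensors="pt").to(torch_device)

# Generate up to 40 new tokens greedily (the default decoding strategy).
greedy_output = model.generate(**model_inputs, max_new_tokens=40)
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
```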
Output:
Alright! We have generated our first short text with GPT2 😊. The generated words following the context are reasonable, but the model quickly starts repeating itself! This is a very common problem in language generation in general and seems to be even more so in greedy and beam search - check out Vijayakumar et al., 2016 and Shao et al., 2017.
The major drawback of greedy search though is that it misses high probability words hidden behind a low probability word as can be seen in our sketch above:
The word "has" with its high conditional probability of 0.9 is hidden behind the word "dog", which has only the second-highest conditional probability, so that greedy search misses the word sequence "The", "dog", "has".
Thankfully, we have beam search to alleviate this problem!
Beam Search
Beam search reduces the risk of missing hidden high probability word sequences by keeping the most likely num_beams of hypotheses at each time step and eventually choosing the hypothesis that has the overall highest probability. Let's illustrate with num_beams=2:
At time step 1, besides the most likely hypothesis ("The", "nice"), beam search also keeps track of the second most likely one ("The", "dog"). At time step 2, beam search finds that the word sequence ("The", "dog", "has") has a probability of 0.36, higher than the 0.2 of ("The", "nice", "woman"). Great, it has found the most likely word sequence in our toy example!
Beam search will always find an output sequence with at least as high a probability as greedy search, but it is not guaranteed to find the most likely output.
Let's see how beam search can be used in transformers. We set num_beams > 1 and early_stopping=True so that generation is finished when all beam hypotheses reached the EOS token.
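A sketch of the beam-search call (num_beams=5 matches the five beams discussed below; the generation length is the same assumed value as above):

```python
# Keep the 5 most likely hypotheses at each step and stop once all beams
# have reached the EOS token.
beam_output = model.generate(
    **model_inputs,
    max_new_tokens=40,
    num_beams=5,
    early_stopping=True,
)
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))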
Output:
While the result is arguably more fluent, the output still includes repetitions of the same word sequences. One of the available remedies is to introduce n-gram penalties (an n-gram being a sequence of n words), as introduced by Paulus et al. (2017) and Klein et al. (2017). The most common n-gram penalty makes sure that no n-gram appears twice by manually setting the probability of next words that could create an already seen n-gram to 0.
Let's try it out by setting no_repeat_ngram_size=2 so that no 2-gram appears twice:
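A sketch of the same beam-search call with the n-gram penalty added:

```python
# Forbid any 2-gram from appearing twice in the generated sequence.
beam_output = model.generate(
    **model_inputs,
    max_new_tokens=40,
    num_beams=5,
    no_repeat_ngram_size=2,
    early_stopping=True,
)
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
```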
Output:
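Beam search also lets us compare the top beams after generation and pick the hypothesis that fits our purpose best: in transformers, setting num_return_sequences to the number of highest-scoring beams to return (it must not exceed num_beams) gives back several hypotheses at once. A sketch of such a call, with the same assumed generation length as above:

```python
# Return the 5 highest-scoring beams instead of only the best one.
# num_return_sequences has to be <= num_beams.
beam_outputs = model.generate(
    **model_inputs,
    max_new_tokens=40,
    num_beams=5,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    early_stopping=True,
)
for i, beam_output in enumerate(beam_outputs):
    print(f"{i}: {tokenizer.decode(beam_output, skip_special_tokens=True)}")
```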
Output:
As can be seen, the five beam hypotheses are only marginally different from each other - which should not be too surprising when using only 5 beams.
In open-ended generation, a couple of reasons have been brought forward why beam search might not be the best possible option:
Beam search can work very well in tasks where the length of the desired generation is more or less predictable as in machine translation or summarization - see Murray et al. (2018) and Yang et al. (2018). But this is not the case for open-ended generation where the desired output length can vary greatly, e.g. dialog and story generation.
We have seen that beam search heavily suffers from repetitive generation. This is especially hard to control with n-gram- or other penalties in story generation since finding a good trade-off between inhibiting repetition and repeating cycles of identical n-grams requires a lot of finetuning.
As argued in Ari Holtzman et al. (2019), high-quality human language does not follow a distribution of high-probability next words. In other words, as humans, we want generated text to surprise us and not to be boring/predictable. The authors show this nicely by plotting the probability a model would assign to human text vs. what beam search does.
So let's stop being boring and introduce some randomness 🤪.
Sampling
In its most basic form, sampling means randomly picking the next word w_t according to its conditional probability distribution: w_t ~ P(w | w_{1:t-1}).
Taking the example from above, the following graphic visualizes language generation when sampling.
It becomes obvious that language generation using sampling is no longer deterministic. The word ("car") is sampled from the conditional probability distribution P(w|"The"), followed by sampling ("drives") from P(w|"The", "car").
In transformers, we set do_sample=True and deactivate Top-K sampling (more on this later) via top_k=0. In the following, we will fix the random seed for illustration purposes. Feel free to change the set_seed argument to obtain different results, or to remove it for non-determinism.
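A sketch of a pure sampling call (the seed value 42 is an arbitrary choice here):

```python
from transformers import set_seed

# Fix the random seed so the sampled output is reproducible.
set_seed(42)

# Pure sampling: do_sample=True activates sampling, top_k=0 deactivates
# Top-K filtering so the full distribution is used.
sample_output = model.generate(
    **model_inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=0,
)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```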
Output:
Interesting! The text seems alright - but when taking a closer look, it is not very coherent and doesn't sound like it was written by a human. That is the big problem when sampling word sequences: The models often generate incoherent gibberish, cf. Ari Holtzman et al. (2019).
A trick is to make the distribution P(w|w1:t−1) sharper (increasing the likelihood of high probability words and decreasing the likelihood of low probability words) by lowering the so-called temperature of the softmax.
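Concretely, temperature scaling divides the logits by T before the softmax (writing u_w here for the model's logit of word w, a notation not used in the original post):
P(w_t = w | w_{1:t-1}) = exp(u_w / T) / Σ_{w'} exp(u_{w'} / T),
so a temperature T < 1 sharpens the distribution and T > 1 flattens it.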
An illustration of applying temperature to our example from above could look as follows.
The conditional next word distribution of step t=1 becomes much sharper leaving almost no chance for word ("car") to be selected.
Let's see how we can cool down the distribution in the library by setting temperature=0.6:
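A sketch of the same sampling call with the temperature lowered (reusing the setup and seed from the snippets above):

```python
set_seed(42)

# temperature < 1 sharpens the next-word distribution before sampling.
sample_output = model.generate(
    **model_inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=0,
    temperature=0.6,
)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```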
Output:
OK. There are fewer weird n-grams and the output is a bit more coherent now! While applying temperature can make a distribution less random, in the limit, as the temperature approaches 0, temperature-scaled sampling becomes equal to greedy decoding and suffers from the same problems as before.
Top-K Sampling
Fan et al. (2018) introduced a simple, but very powerful sampling scheme called Top-K sampling. In Top-K sampling, the K most likely next words are filtered and the probability mass is redistributed among only those K next words. GPT2 adopted this sampling scheme, which was one of the reasons for its success in story generation.
We extend the range of words used for both sampling steps in the example above from 3 words to 10 words to better illustrate Top-K sampling.
Having set K=6, in both sampling steps we limit our sampling pool to 6 words. While the 6 most likely words, defined as V_top-K, encompass only about two-thirds of the whole probability mass in the first step, they include almost all of it in the second step. Nevertheless, we see that the method successfully eliminates the rather weird candidates ("not", "the", "small", "told") in the second sampling step.
Let's see how Top-K can be used in the library by setting top_k=50:
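A sketch of the Top-K call (again reusing the setup and seed from above):

```python
set_seed(42)

# Top-K sampling: keep only the 50 most likely next tokens and
# redistribute the probability mass among them before sampling.
sample_output = model.generate(
    **model_inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```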
Output:
Not bad at all! The text is arguably the most human-sounding text so far. One concern though with Top-K sampling is that it does not dynamically adapt the number of words that are filtered from the next word probability distribution P(w|w1:t−1). This can be problematic as some words might be sampled from a very sharp distribution (distribution on the right in the graph above), whereas others from a much more flat distribution (distribution on the left in the graph above).
In step t=1, Top-K eliminates the possibility of sampling ("people", "big", "house", "cat"), which seem like reasonable candidates. On the other hand, in step t=2 the method includes the arguably ill-fitted words ("down", "a") in the sample pool. Thus, limiting the sample pool to a fixed size K risks making the model produce gibberish for sharp distributions and limits its creativity for flat distributions. This intuition led Ari Holtzman et al. (2019) to create Top-p, or nucleus, sampling.
Top-p (nucleus) Sampling
Instead of sampling only from the most likely K words, Top-p sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability p. The probability mass is then redistributed among this set of words. This way, the size of the set of words (i.e. the number of words in the set) can dynamically increase and decrease according to the next word's probability distribution. Ok, that was very wordy, let's visualize.
Having set p=0.92, Top-p sampling picks the minimum number of words to exceed together p=92% of the probability mass, defined as Vtop-p. In the first example, this included the 9 most likely words, whereas it only has to pick the top 3 words in the second example to exceed 92%. Quite simple actually! It can be seen that it keeps a wide range of words where the next word is arguably less predictable, e.g. P(w|"The"), and only a few words when the next word seems more predictable, e.g. P(w|"The", "car").
Alright, time to check it out in transformers! We activate Top-p sampling by setting 0 < top_p < 1:
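A sketch of the Top-p call, using the p=0.92 from the illustration above:

```python
set_seed(42)

# Top-p (nucleus) sampling: sample from the smallest set of tokens whose
# cumulative probability exceeds 0.92; top_k=0 disables Top-K filtering.
sample_output = model.generate(
    **model_inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.92,
    top_k=0,
)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```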
Output:
Great, that sounds like it could have been written by a human. Well, maybe not quite yet.
While in theory Top-p seems more elegant than Top-K, both methods work well in practice. Top-p can also be used in combination with Top-K, which can avoid very low-ranked words while allowing for some dynamic selection.
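A sketch of combining both filters and returning several independently sampled sequences (the particular values top_k=50, top_p=0.95, and num_return_sequences=3 are illustrative choices, not prescribed by the post):

```python
set_seed(42)

# Combine Top-K and Top-p filtering and sample 3 independent sequences.
sample_outputs = model.generate(
    **model_inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)
for i, sample_output in enumerate(sample_outputs):
    print(f"{i}: {tokenizer.decode(sample_output, skip_special_tokens=True)}")
```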
Suggested labels
{'label-name': 'language-generation-decoding-methods', 'label-description': 'Overview of decoding methods for language generation with Transformers', 'confidence': 55.01}