The appearance of ChatGPT in 2022 completely changed how the world perceives artificial intelligence. The incredible performance of ChatGPT led to the rapid development of other powerful LLMs.
We could roughly say that ChatGPT is an upgraded version of GPT-3. But compared to previous GPT versions, this time OpenAI developers did not simply use more data or more complex model architectures. Instead, they designed a remarkable technique that enabled a breakthrough.
In this article, we will talk about RLHF, a fundamental algorithm implemented at the core of ChatGPT that pushes past the limits of human annotation for LLMs. Although the algorithm is based on proximal policy optimization (PPO), we will keep the explanation simple, without going into the details of reinforcement learning, which is not the focus of this article.
NLP development before ChatGPT
To better understand the context, let us recall how LLMs were developed in the past, before ChatGPT. Generally, LLM development consisted of two stages:
Pre-training consists of language modeling: a task in which a model tries to predict a hidden token in the context. The probability distribution produced by the model for the hidden token is then compared to the ground truth distribution to calculate the loss and perform backpropagation. In this way, the model learns the semantic structure of the language and the meaning behind words.
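As a minimal illustration of this objective (the vocabulary and probabilities below are made up), the cross-entropy loss for a single hidden token reduces to the negative log-probability the model assigned to the correct token:

```python
import numpy as np

# Toy vocabulary and a made-up predicted distribution for the hidden token.
vocab = ["the", "cat", "sat", "on", "mat"]
predicted_probs = np.array([0.10, 0.05, 0.70, 0.05, 0.10])  # model output after softmax

# Ground truth: the hidden token was "sat" (a one-hot distribution over the vocabulary).
target_index = vocab.index("sat")

# Cross-entropy between predicted and ground-truth distributions.
# With a one-hot target this is simply -log(probability of the correct token).
loss = -np.log(predicted_probs[target_index])
print(f"language modeling loss: {loss:.4f}")  # about 0.357
```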
If you want to learn more about the pre-training & fine-tuning framework, check out my article about BERT.
After that, the model is fine-tuned on a downstream task, which might involve different objectives: text summarization, text translation, text generation, question answering, etc. In many situations, fine-tuning requires a human-labeled dataset, which should ideally contain enough text samples to allow the model to generalize well and avoid overfitting.
This is where the limits of fine-tuning become apparent. Data annotation is usually a time-consuming task performed by humans. Let us take a question-answering task as an example. To construct training samples, we would need a manually labeled dataset of questions and answers. For every question, we would need an exact answer provided by a human. For instance:

In reality, to train an LLM, we would need millions or even billions of such (question, answer) pairs. This annotation process is very time-consuming and does not scale well.
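To make this concrete, here is a hypothetical sketch of what a handful of such annotated pairs could look like in code (the questions and answers are invented for illustration):

```python
# Hypothetical (question, answer) pairs that human annotators would have to write by hand.
qa_dataset = [
    {
        "question": "What is the capital of France?",
        "answer": "The capital of France is Paris.",
    },
    {
        "question": "Who wrote the novel '1984'?",
        "answer": "The novel '1984' was written by George Orwell.",
    },
]
```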
RLHF
Having understood the main problem, now is a good moment to dive into the details of RLHF.
If you have already used ChatGPT, you have probably encountered a situation in which ChatGPT asks you to choose the answer that better fits your initial prompt:

This information is actually used to continuously improve ChatGPT. Let us understand how.
First of all, it is important to notice that choosing the better answer between two options is a much simpler task for a human than providing an exact answer to an open question. The idea we are going to look at is based exactly on that: we want the human to simply choose the better of two candidate answers, and use those choices to create the annotated dataset.

Response generation
In LLMs, there are several possible ways to generate a response from the distribution of predicted token probabilities:
- Greedy decoding: given an output distribution p over tokens, the model always deterministically chooses the token with the highest probability.

- Sampling: given an output distribution p over tokens, the model randomly samples a token according to its assigned probability.

This second sampling strategy leads to more randomized model behavior, which allows the generation of diverse text sequences, as sketched below. For now, let us suppose that we generate many pairs of such sequences. The resulting dataset of pairs is labeled by humans: for every pair, a human is asked which of the two output sequences fits the input sequence better. The annotated dataset is used in the next step.
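Here is a minimal, illustrative sketch of the two strategies for a single decoding step, using a made-up token distribution (real LLMs repeat this token by token and usually add parameters such as temperature or top-k). Sampling the same prompt more than once is what gives us several candidate responses to compare:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Made-up output distribution p over a toy vocabulary for the next token.
vocab = ["Paris", "London", "Rome", "Berlin"]
p = np.array([0.55, 0.25, 0.15, 0.05])

# Strategy 1: greedy decoding -- always pick the most probable token.
greedy_token = vocab[int(np.argmax(p))]

# Strategy 2: sampling -- draw a token at random according to p.
# Sampling twice from the same prompt can give two different candidate responses,
# which is exactly what we need to build comparison pairs for human feedback.
sampled_tokens = rng.choice(vocab, size=2, p=p)

print("greedy:", greedy_token)
print("sampled candidates:", list(sampled_tokens))
```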
In the context of RLHF, the annotated dataset created in this way is called "human feedback".
Reward Model
After the annotated dataset is created, we use it to train a so-called "reward" model, whose goal is to learn to numerically estimate how good or bad a given answer is for an initial prompt. Ideally, we want the reward model to produce positive values for good responses and negative values for bad responses.
Speaking of the reward model, its architecture is exactly the same as the initial LLM, except for the last layer, where instead of outputting a text sequence, the model outputs a float value: an estimate for the answer.
It is important to pass both the initial prompt and the generated response as input to the reward model.
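The sketch below is an assumed, toy illustration of this design (not OpenAI's actual implementation): a transformer backbone like the language model's, but with the token-prediction head replaced by a single linear head that maps the last hidden state to one scalar reward. The prompt and response are concatenated into a single token sequence.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: transformer-like backbone plus a scalar head (illustrative only)."""

    def __init__(self, vocab_size: int = 1000, hidden_dim: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Instead of a vocabulary-sized output layer, a single scalar "reward head".
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -- the prompt and response concatenated.
        hidden = self.backbone(self.embedding(token_ids))
        # Use the representation of the last token to produce one float per sequence.
        return self.reward_head(hidden[:, -1, :]).squeeze(-1)

# Usage: reward for a (prompt + response) sequence of toy token ids.
model = RewardModel()
tokens = torch.randint(0, 1000, (1, 12))  # pretend these encode prompt + response
print(model(tokens))  # one reward estimate per input sequence
```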
Loss function
You might reasonably ask how the reward model can learn this regression task if there are no numerical labels in the annotated dataset. To address this, we are going to use an interesting trick: we will pass both a good and a bad answer through the reward model, which will ultimately output two different estimates (rewards).
Then we will cleverly construct a loss function that compares them relative to each other.
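Concretely, reward models in RLHF typically use the pairwise ranking loss L = -log σ(R₊ - R₋), where σ is the sigmoid function, R₊ is the reward of the response the human preferred, and R₋ is the reward of the other one. Here is a minimal sketch of that form, whose behavior matches the analysis that follows:

```python
import torch
import torch.nn.functional as F

def reward_loss(reward_better: torch.Tensor, reward_worse: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss: -log(sigmoid(R_better - R_worse))."""
    return -F.logsigmoid(reward_better - reward_worse)

# Plug in a few reward differences to see how the loss behaves.
for diff in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    loss = reward_loss(torch.tensor(diff), torch.tensor(0.0))
    print(f"R+ - R- = {diff:+.1f}  ->  loss = {loss.item():.3f}")
# Prints losses of roughly 3.05, 1.31, 0.69, 0.31, 0.05.
```

Note that when R₊ > R₋ the loss stays below ln 2 ≈ 0.69, which is exactly the bound discussed below.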

Let us plug in some argument values for the loss function and analyze its behavior. Below is a table with the plugged-in values:

We can immediately observe two interesting insights:
- If the difference between R₊ and R₋ is negative, i.e. the better response received a lower reward than the worse one, then the loss value grows in proportion to the reward difference, meaning that the model needs to be significantly adjusted.
- If the difference between R₊ and R₋ is positive, i.e. the better response received a higher reward than the worse one, then the loss is bounded within much lower values in the interval (0, 0.69), which indicates that the model is doing its job well at distinguishing good and bad responses.
A nice thing about using such a loss function is that the model learns appropriate rewards for generated texts on its own, and we (humans) do not have to explicitly evaluate every response numerically; we only provide a binary judgment: whether a given response is better or worse.
Training the original LLM
The trained reward model is then used to train the original LLM. For that, we can feed a series of new prompts to the LLM, which will generate output sequences. Then the input prompts, together with the output sequences, are fed to the reward model to estimate how good these responses are.
After producing numerical estimates, this information is used as feedback for the original LLM, which then performs weight updates. A very simple but elegant approach!

Most of the time, in this last step, a reinforcement learning algorithm is used to adjust the model weights (usually done with proximal policy optimization, PPO).
Even if it is not technically correct, if you are not familiar with reinforcement learning or PPO, you can roughly think of it as backpropagation, as in normal machine learning algorithms.
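Below is a deliberately simplified, assumed sketch of this feedback loop using toy stand-in networks and a plain REINFORCE-style update instead of full PPO (the real procedure adds several components, such as a KL penalty, that we are not covering here):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim, response_len = 50, 16, 4

# Toy stand-in for the LLM: maps a prompt token to logits over possible response tokens.
policy = nn.Sequential(nn.Embedding(vocab_size, emb_dim), nn.Linear(emb_dim, vocab_size))

# Toy stand-in for the trained reward model: scores a (prompt + response) token sequence.
reward_net = nn.Sequential(nn.Embedding(vocab_size, emb_dim), nn.Linear(emb_dim, 1))
for p in reward_net.parameters():
    p.requires_grad_(False)  # the reward model stays frozen during this stage

def reward(tokens: torch.Tensor) -> torch.Tensor:
    # Average per-token scores into one scalar reward (illustrative only).
    return reward_net(tokens).mean()

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(100):
    prompt = torch.randint(0, vocab_size, (1,))          # a new prompt
    logits = policy(prompt)                              # (1, vocab_size)
    dist = torch.distributions.Categorical(logits=logits)
    response = dist.sample((response_len,)).squeeze(-1)  # sampled response tokens
    r = reward(torch.cat([prompt, response]))            # score prompt + response

    # REINFORCE-style update: raise the log-probability of a response in proportion
    # to its reward (real RLHF uses PPO with additional terms instead).
    log_prob = dist.log_prob(response.unsqueeze(-1)).sum()
    loss = -(r.detach() * log_prob)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```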
Inference
During inference, only the original trained model is used. At the same time, the model can continuously be improved in the background by collecting user prompts and periodically asking users to rate which of two responses is better.
Conclusion
In this article, we have studied RLHF, a highly efficient and scalable technique for training modern LLMs. An elegant combination of an LLM with a reward model allows us to significantly simplify the annotation work performed by humans, which required enormous effort in the past when done through raw fine-tuning procedures.
RLHF is used at the core of many popular models like ChatGPT, Claude, Gemini, and Mistral.
Sources
All images, unless otherwise noted, are by the author.