Some of you may remember GPT-3 from the summer of 2020 (now considered ages ago in AI terms) as one of OpenAI's so-called large language models (LLMs). At the time, it was one of the most advanced models out there (it has since been surpassed by a number of newer versions, most notably ChatGPT and GPT-4). While GPT-3 had existed since 2020, it was only in 2021 that OpenAI made finetuning GPT-3 available to anyone with an OpenAI account.
Finetuning, as the word implies, means adapting an existing LLM so that it more accurately reflects certain characteristics of a finetuning dataset. For example, generic LLMs face criticism because, having been built on normative, homogeneous datasets, they tend to misrepresent or be insensitive to minorities (whose data are usually underrepresented in the training data). Why, you might ask, are we not using ChatGPT? Unfortunately, the newer versions of GPT are not open to finetuning, though this may well become possible in the near future.
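To give a concrete sense of what this looks like in practice, here is a minimal sketch of how a GPT-3 finetuning job could be started with the legacy OpenAI Python client (pre-v1.0, the version current when GPT-3 finetuning launched). The filename is a placeholder, and "davinci" is just one of the base models you could pick; treat this as an illustration rather than a recipe for your exact setup.

```python
# Minimal sketch: finetuning GPT-3 via the legacy OpenAI Python client (pre-v1.0).
# Assumes a JSONL file of {"prompt": ..., "completion": ...} pairs and that
# OPENAI_API_KEY is set in the environment. Filename below is hypothetical.
import openai

# Upload the finetuning dataset to OpenAI.
upload = openai.File.create(
    file=open("finetune_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a finetuning job on one of the original GPT-3 base models.
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",  # other base options at the time: "ada", "babbage", "curie"
)

print(job.id)  # poll openai.FineTune.retrieve(job.id) to track progress
```

Once the job finishes, the resulting model gets its own identifier and can be queried like any other GPT-3 model through the completions endpoint.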
...