Generative AI - How to Fine Tune LLMs

How to tune LLMs in Generative AI Studio. Vertex AI allows you to fine-tune PaLM models for text, chat, code, and embeddings intuitively and easily.

Sascha Heyer
10 min read · Nov 25, 2023

Prompt engineering is a fantastic method that works for many use cases, and I always recommend starting with it before thinking about fine-tuning.

Still, fine-tuning offers benefits of its own:

  • To reduce the context length and therefore use fewer tokens, which lowers overall costs. For example, few-shot examples are no longer needed in your prompt.
  • To reproduce a behavior or output that is hard to achieve with prompt engineering. This is particularly useful if you need highly consistent responses.

What are we building in this article?

Let’s face it: sarcasm and I have a complicated relationship. More often than not, I find myself scratching my head, trying to figure out if someone’s being humorous or just plain serious.

That’s why it’s finally time to fine-tune two large language models with Vertex AI Generative AI using Google's PaLM 2 model to master the art of sarcasm classification and generation.

Fine-tuning an LLM with Vertex AI Generative AI is an easy process overall. I hope you enjoy it as much as I do.
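To make the process concrete, here is a minimal sketch of the two pieces a Vertex AI supervised tuning job needs: a JSONL training file where each line holds an `input_text` / `output_text` pair, and a call to `tune_model` on a PaLM model. The sarcasm examples, file names, and step count below are illustrative assumptions, not the article's actual dataset, and the tuning function assumes the `google-cloud-aiplatform` SDK is installed and the JSONL file has been uploaded to Cloud Storage.

```python
import json

# A few illustrative sarcasm-classification examples (hypothetical data).
EXAMPLES = [
    {"input_text": "Oh great, another Monday. Just what I needed.",
     "output_text": "sarcastic"},
    {"input_text": "The meeting starts at 10 am in room B.",
     "output_text": "not sarcastic"},
]

def write_training_data(path="sarcasm_train.jsonl"):
    """Write the examples in the JSONL format the tuning job consumes:
    one {"input_text": ..., "output_text": ...} object per line."""
    with open(path, "w") as f:
        for example in EXAMPLES:
            f.write(json.dumps(example) + "\n")
    return path

def tune(training_data_uri):
    """Sketch of starting a supervised tuning job on a PaLM text model.
    Assumes the file behind training_data_uri lives in a Cloud Storage
    bucket, e.g. "gs://your-bucket/sarcasm_train.jsonl"."""
    from vertexai.language_models import TextGenerationModel

    model = TextGenerationModel.from_pretrained("text-bison@002")
    # train_steps should be adjusted to the dataset size; the region
    # choices follow the common Vertex AI tuning setup.
    model.tune_model(
        training_data=training_data_uri,
        train_steps=100,
        tuning_job_location="europe-west4",
        tuned_model_location="us-central1",
    )
    return model
```

For the chat and generation variants the same pattern applies; only the training examples (and the model you load with `from_pretrained`) change.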


Sascha Heyer

Hi, I am Sascha, Senior Machine Learning Engineer at @DoiT. Support me by becoming a Medium member 🙏 bit.ly/sascha-support