> Let's look at the data: 72% of enterprises are now fine-tuning models rather than just using RAG (22%) or building custom models from scratch (6%). This isn't a trend, it's because fine-tuning works when other approaches fail.
Where did that data come from? My mental model is still that most companies find fine-tuning an LLM isn't worth the effort compared to prompting with better chosen examples or setting up effective RAG. Am I out of date?
On reading further: it looks like this series of posts is specifically about building voice assistants that run on a mobile phone, which need TINY models. From what I understand getting tiny models to perform interesting custom tasks is a challenge that fine-tuning is well suited for.
They surveyed Fortune 500 types for it: the numbers above come from a survey of 70 "AI decision makers", answering the question "How are enterprises customizing their models?"