I’m familiar with local models. They’re fine for chatting about unimportant things.
They don’t compare to the giant models like Claude Sonnet and GPT-4 when you try to use them for complex tasks.
I continue to use both local models and the commercial cloud offerings, but anyone who suggests the small local models are on par with the big closed hosted models right now is engaging in wishful thinking.