"The constraint system offered by Guidance is extremely powerful. It can ensure that the output conforms to any context free grammar (so long as the backend LLM has full support for Guidance). More on this below." --from https://github.com/guidance-ai/guidance/
I didn't find any more on that comment below. Is there a list of supported LLMs?
We have support for Hugging Face Transformers, llama.cpp, vLLM, SGLang, and TensorRT-LLM, along with some smaller providers (e.g. mistral.rs). Using any of these libraries as an inference host means you can use an OSS model with the guidance backend for full support. Most open source models will run on at least one of these backends (vLLM probably being the most popular hosted solution, and transformers/llama.cpp the most popular local solutions).
We're also the backend used by OpenAI/Azure OpenAI for structured outputs on the closed source model side.
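To make the "conforms to a grammar" claim concrete, here is a toy, hypothetical sketch of the idea (not the guidance API): a `constrained_decode` function where the constraint layer masks every token that would make the partial output an invalid prefix of the grammar. A finite set of allowed strings stands in for a full context-free grammar; real backends do the same masking over the model's whole vocabulary using the grammar's parser state.

```python
def constrained_decode(score, options):
    """Greedy decoding constrained to emit exactly one of `options`.

    `score(prefix, token) -> float` stands in for the LLM's logits.
    The constraint layer only ever offers tokens that keep the output
    a valid prefix of some allowed option, so the result is guaranteed
    grammatical no matter what the model prefers.
    """
    out = ""
    while out not in options:
        # "Tokens" are single characters here for simplicity.
        allowed = {s[len(out)] for s in options if s.startswith(s[:len(out)]) and s.startswith(out)}
        out += max(allowed, key=lambda tok: score(out, tok))
    return out

# Even a scorer that blindly prefers the highest character code can only
# produce a valid option:
print(constrained_decode(lambda p, t: ord(t),
                         ["chocolate", "vanilla", "strawberry"]))  # vanilla
```

The point of the toy: the model's preferences only pick *among* grammatical continuations; they can never step outside the grammar, which is why full backend support (access to the logits) is required.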
*Please note, I'm not in favor of censorship; it's just that this analogy is inaccurate.
Olive Garden isn't given access to something it requires to operate at the pleasure of the government. Broadcast TV on the other hand...
All of broadcast TV is allowed because the government says it is. ABC/CBS/NBC/FOX don't own the radio spectrum they are operating on, the government does and they grant the right to use it to those companies. There's a long list of things that the government requires them to do in order to keep this pleasure. One of them used to be the Saturday morning cartoons. I miss those.
I've used Waymo countless times in SF. It's typically 15% cheaper than an Uber/Lyft and trip time/wait are generally the same. I much prefer the Waymo.
This is the model that was codenamed "Sonic" in Cursor last week. It received tons of praise. Then Cursor revealed it was a model from xAI. Then everyone hated it. :/ I miss the days when we just liked technology for advancement's sake.
*edit Case in point, downvotes in less than 30 seconds
I'm pretty sure everyone knew it was xAI last week. It's a great model. I'll never pay to use it, but I like it enough while it's free.
> I miss the days where we just liked technology for advancement's sake.
I think you haven't fully thought through such statements. They lead to bad places. If Bin Laden were selling research and inference to raise money for attacks, how many tokens would you buy?
People on here keep saying they would never use a Chinese model because China is allegedly America's "largest geopolitical adversary", but happily use a model from someone actually destroying America from within...
Same question, but I'm less clear on how we devs get paid here.
Still hoping someone builds the App Store for custom GPTs where we don't have to worry about payment and user infrastructure. Happy to give up a percentage for that, but not 30 percent, guys.
It feels like what Custom GPTs should have been. Custom GPTs are barely able to do anything interesting beyond an initial prompt; there's no ability to modify the core user experience. The ability to run code and make subrequests makes this actually interesting.