
Sure, but that is the nature of LLM prompting. It does take some doing to set up the right guardrails. It's still a good starting point.

Also, a trick for when the LLM fights you: start from scratch, and put the guardrails in your initial prompt.

LLM prompting is a bit like gradient descent in a bumpy nonconvex landscape with lots of spurious optima and saddle points -- if you constrain it to the right locality, it does a better job at finding an acceptable local optimum.



I think this is just a case of different people wanting to work differently (and that's fine).

I can only tell it's wrong because I fully understand it -- and if I fully understand it, why not just write it myself rather than fight an LLM? If I were trying to solve something I didn't know how to do, then I wouldn't know it was wrong, or where the bug was.


That's true, except an LLM can sometimes propose a formulation that one has never thought of. In nuanced cases, there is more than one formulation that works.

For MIPs, correctness can often (not always, but usually) be checked by simply flipping the binaries and checking the inequalities. Coming up with the inequalities from scratch is not always straightforward, so LLMs often provide good starting points. Sometimes the formulation is something specific from a paper that one has never read. LLMs are a way to "mine" those answers (some sifting required).
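
As a concrete illustration of the flip-and-check idea, here is a minimal Python sketch that brute-forces a big-M indicator constraint against the logical condition it is meant to encode. The constraint ("if y == 1 then x <= CAP"), the constants M and CAP, and the probe points are all made up for the example:

    import itertools

    M, CAP = 100.0, 5.0

    def encoded(x, y):
        # The MIP inequality under test: x <= CAP + M*(1 - y)
        return x <= CAP + M * (1 - y)

    def intended(x, y):
        # The logical condition the inequality is meant to capture:
        # when y == 1, x must be at most CAP; otherwise x is free.
        return (y == 0) or (x <= CAP)

    # Flip the binary and probe x just inside/outside the cap and near M.
    for y, x in itertools.product((0, 1), (CAP - 1, CAP, CAP + 1, M - 1)):
        assert encoded(x, y) == intended(x, y), (x, y)
    print("encoding matches intent on all probed points")

For small formulations you can enumerate every binary assignment exhaustively; the point is just that each inequality can be checked mechanically once the binaries are pinned down.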

I think this is the mindset needed to get value out of LLMs -- it's not about getting perfect answers to textbook problems, but about working with an assistant to explore the space quickly, at a fraction of the effort.



