Hacker News | beaker52's comments

Sleep easy, fellow earthling, there’s a new AI in town now.

> because of antipatterns that don’t apply anymore, such as always starting a new chat

I’m keen to understand your reasoning on this. I don’t agree, but maybe I’m just stuck with old practices, so help me?

What’s your justification as to why starting a new chat is an antipattern?


It used to be that the bots had a short context window and tended to get confused by past context, so it was much better to start a new chat every now and then to keep the thread on track.

The opposite is true now. The context windows are enormous, and the bots are able to stay on task extremely well. They're able to utilize any previous context you've provided as part of the conversation for the new task, which improves their performance.

The new pattern I'm using is a single master chat that I only ever replace when I'm doing something entirely different.


That’s cool. I know context windows are arbitrarily larger now because consumers think a larger window means a better model, but I think the criticism still stands: can the model actually use that window effectively?

I still find LLMs perform best with a potent, focussed context to work with, and that performance degrades quite significantly as the context grows.

What’s your experience been?


I worked at a startup experimenting with gemini-2.0-flash (the year-old model), using its full 1M context window to query technical documents. We found it extremely successful at needle-in-a-haystack type problems.
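
A rough sketch of that kind of long-context query, assuming the google-generativeai Python SDK (the file name and question are placeholders, not our actual system):

    import google.generativeai as genai

    genai.configure(api_key="...")  # your API key

    # Read a large technical document straight into the prompt; the 1M-token
    # window of gemini-2.0-flash comfortably holds hundreds of pages of text.
    with open("technical_manual.txt") as f:  # placeholder document
        document = f.read()

    model = genai.GenerativeModel("gemini-2.0-flash")
    response = model.generate_content(
        [document, "Which firmware version introduced the watchdog reset?"]
    )
    print(response.text)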

As we migrated to newer models (gemini-3.0 and the o4-mini models), we again found they performed even better with x00k tokens. Our system prompt grew to about 20k tokens, and the bots handled it perfectly. Our issue became time to first token with large context, rather than bot quality.

The ultra-large 1M+ Llama models were reported to be ineffective at >1M context, but at that point they become cost-prohibitive to use anyway.

I am continuing to have success using Cursor's Auto model and GPT-5.1 with extremely long conversations. I use different chats for different problems more for my own compartmentalisation of thoughts than as a necessity for the bot.


(Breaking the 4th wall for a minute):

It’s not just Simon that we’re getting less of, it’s YOU we’re getting less of too. And we want you around. Don’t go.


Only it’s a bit like me getting back into cooking because I described the dish I want to a trainee cook.

Depends on how you're using the LLMs. It can also be like having someone else around to chop the onions, wash the pans and find the ingredients when you need them.

The head chefs at most restaurants delegate most of the details of a dish to their kitchen staff, then critique and refine.

This approach seems to have worked out for both Warhol and Chihuly.

As long as you get the dish you want, a dish you couldn’t have before, who cares?

Sure, as long as you don’t expect me to digest it, live with it, and crap it out for you, I see no problem with it.

My expectations don’t change whether or not I’m using AI, and neither do my standards.

Whether or not you use my software is up to you.


So you're saying that if you go to any famous restaurant and the famous face of the restaurant isn't personally preparing your dinner with their hands and singular attention, you are disappointed.

Got it.


Are you even cooking if you did not collect your own ingredients and forge your own tools??

Isn't that still considered cooking? If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did cook it.

Work harder!

Now I’m a life coach because I’m responsible for your promotion.


Ok, maybe my analogy wasn't the best. But the point I was trying to make is that using AI tools to write code doesn't mean you didn't write the code.

Very apt analogy. I'm still waiting for my paycheck.

> If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did "cook" it.

The person who actually cooked it cooked it. Being the "catalyst" doesn't make you the creator, nor does it mean you get to claim that you did the work.

Otherwise you could say you "cooked a meal" every time you went to McDonald's.


Why is the head chef called the head chef, then? He doesn’t “cook”.

To differentiate him from the "cook", which is what we call those who carry out the actual act of cooking.

Well, don’t go around calling me a compiler!

If that's what you do, then the name is perfectly apt. Why shy away from what you are?

The difference is that the head chef can cook very well and could do a better job of the dish than the trainee.

"head chef" is a managerial position but yes often they can and do cook.

I would argue that you technically did not cook it yourself; you are, however, responsible for having it cooked. You directed the cooking.

Flipping toggle switches went out of fashion many, many, many years ago. We've been describing the dish we want to trainees (compilers) for longer than most on HN have been alive.

Actually, we’ve been formally declaring the logic of programs to compilers, which is something very different.

(Replying to myself because of HN's reply limits)

That’s not the only difference at all. A good use of an LLM might be to ask it what the difference between using an LLM and writing code for a compiler is.


Equally a good use for a legacy compiler that compiles a legacy language. Granted, you are going to have to write a lot more boilerplate to see it function (that being the difference, after all), but the outcome will be the same either way. It's all just 1s and 0s at the end of the day.

Sorry friend, if you can’t identify the important differences between a compiler and an LLM, either intentionally or unintentionally (I can’t tell), then I must question the value of whatever you have to say on the topic.

The important difference is the reduction in boilerplate, which allows programs to be written with (often) significantly less code. Hence the time savings (and fun) spoken of in the original article.

This isn't really a new phenomenon. Languages have been adding things like arrays and maps as builtins to reduce the boilerplate required around them. The modern languages of which we speak take that same idea to a whole new level, but such is the nature of evolution.
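
To make the boilerplate point concrete, here's a toy Python comparison (the word-counting task is invented for illustration):

    from collections import Counter

    words = "the cat sat on the mat".split()

    # With a builtin map type, the whole job is one line:
    counts = Counter(words)

    # Without builtins, you carry the bookkeeping yourself:
    manual = {}
    for w in words:
        manual[w] = manual.get(w, 0) + 1

    assert counts == manual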


No, when we write code it has an absolute and specific meaning to the compiler. When we write words to an LLM, they are written in a non-specific, informal language (usually English) and processed non-deterministically too. This is an incredibly important distinction that makes coding and asking an LLM to code two completely different ball games. One is formal, one is not.

And yes, this isn’t a new phenomenon.
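
A toy sketch of that distinction in Python (the expression-parser stand-in and the canned completions are invented for illustration):

    import random

    def compile_expr(src: str) -> int:
        # Formal language: "2+3" has exactly one meaning under the grammar,
        # so evaluation is deterministic; same input, same result, every time.
        return eval(src, {"__builtins__": {}})  # stands in for a real compiler

    def fake_llm(prompt: str) -> str:
        # Informal language plus sampling: the same prompt can yield different
        # outputs (real LLMs sample from a distribution at temperature > 0).
        return random.choice(["return a + b", "return sum((a, b))"])

    assert compile_expr("2+3") == compile_expr("2+3")  # always identical
    print(fake_llm("add numbers"), fake_llm("add numbers"))  # may differ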


It's different in some ways (such is evolution), but is not a distinction that matters. Kind of like the difference between imperative and declarative programming. Different language models, but all the same at the end of the day.

I hope you are joking.

The only other difference mentioned is in implementation, but concepts are not defined by implementation. Obviously you could build a C compiler with neural nets. That wouldn't somehow magically turn everything into something completely different just because someone used a 'novel' approach inside the black box.

The only difference is that newer languages have figured out how to remove a lot of the boilerplate.

Location: London, UK

Remote: Yes*

Willing to relocate: No

Technologies: [Recently] Go, TypeScript, AWS

Résumé/CV: https://www.dropbox.com/scl/fi/tiw5f2bcxp66kzanvvzqi/lukebar... (pw: hackernews)

Email: first @ firstlast co uk

I’m Luke. I’m a high-leverage, big-picture, generalist software engineer with 18 years of experience, looking for Staff/Principal-level roles with broad responsibility, where building and improving the engineering org through processes, practices, coaching, and strategic initiatives is a primary responsibility. I can work on any stack, on any kind of problem. The more dimensions the problem has, the more value I’ll bring. My last name is Barton.

I’m open to working with small companies looking to scale (or build something that will scale) and larger companies looking to cultivate a stronger engineering practice.

*I’m open to remote, but I’m a believer that our best work is often done working together in three dimensions, so I’ll be very keen to understand how you’re solving the challenges of working together remotely.


To me it sounds like working with a publisher squeezed every drop of fun out of the project for the author, and that freeing up the project could re-inject some personal excitement, motivation, and intention.


Maybe, but it sure reads like he was dragging his feet long before that.

Considering all the confusion and questions in this comment section, maybe he should have been more open to an editor.


You might be surprised at how much you’re willing to surrender if someone gave you some time to come to terms with it.

It’s just a question of giving you enough time to move from anger/shock/fear toward acceptance. It’s like magic, and it’s used all the time.

> Nah, that doesn’t work when…

Sounds like it could be another well-known stage of the process: denial. Denial is when you tell yourself that something isn’t possible, which makes you feel safer, when in fact you’re already moving toward acceptance: acceptance that you’re going to leave, or pay the price.


It feels like someone trying to kick the starter on a bike, but it won’t start.


One could argue that staying in one place unchanged, in a space barred with thin desires, is akin to being imprisoned. And that following newly cultivated thick desires out of one’s thin prison sounds just like liberation to me.


5+ hours. It's amusing to reflect on all the "leaders" I've seen jumping on people's heads because a single feature of some unknown product was unavailable for 30 minutes.


The outrage over, for example, https://github.com/pypa/setuptools/issues/4910 was far more swift.

