Hacker News | bobbylarrybobby's comments

I really like that Claude feels transactional. It answers my question quickly and concisely and then shuts up. I don't need the LLM I use to act like my best friend.

I love doing a personal side-project code review with Claude Code, because it doesn't beat around the bush with criticism.

I recently had a few models review a data processor class I wrote for a side project, one with quite horrible temporal coupling.

Gemini - rates it a 7/10, offers some small bits of feedback, etc.

Claude - A brutal dismemberment of the awful naming conventions, structure, coupling, etc., with examples of how this will mess me up in the future. It even gives a few citations for Python documentation I should re-read.

ChatGPT - You're a beautiful developer who can never do anything wrong, you're the best developer that's ever existed, and this class is the most perfect class I've ever seen.


This is exactly what got me to actually pay. I had a side project with an architecture I thought was good. Fed it into Claude and ChatGPT. ChatGPT made small suggestions but overall thought it was good. Claude shit all over it, and after validating its suggestions, I realized Claude was what I needed.

I haven't looked back. I just use Claude at home and ChatGPT at work (no Claude). ChatGPT at work is much worse than Claude in my experience.


Weirdly, partially because of this, it feels more "human" and more like a real person I'm talking to. GPT models feel fake and forced; they yap as if they're trying to become my friend, but in an off-putting way that doesn't work. Meanwhile, Claude has always had better "emotional intelligence".

Claude also seems a lot better at picking up what's going on. If you're focused on tasks, then yeah, it's going to know you want quick answers rather than detailed essays. Could be part of it.


Then why are they advertising to people who are the complete opposite of you? Why couldn't they just… ask an LLM what their target audience is?

FYI, in settings you can configure ChatGPT to do the same.

where?

Settings > Personalization > Custom Instructions.

Here's what I use:

    WE ARE PROFESSIONALS. DO NOT FLATTER ME. BE BLUNT AND FORTHRIGHT.

Quickly and concisely? In my experience, Claude drivels on and on forever. The answers are always far longer than Gemini's, which is mostly fine for coding but annoying for planning/questions.

They do understand, that's why they're doing this. This is a fundamentally anti-fact administration — when facts aren't known, you can fabricate reality for the masses, which is what they want.

For “looking at a text file with pretty print”, try CotEditor. https://coteditor.com

That's not what the prisoner’s dilemma is.

Yeah this is more like a Pascalian Gamble [1]. If you try nothing, then you are assured to die as God wanted. If you try something, then you might live, but then God hates you.

[1] https://en.wikipedia.org/wiki/Pascal%27s_wager


It is like Pascal's Wager but has nothing to do with "what God wanted" or "God hating you"... It's more "if it doesn't work the outcome is the same anyway" (eternal oblivion in Pascal's case, certain death in this case), therefore why not give it a shot in case it does work.

Warning to anyone reading this: Pascal was NOT a self-help writer.


You are basically hitting on what has been referred to as high modernism, which promotes a level of confidence in science and technology that can only be maintained by eschewing all the inherent complexity of the world. The scientific method can really only study systems by modifying a handful of variables at a time and keeping the rest fixed, and isn't really capable of handling hundreds of interacting variables. Rather than acknowledge this limitation, high modernism embraces simplification even to the detriment of its products.

Further reading: https://en.wikipedia.org/wiki/Seeing_Like_a_State


> The scientific method can really only study systems by modifying a handful of variables at a time and keeping the rest fixed

Not true. Statistical measures of large systems are a routine thing in the natural sciences. However, that's higher effort and tends to make the results harder to communicate to others, so it's avoided whenever possible.

Also high dimensional models carry a distinct risk of overfitting.
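To make the overfitting point concrete, here's a minimal sketch (my own illustration, not from the thread) using only NumPy: with more candidate variables than observations, a least-squares fit can reproduce the training data essentially exactly while generalizing poorly to fresh draws from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 samples, 50 candidate predictors -- only the first one actually matters.
n, p = 20, 50
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(scale=0.5, size=n)

# With p > n, the minimum-norm least-squares solution interpolates
# the training data: the residual is (numerically) zero.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
train_err = np.mean((X @ beta - y) ** 2)

# Fresh data from the same generating process exposes the overfit.
X_new = rng.normal(size=(n, p))
y_new = X_new[:, 0] + rng.normal(scale=0.5, size=n)
test_err = np.mean((X_new @ beta - y_new) ** 2)

print(train_err)  # near zero: the model has memorized the noise
print(test_err)   # substantially larger on unseen data
```

The fitted coefficients spread the explanatory burden across all 50 variables even though 49 of them are pure noise, which is exactly the risk being described.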


I don't care how good of a programmer you are, if you don't know Apple stuff (Swift, Xcode, all the random iOS/Mac app BS) you aren't making an Apple app in a weekend. Learning things is easy but still takes time, and proficiency is only earned by trying and failing a number of times — unless you're an LLM, in which case you're already proficient in everything.

At least this one can be disabled: Settings > Accessibility > Face ID & Attention > Attention Aware Features.

This doesn't really work for things that are already words, like it's/its. I type the one I want, iOS "corrects" it to the wrong one, and even after I tap the one I typed in the prediction bar, iOS still inserts its own suggestion again.

I've been seeing this from time to time since at least 2016. As others have noted, it's more likely to happen when you type quickly or immediately after pasting your search into the URL bar.
