Same here, had to configure ChatGPT to stop making these statements. Also had to configure a bunch of other stuff to make it bland when answering questions.
The way to make AI not sound like ChatGPT is to use Claude.
I realized that's what bothered me. It's not "oh my god, they used ChatGPT." But "oh my god, they couldn't even be bothered to use Claude."
It'll still sound like AI, but 90% of the cringe is gone.
If you're going to use AI for writing, it's just basic decency to use the one that isn't going to make your audience fly into a fit of rage every ten seconds.
That being said, I feel very self-conscious using em dashes in the current decade ;)
That's because people didn't make it a point to performatively notice them. But e.g. macOS and iOS have been auto-inserting them for a long time now. Ditto Word.
+1. My favourite bit is that when pivots are chosen randomly in quicksort, we get linearithmic expected complexity. The CLRS proof using indicator random variables was an oh-shit moment.
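For anyone who hasn't seen it, here's a minimal sketch of what that looks like (my own illustrative code, not from CLRS): the only twist over plain quicksort is drawing the pivot uniformly at random, which is what gives the O(n log n) expected comparison count on every input.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Randomized quicksort sketch: the pivot is a uniformly random element of the
// current range. The CLRS indicator-variable argument bounds the expected
// number of comparisons by O(n log n) for any input.
void quicksort(std::vector<int>& a, int lo, int hi, std::mt19937& rng) {
    if (hi - lo <= 1) return;
    std::uniform_int_distribution<int> dist(lo, hi - 1);
    int pivot = a[dist(rng)];  // random pivot value
    // Three-way partition: [< pivot][== pivot][> pivot]
    auto lt_end = std::partition(a.begin() + lo, a.begin() + hi,
                                 [&](int x) { return x < pivot; });
    auto eq_end = std::partition(lt_end, a.begin() + hi,
                                 [&](int x) { return x == pivot; });
    quicksort(a, lo, int(lt_end - a.begin()), rng);
    quicksort(a, int(eq_end - a.begin()), hi, rng);
}

int main() {
    std::vector<int> v{5, 3, 8, 1, 9, 2, 7};
    std::mt19937 rng{std::random_device{}()};
    quicksort(v, 0, int(v.size()), rng);
    for (int x : v) std::printf("%d ", x);  // 1 2 3 5 7 8 9
}
```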
Btw, you can make quicksort deterministically O(n log n) if you pick the pivot with a linear-time median selection algorithm (median of medians). It's impressive how randomness lets you pick a balanced pivot, but even more impressive that you can do the same without randomness.
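Here's a rough sketch of that deterministic selection (median of medians / BFPRT), in case it isn't familiar; this is illustrative code of my own, not a tuned implementation. Using select_kth(a, lo, hi, (hi - lo) / 2) as the pivot chooser instead of a random element gives quicksort its worst-case O(n log n) bound.

```cpp
#include <algorithm>
#include <vector>

// Deterministic selection of the k-th smallest element of a[lo, hi) in
// worst-case linear time. Sketch only.
int select_kth(std::vector<int>& a, int lo, int hi, int k);

static int median_of_medians(std::vector<int>& a, int lo, int hi) {
    int n = hi - lo;
    if (n <= 5) {
        std::sort(a.begin() + lo, a.begin() + hi);
        return a[lo + n / 2];
    }
    // Sort each group of 5 and collect its median into a[lo .. lo+num_medians).
    int num_medians = 0;
    for (int i = lo; i < hi; i += 5) {
        int end = std::min(i + 5, hi);
        std::sort(a.begin() + i, a.begin() + end);
        std::swap(a[lo + num_medians], a[i + (end - i) / 2]);
        ++num_medians;
    }
    // The median of those medians is guaranteed to split the range roughly 30/70 or better.
    return select_kth(a, lo, lo + num_medians, num_medians / 2);
}

int select_kth(std::vector<int>& a, int lo, int hi, int k) {
    while (hi - lo > 1) {
        int pivot = median_of_medians(a, lo, hi);
        auto lt_end = std::partition(a.begin() + lo, a.begin() + hi,
                                     [&](int x) { return x < pivot; });
        auto eq_end = std::partition(lt_end, a.begin() + hi,
                                     [&](int x) { return x == pivot; });
        int lt = int(lt_end - (a.begin() + lo));  // elements strictly below the pivot
        int eq = int(eq_end - lt_end);            // elements equal to the pivot
        if (k < lt)           hi = lo + lt;
        else if (k < lt + eq) return pivot;
        else { k -= lt + eq; lo += lt + eq; }
    }
    return a[lo];
}
```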
It's weird how object detection models are "AI" now. These models and their weird errors have been around for quite a while. The issue is vendors claiming that there is no chance of errors. Ideally you would have a second set of eyes: if the AI has a tolerable false-positive rate, have a human review its detections. But of course, you can't fire people with AI that way, so why would we do the sane thing.
And of course there are policy wonks who would make gun ownership a human rights issue, even though it's fundamentally unsafe to have such free gun ownership.
AI has also been around for quite a while. LLMs are hardly the first instance of AI we've seen, just the one that's suddenly getting all the hype. But yes, people trust it too much.
According to the article, they did have a human verify the images before sending the alert. Apparently they and the school still think they made the right call.
Definitely one of the As in FAANG. In fact, both the As have terrible recruiting practices. One is a known ghoster, and the fact that you were ghosted after a senior-level meeting tells me which one.
The author mentions simplicity in their README. I would be very interested to read about their journey and some of the decisions they made where they preferred simplicity. More of this, please!
I've been thinking of doing a series of blog posts on the journey, but... it's been a journey, which is a lot to write about in full. In short, a few places where I've been able to prefer simplicity:
1. Allocators are all pretty much as simple as you can get. Most memory in the program is bump/arena allocated (see the first sketch after this list). There is a buddy-style heap allocator for things that are annoying to arena-allocate (strings that can be edited, for example). I make heavy use of temp memory and freelists.
2. Containers are all very straightforward, and that's definitely a feature. The example I always give here is std::map from C++. On paper, it looks great; it has very good-looking properties. In practice, the implementation is a nightmare: it's slow, and it has a comically large rebalancing-shaped performance cliff. My containers strive to be simple, with reasonable average and worst-case performance.
3. I wrote my own metaprogramming language instead of using C++ template metaprogramming. Writing an entire programming language sounds like the antithesis of simplicity, but in reality, having a good metaprogramming layer makes your life immeasurably easier in the long run. With strong metaprogramming capabilities, stuff like realtime debug UI and state serialization becomes nearly trivial (see the second sketch after this list). Once you start doing versioned data serialization in C++, you quickly realize you need a better compiler (see: protobuf, Cap'n Proto).
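To make item 1 concrete, here is a minimal sketch of a bump/arena allocator with temp-memory marks. It assumes nothing about the author's actual API; every name below is made up for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Bump/arena allocator sketch: all allocations come out of one block by
// bumping an offset forward; freeing is resetting that offset.
struct Arena {
    uint8_t* base;
    size_t   capacity;
    size_t   offset;
};

Arena arena_create(size_t capacity) {
    return Arena{ static_cast<uint8_t*>(std::malloc(capacity)), capacity, 0 };
}

// align must be a power of two.
void* arena_push(Arena* a, size_t size, size_t align = 8) {
    size_t aligned = (a->offset + align - 1) & ~(align - 1);  // round up to alignment
    if (aligned + size > a->capacity) return nullptr;         // out of space
    a->offset = aligned + size;
    return a->base + aligned;
}

// "Temp memory": remember the offset, do scratch work, then roll back.
size_t arena_temp_begin(Arena* a)            { return a->offset; }
void   arena_temp_end(Arena* a, size_t mark) { a->offset = mark; }

void arena_reset(Arena* a)   { a->offset = 0; }
void arena_destroy(Arena* a) { std::free(a->base); a->base = nullptr; }
```

The appeal is that an allocation is a pointer bump and "freeing" an entire phase of work is a single offset reset, which is what makes temp memory and per-frame scratch allocations so cheap.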
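And a rough illustration of item 3's point about debug UI and serialization: once something knows a struct's field list (a code generator, or here just an X-macro in plain C++), the code that walks the fields writes itself. This is only a stand-in for the idea; the author's custom metaprogramming language presumably generates something far more capable.

```cpp
#include <cstdio>

// Describe the fields once; generate both the struct and the field-walking code.
#define PLAYER_FIELDS(X) \
    X(int,   health)     \
    X(float, x)          \
    X(float, y)

struct Player {
#define DECLARE_FIELD(type, name) type name;
    PLAYER_FIELDS(DECLARE_FIELD)
#undef DECLARE_FIELD
};

// Generated-style "reflection": print every field without listing them by hand.
void debug_print(const Player& p) {
#define PRINT_FIELD(type, name) std::printf(#name " = %g\n", static_cast<double>(p.name));
    PLAYER_FIELDS(PRINT_FIELD)
#undef PRINT_FIELD
}

int main() {
    Player p{100, 1.5f, 2.0f};
    debug_print(p);  // health = 100, x = 1.5, y = 2
}
```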
Vibe coding doesn't mean the author doesn't understand their code. It's likely that they just don't want carpal tunnel from typing out trivial code, and so they offload that labor to a machine.
I use it for Java. I have never used anything else and never had any performance issues on my 16GB MacBook Air.
If it's WebStorm, maybe it's because of the automatic refresh capability? I've had perf issues with VS Code as well with autobuild enabled; autocomplete would grind to a halt.
It must be something to do with the TypeScript engine. I can also run IntelliJ fine with a huge Java project, but it's TypeScript projects that grind it to a halt. It's unusable on my work PC, and the performance is still poor even on my home PC. It's been a steady decline since 2023.