> If your tools are not updated, that isn't the fault of C++.
It kinda is. The C++ committee has been getting into a bad habit of dumping lots of not-entirely-working features into the standard and ignoring implementer feedback along the way. See https://wg21.link/p3962r0 for the incipient implementer revolt going on.
Even some much simpler things are extremely half baked. For example, here’s one I encountered recently:
alignas(16) char buf[128];
What type is buf? What alignment does that type have? What alignment does buf have? Does the standard even say that alignof(buf) is a valid expression? The answers barely make sense.
Given that this is the recommended replacement for aligned_storage, it’s kind of embarrassing that it works so poorly. My solution is to wrap it in a struct so that at least one aligned type is involved and so that static_assert can query it.
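A sketch of that workaround (the struct and names are my own, not from any standard library): wrapping the buffer in a struct gives it a named type, so the alignment is carried by the type and can be queried with alignof and checked with static_assert.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical wrapper: put the alignas on a named type instead of a
// bare array, so there is an aligned *type* to reason about.
struct alignas(16) AlignedBuf {
    char data[128];
};

// Now these questions have clear answers, checkable at compile time.
static_assert(alignof(AlignedBuf) == 16, "type carries the alignment");
static_assert(sizeof(AlignedBuf) == 128, "no padding was introduced");
```

Every `AlignedBuf` object is then 16-byte aligned by construction, which is what the bare `alignas(16) char buf[128];` form makes so awkward to verify.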
It's happening again with contracts. Implementers are raising implementability objections that are being completely ignored. Senders and receivers are claimed to work great on a GPU, but without significant testing (there's only one very basic CUDA implementation), and even a basic examination shows that they won't work well.
So many features are landing that feel increasingly DOA; we seriously need a language fork.
Brand value is definitely a moat. Not the deepest of moats, but it is a moat nonetheless.
> It's pure logic that Tesla has to pursue bets that would justify billion dollar valuations and being a car company isn't that.
Tesla is valued as if it is a tech company with a car business as a side gig. Its balance sheet is a car business, and I'm not even sure it spends enough on tech to have tech qualify as a side gig. And the other tech avenues it has been pursuing (autonomous vehicles, humanoid robots) are areas that other people have been doing for better and longer. Hell, Honda had autonomous (not tele-operated) humanoid robots working 20 years ago.
To be honest, at this point I mostly consider the other bets Tesla is pursuing to be passion projects that keep the stock price artificially high. Were Tesla more realistically valued, it would probably lose 90% or more of its value, and Musk would be a much poorer man.
The basic rule of writing your own cross-thread data structures like mutexes or condition variables is... don't, unless you have a very good reason. If you're in that rare circumstance where you know the library you're using isn't viable for some reason, then the next best rule is to use your OS's version of a futex as the atomic primitive, since it will solve most of the pitfalls for you automatically.
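For illustration, here is a minimal sketch of what a futex-backed lock can look like on Linux, following the classic three-state scheme from Ulrich Drepper's "Futexes Are Tricky" (0 = unlocked, 1 = locked, 2 = locked with waiters). The class and names are my own; a production mutex adds a spin phase, timeouts, and robustness handling on top of this.

```cpp
#include <atomic>
#include <cassert>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <thread>
#include <unistd.h>

// Sketch only: Linux-specific, relies on std::atomic<int> being
// layout-compatible with int (true on mainstream implementations).
class FutexLock {
    std::atomic<int> state{0};  // 0 free, 1 locked, 2 locked+waiters

    static long futex(std::atomic<int>* addr, int op, int val) {
        return syscall(SYS_futex, addr, op, val, nullptr, nullptr, 0);
    }

public:
    void lock() {
        int expected = 0;
        // Fast path: uncontended acquire stays entirely in user space.
        if (state.compare_exchange_strong(expected, 1)) return;
        // Slow path: mark the lock contended and sleep in the kernel
        // wait queue; FUTEX_WAIT re-checks the value atomically, which
        // is what prevents the lost-wakeup race.
        do {
            if (expected == 2 ||
                state.compare_exchange_strong(expected, 2))
                futex(&state, FUTEX_WAIT_PRIVATE, 2);
            expected = 0;
        } while (!state.compare_exchange_strong(expected, 2));
    }

    void unlock() {
        // Only enter the kernel if someone may actually be asleep.
        if (state.exchange(0) == 2)
            futex(&state, FUTEX_WAKE_PRIVATE, 1);
    }
};
```

The point of the pattern: the uncontended path is a single compare-and-swap, and the kernel is only involved when there is real contention.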
The only time I've manually written my own spin lock was when I had to coordinate between two different threads, one of which was running 16-bit code, so using any library was out of the question, and even relying on syscalls was sketchy because making sure the 16-bit code is in the right state to call a syscall itself is tricky. Although in this case, since I didn't need to care about things like fairness (only two threads are involved), the spinlock core ended up being simple:
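The original snippet isn't reproduced here, but a two-party spinlock core of that kind might look something like the following in modern C++ (the actual 16-bit version would presumably have been a LOCK-prefixed XCHG loop rather than std::atomic):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// A sketch, not the commenter's actual code: a single atomic flag,
// no fairness, no kernel fallback -- tolerable only because exactly
// two threads are involved and each holds the lock briefly.
class TinySpinLock {
    std::atomic_flag locked = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // test_and_set returns the previous value, so we spin until
        // we are the one who flipped it from clear to set.
        while (locked.test_and_set(std::memory_order_acquire)) {
            // busy-wait; on x86 a PAUSE hint would typically go here
        }
    }
    void unlock() { locked.clear(std::memory_order_release); }
};
```

With only two participants, unfairness can't cause starvation as long as each side eventually releases the lock, which is why the core can stay this small.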
As always: use standard libraries first, profile, then write your own if the data indicate that it's necessary. To your point, the standard library probably already uses the OS primitives under the hood, which themselves do a short userspace spin-wait and then fall back to a kernel wait queue on contention. If low latency is a priority, the latter might be unacceptable.
There's an interesting talk where the author used a custom spinlock to significantly speed up a real-time physics solver.
> which themselves do a short userspace spin-wait and then fall back to a kernel wait queue on contention.
Yes, but sadly not all implementations do. The point remains: prefer OS primitives when you can, profile first, and reduce contention. Only then, if you really know what you're doing, on a system you mostly know and control, might you start rolling your own. And if you do, the fallback under contention must still be the OS primitive.
Another time when writing a quick and dirty spinlock is reasonable is inside a logging library. A logging library would normally use a full-featured mutex, but what if we want the mutex implementation itself to be able to log? Say the mutex can log that it is non-recursive yet the same thread is acquiring it twice, or that it has detected a deadlock. The solution is to introduce a special subset of the logging library that uses a spinlock.
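A sketch of that layering (all names hypothetical): a tiny spinlock guards a "raw" logging path, so the full-featured mutex wrapper can emit diagnostics without recursing into the ordinary, mutex-protected logger.

```cpp
#include <atomic>
#include <cassert>
#include <cstdio>
#include <mutex>
#include <thread>

// Minimal spinlock for the raw logging path only.
class SpinLock {
    std::atomic_flag f = ATOMIC_FLAG_INIT;
public:
    void lock()   { while (f.test_and_set(std::memory_order_acquire)) {} }
    void unlock() { f.clear(std::memory_order_release); }
};

SpinLock raw_log_lock;

// Safe to call from inside the mutex implementation itself, because it
// never touches the full-featured mutex.
void raw_log(const char* msg) {
    raw_log_lock.lock();
    std::fputs(msg, stderr);
    raw_log_lock.unlock();
}

// A diagnosing mutex that can report double-acquisition by one thread.
class DiagnosticMutex {
    std::mutex m;
    std::atomic<std::thread::id> owner{};
public:
    void lock() {
        if (owner.load() == std::this_thread::get_id())
            raw_log("mutex: non-recursive lock taken twice by one thread\n");
        m.lock();
        owner.store(std::this_thread::get_id());
    }
    void unlock() {
        owner.store(std::thread::id{});
        m.unlock();
    }
};
```

The key design point is the one-way dependency: the diagnostic mutex may call the raw logger, but the raw logger never calls back into any mutex.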
I wrote my own spin lock library over a decade ago in order to learn about multi threading, concurrency, and how all this stuff works. I learned a lot!
Another somewhat known case of a spinlock is in trading, where for latency purposes the OS scheduler is essentially bypassed by core isolation and thread pinning, so there’s nothing better for the CPU to do than spinning.
This is the primary use case for spinlocks, which is why the vast majority of developers shouldn't use them. When you use a spinlock, you're dedicating an entire CPU core to the thread; otherwise it doesn't work, in either correctness or performance.
If you want scheduling, then the scheduler needs to be aware of task dependencies and you must accept that your task will be interrupted.
When a lock is acquired on resource A by the first thread, the second thread that tries to acquire A will have a dependency on the release of A, meaning that it can only be scheduled after the first thread has left the critical section. With a spinlock, the scheduler is not informed of the dependency and thinks that the spinlock is performing real work, which is why it will reschedule waiting threads even if resource A has not been released yet.
If you do thread pinning and ensure there are fewer threads than CPU cores, but other threads can still be scheduled on those cores, it might still work, but the latency benefits are most likely gone.
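For reference, the pinning half of that setup might look like this on Linux (the function name is my own; core isolation itself is kernel boot configuration, e.g. isolcpus, and is outside this snippet):

```cpp
#include <cassert>
#include <pthread.h>
#include <sched.h>

// Sketch: bind the calling thread to a single core so the spinning
// thread owns it. Linux-specific (pthread_setaffinity_np).
bool pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(),
                                  sizeof(set), &set) == 0;
}
```

Without the companion isolation step, the kernel is still free to schedule other runnable threads onto that core, which is exactly the situation the parent comment warns about.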
The first casualty of LLMs was the slush pile--the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because each submission passes the first (but only the first) absolute-garbage filter.
I run a small print-on-demand platform and this is exactly what we're seeing. The submissions used to be easy to filter with basic heuristics or cheap classifiers, but now the grammar and structure are technically perfect. The problem is that running a stronger model to detect the semantic drift or hallucinations costs more than the potential margin on the book. We're pretty much back to manual review which destroys the unit economics.
Your comments on trust reminded me of an analysis of the strongly connected component of the PGP "web of trust," back when keyservers were a bigger thing, which essentially found that, in practice, the web of trust had a lot of key nodes that look very much like the CAs in Web PKI.
That is, for as much as a lot of crypto enthusiasts want to talk about decentralizing trust and empowering users to make fine-grained trust decisions, in practice most users really just want to offload all of the burden of ensuring someone is trustworthy onto somebody else.
Yeah, very much in the same vein. Someone should probably produce a pithy phrase for “if you think you have a web of trust, you probably have an informal, underspecified central authority.”
I really shouldn't be so gobsmacked by people's ignorance of history, but it is astounding to me the number of replies here that seem to believe that the press really was well-behaved in this time period. When learning about the Spanish-American War, pretty much the most important bullet point covered in history class is the role of the press in essentially inventing the cause of the war, as exemplified by the infamous quote from a newspaper baron: "You furnish the pictures and I'll furnish the war."
The general term to look up is "yellow journalism."
There is utility in having a reserved set of opcode space designated "NOP if you don't know what the semantics are, but later ISAs may attach semantics to it," because this allows you to add various instructions that merely do nothing on processors that don't support them. The ENDBR32/ENDBR64 instructions for CET, the XACQUIRE/XRELEASE hints for LOCK, the MPX instructions, and the PREFETCH instructions all use reserved NOP space (the 0F0D and 0F18-0F1F opcode space).
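That reserved space can be poked at directly (x86-64 only, via inline assembly; the function here is purely illustrative): the 0F 1F /0 encoding is Intel's documented multi-byte NOP, which executes as a no-op even on CPUs that predate every extension later carved out of nearby opcodes.

```cpp
#include <cassert>

// Illustrative only: emit the three-byte form of the reserved NOP,
// 0F 1F 00 ("nopl (%rax)"). It performs no memory access and has no
// architectural effect; surrounding code runs as if it weren't there.
int add_one(int x) {
    asm volatile(".byte 0x0f, 0x1f, 0x00");
    return x + 1;
}
```

This is the same mechanism compilers use for padding, and the same opcode neighborhood that later extensions such as the PREFETCH family were assigned from.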
This is true, but the encoding space reserved for future extensions that is interpreted as NOP should be many times smaller than the space for encodings that generate the invalid instruction exception.
The reason is that there are very few useful instructions that are only performance hints or checks for exceptional conditions, i.e., instructions that can safely be ignored without bad consequences.
For the vast majority of instruction set extensions, not executing the new instructions completely changes the behavior of the program, which is not acceptable, so the execution of such programs must be prevented on older CPUs.
Regarding the order of prefixes, Intel made mistakes in not specifying it initially in the 8086 and in allowing redundant prefixes. The latter has been partially corrected in later CPUs by imposing a limit on instruction length.
Because of this lack of specification, the various compilers and assemblers generated whatever instruction formats an 8088 would accept, so it became impossible to tighten the specification.
However, what is really weird is that Intel and AMD have continued to accept incorrect instruction encodings even after later ISA extensions clearly specified that only a certain encoding is valid; in reality the CPUs also accept other encodings, and now there are programs that depend on those alternative encodings that were supposed to be invalid.
The prefix structure has been enforced starting with the VEX prefixes (which is a lot later than it should have been; AMD made a mistake in not enforcing more rules around the REX prefix). The legacy prefixes are of course an unfixable mess, precisely because they are legacy.
You do have some influence over where you get posted, but that is mediated by several factors, most importantly how desirable a posting is.
While I'm not certain how desirable an Arctic posting is, I do know that Antarctic postings are heavily oversubscribed (more demand than spots available), and I rather suspect that the Arctic stuff is in the same boat. It's not like there's a huge number of spots that need to be filled, so even if getting an "Arctic soldier" tab appeals to only like 1% of soldiers, well, that's enough to fill all the slots.
From what I understand of the advertising market, companies like Google and Facebook make bucketloads of money from ads primarily because they own so much of the vertically integrated ad market. Meanwhile, the way OpenAI appears to be integrating ads suggests they're positioned to take only the smallest slice of the pie--a place to host ads--which means their revenue per user is, I'd estimate, a lot closer to a newspaper website's than to the biggest social media sites', or maybe along the lines of Twitter or Tumblr, which never posted spectacular profits.