Also, "why these 5 in particular" is definitely not obvious -- there are a great many possible "obvious in some sense but also true in an important way" epigrams to choose from (the Perlis link from another comment has over a hundred). That Pike picked these 5 to emphasise tells you something about his view of programming, and doubly so given that they are rather overlapping in what they're talking about.
If the team is that small and working on things that are that disparate, then it is also very vulnerable to one of those people leaving, at which point there's a whole part of the project that nobody on the team has a good understanding of.
Having somebody else devote enough time to being up to speed enough to do code review on an area is also an investment in resilience so the team isn't suddenly in huge difficulty if the lone expert in that area leaves. It's still a problem, but at least you have one other person who's been looking at the code and talking about it with the now-departed expert, instead of nobody.
A fairly large category of the flaky CI jobs I see is "dodgy infrastructure". For instance one recurring type for our project is one I just saw fail this afternoon, where a gitlab CI runner tries to clone the git repo from gitlab itself and gets an HTTP 502 error. We've also had issues with "the s390 VM that runs CI jobs is on an overloaded host, so mostly it's fine but occasionally the VM gets starved of CPU and some of the tests time out".
We do also have some genuinely flaky tests, but it's pretty tempting to hit the big "just retry" button when there's all this flakiness we can't control mixed in there.
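For what it's worth, GitLab CI does let you scope automatic retries to infrastructure-class failures only, so genuinely flaky tests still fail loudly instead of being papered over by the big "just retry" button. A sketch (the job name and script are made up, but `retry:when` and these failure reasons are real GitLab CI keywords):

```yaml
# .gitlab-ci.yml fragment: retry only on infrastructure failures,
# never on ordinary test failures (job name is just an example)
test-s390:
  script:
    - make check
  retry:
    max: 2
    when:
      - runner_system_failure      # runner/VM trouble, e.g. a starved host
      - stuck_or_timeout_failure   # job hung or hit the system timeout
      - api_failure                # GitLab API errors during the job
```

With this, a plain test failure (`script_failure`) still fails the pipeline on the first attempt, so the genuinely flaky tests stay visible.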
If you're going to set a firm "no AI" policy, then my inclination would be to treat that kind of PR in the same way the US legal system does evidence obtained illegally: you say "sorry, no, we told you the rules and so you've wasted effort -- we will not take this even if it is good and perhaps the only sensible implementation". Perhaps somebody else will eventually re-implement it later without looking at the AI PR.
How funny would it be if the path to actually implementing that thing is then cut off because a PR with the exact same patch was already submitted. I'm honestly sitting here grinning at the absurdity. Some things can only be done a certain way, especially when you're working with 3rd party libraries and APIs. The name of the function is the name of the function. There's no way around it.
It follows the same reasoning as when someone purposefully copies code from one codebase into another whose license doesn't allow it.
Yes it might be the only viable solution, and most likely no one will ever know you copied it, but if you get found out most maintainers will not merge your PR.
That's why I said "somebody else, without looking at it". Clean-room reimplementation, if you like. The functionality is not forever unimplementable, it is only not implementable by merging this AI-generated PR.
It's similar to how I can't implement a feature by copying-and-pasting the obvious code from some commercially licensed project. But somebody else could write basically the same thing independently without knowing about the proprietary-license code, and that would be fine.
You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.
Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.
Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.
1) Most people still don't use TDD, which absolutely solves much of this.
2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.
3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.
4) Most people ask it to do too much and then get disappointed when it screws up.
Perfect example:
> you can't trust anything they put out
Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it fail to correctly pass a test that I've vetted (at least in the past 2 months). Learn how to be a good dev first.
> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.
This is a non-argument. All of the cloud LLMs are going to move to things like micronuclear, and the scientific advances AI might enable may also help avoid downstream problems from the carbon footprint.
Even if there isn't any 3rd party code, the whole process of going through the codebase to confirm there really isn't any 3rd party code, and generally getting the legal department to sign off on it, is a lot of work in itself. My impression is that this kind of "historic source" release typically only happens if somebody sufficiently senior in the company cares enough to actively push it through. The default is that nobody does care that much, and it doesn't happen.
"Do nothing" has essentially zero downside for a big company that happens to have something of niche interest like this in its vaults.
Third-party code is one thing; political correctness is another. What was acceptable in 90s brogrammer culture may not be considered acceptable by PR-obsessed corporate types now.
To put this more charitably, the only reason to release something like this is to get some good PR, but if not carefully controlled, such a release could create more bad PR than good PR.
I don't recall which product it was (it may have been Microsoft) that needed to sanitize its code before releasing it. There were a lot of not-so-nice comments about other companies, and oh so much swearing. Not really the type of language a company would want its name attached to.
I think Coccinelle is a really cool tool, but I find its documentation totally incomprehensible for some reason. I've read through it multiple times, but I always end up having to find some preexisting script that does what I want, or else to blunder around trying different variations at random until something works, which is frustrating.
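For anyone who hasn't seen it, the kind of preexisting script I end up cribbing from usually looks like this. This is roughly the classic example from the Coccinelle documentation (collapsing a kmalloc-then-memset pair into kzalloc); I'm reproducing it from memory as a sketch, so treat the exact metavariable declarations as approximate:

```
// Semantic patch (SmPL): replace kmalloc + memset(0) with kzalloc.
// 'expression' metavariables match any C expression at that position;
// '...' matches arbitrary intervening code.
@@
expression x, E1, E2;
@@
- x = kmalloc(E1, E2);
+ x = kzalloc(E1, E2);
  ...
- memset(x, 0, E1);
```

Even this small example shows why the docs are hard going: the `@@` metavariable blocks, the diff-like `-`/`+` lines, and the `...` operator all have their own semantics that interact in non-obvious ways.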
> The specification describes bits as combinations of 0, 1, and x, but also sometimes includes (0) and (1). I’m not sure what the parenthesized versions mean
The answer is that the (0) and (1) are should-be-zero and should-be-one bits: if you set them wrongly then you get CONSTRAINED UNPREDICTABLE behaviour where the CPU might UNDEF, NOP, ignore that you set the bit wrongly, or set the destination register to garbage. In contrast, plain 0 and 1 are bits that have to be that way to decode to this instruction, and if you set them to something else then the decode will take you to some other instruction (or to UNDEF) instead.
This is an important ISA feature -- an instruction encoding that is wasteful of its encoding space is one that has no room for future new instructions (or which has to encode the new instructions in complicated ways to fit in whatever tiny "holes" are left in the encoding space).
The old 32-bit Arm encoding had this problem, partly because of the "all instructions are conditional" feature. Even after the clawback of the "never" condition that wasted 1/16 of the available instruction encoding space as NOPs, it was tricky to find places to put new features.
This is a result of the market and its demands, not something specific to the architecture. In desktop and server, customers demand that they can buy a new machine and install a previously released stable OS on it. That means the vendors will implement the necessary standards and cross compatibility to make that happen. In the embedded market, customers don't demand that, and so vendors have no incentive to provide it. Instead what you get is that the specific combined hardware-and-software product works and is shipped with whatever expedient set of hacks gets it out of the door. Having a new cool hardware feature that works somehow or other is more important for sales than whether that driver is upstream or there's a way to describe it in ACPI.
Where Arm is in markets that do demand compatibility (i.e. server) the standards like UEFI and ACPI are there and work. Where it's in markets like embedded, you still see the embedded profusion of different random stuff. Where other architectures are in the embedded market, you also see a wide range of different not very compatible hardware: look at riscv for an example.
It's not a completely non-special character: for instance, in bash it's special inside braces, in the syntax where "/{,usr/}bin" expands to "/bin /usr/bin". But the need to start that syntax with the open brace will remind you to escape a literal comma there if you ever want one.
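A quick bash session illustrating both halves of that (brace expansion is pure text rewriting, so none of these paths need to exist):

```shell
# Outside braces the comma is completely ordinary:
echo a,b                # prints: a,b

# Inside braces it separates alternatives:
echo /{,usr/}bin        # prints: /bin /usr/bin

# A backslash-escaped comma inside braces stays literal:
echo {a\,b,c}           # prints: a,b c
```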