> No one claims that good type systems prevent buggy software. But they do seem to improve programmer productivity.
To me it seems they reduce productivity. In fact, for Rust, which seems to match the examples you gave about locks or regions of memory, the common wisdom is that it takes longer to start a project, but one reaps the benefits later thanks to more confidence when refactoring or adding code.
However, even that weaker claim hasn’t been proven.
In my experience, the more information is encoded in the type system, the more effort is required to change code. My initial enthusiasm for the idea of Ada and SPARK evaporated when I saw how much ceremony the code required.
> In my experience, the more information is encoded in the type system, the more effort is required to change code.
I would tend to disagree. All that information encoded in the type system makes explicit what is needed in any case, and is otherwise only carried informally in people's heads by convention, or maybe in some poorly updated doc or code comment where nobody finds it. Making it explicit and compiler-enforced is a good thing. It might feel like a burden at first, but otherwise you're just closing your eyes and ignoring what can turn out to be important. Changed assumptions become immediately visible. Formal verification just pushes the boundary of that.
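A minimal sketch of what I mean, with hypothetical names and a deliberately naive check:

    // The invariant "this string is a validated email" lives in a type
    // instead of a comment or convention.
    struct Email(String);

    impl Email {
        fn parse(s: &str) -> Option<Email> {
            // Naive validation, for illustration only.
            if s.contains('@') { Some(Email(s.to_string())) } else { None }
        }
    }

    // Functions taking `Email` can rely on the invariant; if the definition
    // of "valid" changes, the compiler points at every construction site.
    fn send(to: &Email) {
        println!("sending to {}", to.0);
    }

    fn main() {
        if let Some(addr) = Email::parse("user@example.com") {
            send(&addr);
        }
    }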
In practice it would be encoded in comments, automated tests and docs, with varying levels of success.
It’s actually similar to tests in a way: they provide additional confidence in the code, but at the same time ossify it and make some changes potentially more difficult.
Interestingly, they also make some changes easier, as long as not too many types/tests have to be adapted.
Capturing invariants in the type system is a double-edged sword.
At one end of the spectrum, the weakest type systems limit the ability of an IDE to do basic maintenance tasks (e.g. refactoring).
At the other end of the spectrum, dependent types and especially sigma types capture arbitrary properties that can be expressed in the logic. But then constructing values of such types requires providing proofs of those properties, and the code and proofs end up inextricably mixed in an unmaintainable mess. This does not scale well: you cannot easily add a new proof on top of existing self-sufficient code without temporarily breaking it.
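A tiny Lean sketch of that construction burden (the positivity invariant is just an illustration):

    -- A subtype (sigma-like): a Nat packaged with a proof that it is positive.
    abbrev Pos := { n : Nat // n > 0 }

    -- Constructing a value means supplying the proof alongside the data.
    def two : Pos := ⟨2, by decide⟩

    -- Strengthening the invariant (say, to n > 1) breaks every construction
    -- site, because each embedded proof must be rewritten to match.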
Like other engineering domains, proof engineering has tradeoffs that require expertise to navigate.
> but one reaps the benefits later thanks to more confidence when refactoring or adding code.
To be honest, I believe it makes refactoring/maintenance take longer. Sure, it's safer, but this is not a one-time-only price.
E.g., you decide to optimize one part of the code and now return a reference, or you change a lifetime: this is an API-breaking change, and you potentially have to fix it recursively up the call chain. Meanwhile, GC languages can mostly get away with a local-only change.
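A sketch of the kind of change I mean (hypothetical type, for illustration only):

    struct Config {
        name: String,
    }

    impl Config {
        // Before (hypothetical): an owned return keeps callers independent.
        // fn name(&self) -> String { self.name.clone() }

        // After: returning a borrow ties the result to `self`'s lifetime.
        // Every caller that stored or forwarded the old String must be fixed,
        // and the change can ripple recursively through their signatures.
        fn name(&self) -> &str {
            &self.name
        }
    }

    fn main() {
        let config = Config { name: "demo".into() };
        let n = config.name(); // now borrows `config` instead of owning a copy
        println!("{n}");
    }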
Don't get me wrong, in many cases this is more than worthwhile, but I would probably not choose Rust for the (n+1)th backend CRUD app, for this and similar reasons.
The choice of whether to use GC is completely orthogonal to that of a type system. On the contrary, being pointed at all the places that need to be recursively fixed during a refactoring is a huge saving in time and effort.
I was talking about a type system with affine types, since the topic was specifically Rust.
I compared it to a statically typed language with a GC, where the runtime takes care of a property that Rust has to enforce statically, at the cost of more complexity.
"In my experience, the more information is encoded in the type system, the more effort is required to change code."
Have you seen large JS codebases? Good luck changing anything in them, unless they are really, really well written, which is very rare. (My own JS code is often a mess.)
When you can change types on the fly, somewhere hidden in the code, that leads to the opposite of clarity for me, and so a lot of effort is required to change something properly, in a way that doesn't create more mess.
a) It’s fast to change the code, but now I have failures in some apparently unrelated part of the code base (JavaScript), and fixing those slows me down.
b) It’s slow to change the code because I have to re-encode all the relationships and semantic content in the type system (Rust), but once that’s done it will likely function as expected.
Depending on the project, one or the other is preferable.
Or: I’m not going to do this refactor at all, even though it would improve the codebase, because it will be near impossible to ensure everything is correct after making so many changes.
To me, this has been one of the biggest advantages of both tests and types. They provide confidence to make changes without needing to be scared of unintended breakages.
Soon a lot of people will go out of their way to try to convince you that Rust is the most productive language, that functions having longer signatures than their bodies is actually a virtue, and that putting .clone(), Rc<>, or Arc<> everywhere to avoid borrow-checker complaints makes Rust easier and faster to write than languages that don't force you to do so.
Of course this is hyperbole, but sadly not by much.
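For the record, the pattern I mean by "putting .clone()/Arc<> everywhere" looks roughly like this (a toy sketch, not anyone's real code):

    use std::sync::Arc;
    use std::thread;

    // Toy sketch of the workaround pattern: wrap shared data in Arc and
    // clone handles so the borrow checker accepts the cross-thread share.
    fn main() {
        let data = Arc::new(vec![1, 2, 3]);
        let worker = {
            let data = Arc::clone(&data); // clones the handle, not the Vec
            thread::spawn(move || data.iter().sum::<i32>())
        };
        println!("{}", worker.join().unwrap() + data.len() as i32);
    }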
Interesting, I was also thinking of the similarities with Jehovah’s Witnesses. It’s as if they somehow got into the building, were offered a job, and now want to convince everyone of the merits of technical salvation.
Rust the technology is not bad, even though it is still complicated like C++, has rather poor usability (also like C++), and is vulnerable to supply-chain attacks.
But some of the people can be very irritating, and the bad apples really spoil the barrel. There’s a commenter below gleefully writing that “C++ developers are spinning in their graves”. They’re probably trolling a bit, and mentioning C++ doesn’t make sense in this kernel context anyway, but such hostile, petty comments are not unheard of.
C++ devs don’t care what the Linux kernel’s written in.
But I did see an interesting comment from another user here which also reflects my feelings: Rust is pushed aggressively, with different pressure tactics. Another comment pointed out that Rust is not about Rust programmers writing more Rust, but “Just like a religion it is about what other people should do.”
I’ve been reading about this Rust-in-the-kernel topic since the beginning, without getting involved. One thing that struck me is the obvious militant approach of the rustafarians: criticizing existing maintainers (particularly Ts’o and other objectors) and implying they’re preventing progress or are out of touch.
The story feels more like a hostile takeover attempt than technology.
I also think that many C or C++ programmers don’t bother posting in these topics, so they’re at least partially echo chambers.
That's how it feels to me. There are crucial issues, namely that there is no spec and there is only one implementation. I don't know why Linus is ok with this. I'd be fine with it if those issues were resolved, but they aren't.
> There are crucial issues, namely that there is no spec and there is only one implementation. I don't know why Linus is ok with this.
I can try to provide a (weak) steelman argument:
Strictly speaking, neither the lack of a spec nor a single implementation has been a blocker for Linux's use of tools and features, either now or in the past. Non-standard GCC extensions and flags aren't exactly rare in the kernel, and Linus has been perfectly fine with the single non-standard implementation of those. Linus has also stated in the past (paraphrasing) that what works in practice is more important than what standards dictate [0]. Perhaps Linus feels that what Rust does in practice is good enough, especially given its (currently) limited role in the kernel.
Granted, not having a spec for individual flags/features is not equivalent to not having a spec for the language, so it's not the strongest argument as I said. I do think there's a point nestled in there though - perhaps what happens on the ground is the most important factor. It probably helps that there is work being done on both the spec and multiple implementation fronts.
Business and Enterprise plans have a no-training-on-your-data clause.
I’m not sure personal Claude has that. My account has the typical bullshit verbiage with opt-outs where nobody can really know whether they’re enforceable.
Using a personal account is akin to sharing the company code and could get one in serious trouble IMO.
You can opt out of having your code trained on. When Claude Code first came out, Anthropic wasn't using CC sessions for training. They started training on them with Claude Code 2, which came out alongside Sonnet 4.5. Users are asked on first use whether to opt in or out of training.
They need to have very strict security clearance requirements and maintain them throughout the life of the project or their tenure. People don’t realize this isn’t some little embedded app you throw on an ESP32.
You’ll be interviewed, as will your family, your neighbors, your schoolteachers, your past bosses, your cousin once removed, your sheriff, your past lovers, and even your old childhood friends. Your life goes under a microscope.
I went through the TS positive vetting process (for signals intelligence, not writing software for fighter jets, but the process is presumably the same).
If I were back on the job market, I’d be demanding a big premium to go through it again. It’s very intrusive, puts significant limitations on where you can go, and adds significant job uncertainty (since your job is now tied to your clearance).
Not to mention embedded software often pays half of what a startup does, and defense software often isn't work-from-home. Forget asking what languages they can hire for. They are relying on the work being interesting to compensate for dramatically less pay and substantially less pleasant working conditions. Factor in that some portion of the workforce has ethical concerns about working in the sector, and you can see they will get three sorts of employees: those who couldn't get a job elsewhere, those who want something cool on their resume, and those who love the domain. And they will lose the middle category right around the time they become productive members of the team, because it was always just a stepping stone.
Yes, but like a certification, that clearance is yours, not the company's. You take it with you, and it lasts a good while. There are plenty of government contractors that would love to have you if you had one: Northrop, Lockheed, Boeing, etc.
An engineering degree and a TS is basically a guaranteed job. They might not be the flashiest FAANG jobs, but it is job security. In this downturn, where people talk about being unable to find jobs for years in big cities, I look around my local area and Lockheed, BAE, Booz Allen, etc. have openings.
My issue is you end up dealing with dopes who don't want to learn, just want to milk the money and the job security, and actively fight you when you try to make things better. Institutionalized.
While getting lunch at an Amazon tech day a couple of years ago, I overheard somebody talking about how easy it was to place somebody with a clearance and AWS certifications. Now, this was Washington, DC, but I doubt it's the only area where that's true.
And yet my experience looking at the deluge of clearance-required dev jobs from defense startups in the past couple of years is that there is absolutely no premium at all for clearance-required positions.
I was once interviewed by the FBI because a buddy was applying for a security clearance. One thing they asked was, "have you ever known XXX to drink excessively", to which I replied, "we were fraternity brothers together, so while we did often drink a lot, it needs to be viewed in context".
The exact opposite of what you suggest already happened: Ada was mandated and then the mandate was revoked.
It’s generally a bad idea to be the only customer of a specific product, because it increases costs.
> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++
What’s the problem with the F-35 and combat readiness? Many EU countries are falling over each other to buy it.
Maybe the EU shouldn’t have transformed themselves into US vassals then.
Nobody respects weakness, not even an ally. Ironically showing a spine and decoupling from the US on some topics would have hurt more short term, but would have been healthier in the long term.
> Maybe the EU shouldn’t have transformed themselves into US vassals then.
I share the same opinion. If you're (on paper) the biggest economic bloc in the world, but you can be so easily bullied, then you already failed more than 20 years ago.
But I don't think it was bullying; rather, the other way around. EU countries were just buying favoritism for US military protection, because it was still way cheaper than ripping off the bandaid and building their own domestic military industry of similar power and scale.
Most defense spending follows the same motivation. You're not seeking to buy the best or cheapest hardware; you seek to buy powerful friends.
Much of the existing European F-35 fleet predates Trump's first term. In fact, now quite the opposite is happening: other options from reliable partners are being eyed, even if they are technically inferior.
The pilots might have reassessed after Pakistan seemed to have shot three of them down from over 200 km range. An intel failure was blamed, but there were likely many factors, some of which may be attributed to the planes.
I worded that poorly: it was Rafales that were allegedly shot down. After that happened, perhaps the pilots who wanted them over F-35s have a different opinion. F-35s might be harder to get a lock on at that distance and might have better situational-awareness capabilities.
> What’s the problem with the F35 and combat readiness?
For example, the UK would like to use its own air-to-ground missile (the SPEAR missile) with its own F-35 jets, but it's held back by Lockheed Martin's Block 4 software update delays.
Comparing accurate communication with magic is nonsense.
Both in Europe and the US, governments badly screwed up both strategic mask stockpiles and procurement. Therefore, the official message was that “masks don’t work”.
After they were finally able to procure masks, they magically started working.
That is the real magic, not demanding competence from people whose job was literally to not fuck this up.
Meanwhile China and South Korea were producing and using masks as was normal.
The second magical part is the gaslighting about the performance of the institutions tasked with pandemic preparation, and about the exaggerated and incompetent government measures: fining people for going outside, forbidding people from going to work without being vaccinated or mandatorily tested each day, etc.
Vaccine safety issues were consistently downplayed by the media and in internet forums like this one. In the end, the EU-CDC published clear information on the safety of the AstraZeneca vaccine, and its safety profile was much worse than that of the mRNA vaccines. One mRNA vaccine was worse than the other.
“Doesn’t kill you” is the absolute bare minimum and a very low bar. Given how rushed the vaccines were, it’s still reassuring, but it’s not at all a testament to the safety of mRNA vaccines.
The more interesting studies will be about non-lethal adverse reactions: changes to menstruation, heart problems, and lymph node swelling, to name just a few.
Fortunately doctors and medical organizations usually take these matters seriously, unlike the average techbro. A good example is how the vaccination recommendations were changed to avoid Moderna for young men to reduce the risk of heart problems.
They rival Rust in the same way that Golang and Zig do: they handle more and more memory-safety bugs, to the point that the delta to Rust’s additional memory-safety benefits no longer justifies the cost of using Rust.
Zig does not approach, and does not claim to approach, Rust's level of safety. These are completely different ballparks, and Zig would have to pivot monumentally for that to happen. Zig's selling point is about being a lean-and-mean C competitor, not a Rust replacement.
Golang is a different thing altogether (garbage collected), but it still somehow managed to have safety issues.