Other languages debate for years because they have a customer base with important applications in production who don't find it funny when their code is broken by experiments.
Zig is years away from becoming industry relevant, if it ever does, so of course they can experiment all they like.
It's hard to imagine Zig ever becoming stable and conservative. Even at 10 years old, it's still as beta as ever. At some point the churn becomes part of the culture.
Not a complaint, just an observation. I like that they are trying new things.
I wouldn't be so sure about that. I do think there's a bit of scope creep, especially with the LLVM replacement stuff, but I don't think it's bad enough for the language to never come out. Most notable languages have at least one large corporate sponsor behind them; Zig doesn't.
I'm a casual user and the 0.16 changes scare me. I've made multiple attempts to upgrade now, even with full LLM support, and the result is a) painful and b) not very good. I have serious doubts that the current I/O system in 0.16 will survive another release, given the consequences of their choices.
1. If you're a casual user (i.e. you don't follow development), don't try incomplete APIs whose own creators don't yet fully know how they're supposed to work (because they're still tinkering with them). Also, you can't expect docs until the design is somewhat finalized (which it isn't yet, FYI).
2. LLMs don't help when trying to make sense of the above (a feature that is not complete, that has no docs beyond some hints in commit messages, and that changes every other commit). Reserve LLMs for when things are stable and well documented; otherwise they will just confuse you further.
If you want to try new features before they land in a tagged release, you must engage with the development process at the very least.
> If you're a casual user (i.e. you don't follow development), don't try incomplete APIs whose own creators don't yet fully know how they're supposed to work
Is the completeness of each API formally documented anywhere? Maybe I missed something but it doesn't seem like it is, in which case the only way to know would be to follow what's happening behind the scenes.
Zig cuts releases. This API is not on a release of Zig yet. It's only available through nightly builds of master. "Casual users" should stick to releases if they don't want to deal with incomplete APIs.
> If you're a casual user (i.e. you don't follow development), don't try incomplete APIs whose own creators don't yet fully know how they're supposed to work
From what I can tell, pretty much everything can be broken at any point in time. So really the only actual advice here is not to use the language at all, which is not reasonable.
> LLMs don't help when trying to make sense of the above
That has not been my experience. LLMs were what enabled me to upgrade to 0.16 experimentally at all.
> If you want to try new features before they land in a tagged release, you must engage with the development process at the very least.
No, that is unnecessary gatekeeping. 0.16 will become stable at some point, and I don't want to wait until then to figure out what will happen. That's not how I used Rust when it was early (I always tried the nightlies too), and that line of thinking just generally does not make any sense to me.
The reality is that Zig has very little desire to stabilize at the moment.
The flipside of that is that the incomplete API should be in a separate branch until it is ready to be included in a release, so that people can opt in instead of having to keep in mind what parts of the API they aren't supposed to be using. It doesn't seem like you expect the changes to be finalised in time for 0.16.
> Instead of debating for years (like other languages), zig just tries things out.
Many other languages do try things out; they just do it in a separate official channel from the stable release, or in unofficial extensions. Depending on how many users the language has, that may still yield more implementation experience than Zig gets by making all of its developers try it.
I suspect the actual difference is the final decision-making process rather than the trial process. In C++, language extensions are tried out first (implementation experience is a requirement for standard acceptance) but committee debates drag on for years. Whereas Python also requires trial periods outside the stable language, but decisions are made much more quickly (even now that there's a steering council rather than a single BDFL).
This is a great point, and it's actually something I really enjoy about how the JVM and Java do things nowadays: new experimental APIs are namespaced, you test them from release to release, and then they're stabilized and become broadly available.
> Instead of debating for years (like other languages), zig just tries things out.
Good
> Worst case you can always rollback changes.
No, you cannot. People will leave en masse. In Perl they announced experiments with a mandatory 'use experimental :feature'. You couldn't publish modules using those, or else you were at risk.
This made Perl exciting and fresh. Python, on the other hand, constantly broke APIs, and had to invent package version locks and "safe" venvs. Which became unsafe, of course.
Languages and stdlibs are not playgrounds. We saw what came of that with C and C++, where horrible mistakes got standardized.
I thought it was stable enough initially, but they completely broke the fuzz testing feature and didn't fix it.
Also, the build system API and some other APIs keep changing, and it is super annoying.
I find it much better to use C23 with _BitInt integers, some macros, and context passing for error handling.
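Roughly the style I mean, as a minimal sketch (the names err_ctx and parse_u12 are just made up for illustration, not from any particular codebase):

    /* C23: exact-width _BitInt plus an explicitly passed error context
       instead of errno or exceptions. All names here are illustrative. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool failed;
        const char *msg;            /* last error message, if any */
    } err_ctx;

    static void fail(err_ctx *e, const char *msg) {
        e->failed = true;
        e->msg = msg;
    }

    typedef unsigned _BitInt(12) u12;   /* exactly 12 bits, courtesy of C23 */

    static u12 parse_u12(const char *s, err_ctx *e) {
        unsigned v = 0;
        for (; *s; ++s) {
            if (*s < '0' || *s > '9') { fail(e, "not a digit"); return 0; }
            v = v * 10 + (unsigned)(*s - '0');
            if (v > 4095) { fail(e, "out of range for 12 bits"); return 0; }
        }
        return (u12)v;
    }

    int main(void) {
        err_ctx e = {0};
        u12 x = parse_u12("4096", &e);
        if (e.failed)
            printf("error: %s\n", e.msg);
        else
            printf("parsed %u\n", (unsigned)x);
    }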
Also, some things like stack traces were broken in small ways in Zig. It would report wrong lines in stack traces when compiling with optimizations. I also wasn't able to cleanly collect stack traces into strings in a production build.
It is understandable that breaking APIs is good for development, but in most cases the new API isn't that good anyway.
And recently I saw they even moved the time/Instant API to some other place. This kind of thing is just super annoying, with seemingly no benefit. They could have left the same API there and reused it from somewhere else. But no, it has to be made “perfect”.
It sounds like you expected 1.0 stability from a language that isn't 1.0.
> I thought it was stable enough initially, but they completely broke the fuzz testing feature and didn't fix it.
From the 0.14.0 release notes:
> Zig 0.14.0 ships with an integrated fuzzer. It is alpha quality status, which means that using it requires participating in the development process.
How could we possibly have been more explicit?
Fuzzing will be a major component of Zig's testing strategy in the long term, but we clearly haven't had the time to get it into shape yet. But we also didn't claim to have done!
> Also, some things like stack traces were broken in small ways in Zig. It would report wrong lines in stack traces when compiling with optimizations. I also wasn't able to cleanly collect stack traces into strings in a production build.
I mean, to be fair, most compiled languages can't give you 100% accurate source-level stack traces in release builds. But that aside, we have actually invested quite a lot of effort into std.debug in the 0.16.0 release cycle, and you should now get significantly better and more reliable stack traces on all supported platforms. If you encounter a case where you don't, file a bug.
> And recently I saw they even moved the time/Instant API to some other place. This kind of thing is just super annoying, with seemingly no benefit. They could have left the same API there and reused it from somewhere else. But no, it has to be made “perfect”.
I acknowledge that API churn can be annoying, but it would be weird not to aim for perfection prior to 1.0.
This is my favourite way to iterate, but the hard lesson is that at some point, after trying a bunch of things, comes the Big Cleanup (tm). Is that a potential issue with Zig?
> You mean like how Rust tried green threads pre-1.0? Rust gave this one up because it made the runtime too unwieldy for embedded devices.
The idea with making std.Io an interface is that we're not forcing you into using green threads - or OS threads for that matter. You can (and should) bring your own std.Io implementation for embedded targets if you need standard I/O.
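Conceptually (leaving aside the exact shape of std.Io, which is still changing), the idea is just "pass your I/O implementation around as an interface". A rough C-style analogue of that pattern, with made-up names, not Zig's actual API:

    /* Library code depends only on a small vtable of callbacks; the embedder
       decides what backs it (blocking syscalls, an event loop, a UART, ...).
       Conceptual sketch only, not Zig's std.Io. */
    #include <stddef.h>
    #include <stdio.h>

    typedef struct io_iface {
        void *ctx;
        size_t (*write)(void *ctx, const void *buf, size_t len);
    } io_iface;

    static void greet(io_iface io) {          /* knows nothing about the backend */
        const char msg[] = "hello\n";
        io.write(io.ctx, msg, sizeof msg - 1);
    }

    /* One possible backend: stdio. An embedded target could plug in a UART
       write here instead, with no changes to greet(). */
    static size_t stdio_write(void *ctx, const void *buf, size_t len) {
        return fwrite(buf, 1, len, (FILE *)ctx);
    }

    int main(void) {
        io_iface io = { .ctx = stdout, .write = stdio_write };
        greet(io);
    }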
Ok. But if your program assumes green threads and spawns, like, two million of them on a target that doesn't support them, then what?
The nice thing about async is that it tells you threads are cheap to spawn. By making everything colourless you implicitly assume everything is a green thread.
Rust has stable vs nightly. Nightly tries things and makes no guarantees about future compatibilities. If you need code that builds forever, you stay on stable. There's no reason Zig couldn't have the same.
Except that many component manufacturers release their EFI capsules signed with the Microsoft PKI. So no, you can't fully remove them if you want to verify updates.
While "So no, you can't fully remove them if you want to verify updates" is a valid point, it's also an answer to a different question than the one asked.
If you're interested in the topic, there's a great YouTube channel that demonstrates such attacks IRL, together with full tutorials. Below are two satellite-related videos:
Not the best name for the article. My first guess was version changes, or software being added to/removed from the repos. Turns out this is about source code modification.
As a native (British) English speaker, I was also unclear until reading the article.
Personally, I believe s/change/modify would make more sense, but that's just my opinion.
That aside, I'm a big fan of Debian. It has always "felt" quieter as a distro to me compared to others, which is something I care greatly about; and it's great to see that removing phone-home behaviour is a core principle.
All the more reason to have a more catchy/understandable title, because I believe the information in those short and sweet bullet points is quite impactful.
Patching out privacy issues isn't in Debian Policy; it's just part of Debian's culture. But there are still unfixed/unfound issues too, so it is best to run opensnitch to mitigate some of those problems.
> it is best to run opensnitch to mitigate some of those problems
Opensnitch is a nice recommendation for someone concerned about protecting their workstation(s); for me, I'm more concerned about the tens of VMs and containers running hundreds of pieces of software that are always-on in my Homelab, a privacy conscious OS is a good foundation, and there are many more layers that I won't go into unsolicited.
Homelabs also usually run software that isn't from a distro, so there are potentially more privacy issues there. Firewalling outgoing traffic, along with a filtering proxy like Privoxy, might be a good start.
Me too. I was hoping for an explanation of why software I have got used to, that works very well and isn't broken, keeps being removed from Debian in the next version because it is "unmaintained".
A problem I encountered while writing a custom stdlib is that certain language features expect the stdlib to be there.
For example, the <=> operator assumes that std::partial_ordering exists. Kinda lame. In the newer C++ standards, more and more features are unusable without the stdlib (or at least the std namespace).
At least you have the chance to implement your own std::partial_ordering if necessary; in most languages those kinds of features would be built into the compiler.
Haskell solves this nicely: your own operators just shadow the built-in operators. (And you can opt to not import the built-in operators, and only use your own. Just like you can opt not to import printf in C.)
That's not the issue; the problem is that the operator is required by the language to return a type from the stdlib, so you have to pull in the stdlib to get that type.
In Haskell, when you define your own operators (including those that have the same name as those already defined in the standard library), you can specify your own types.
There are basically no operators defined 'by the language' in Haskell; they are all defined in the standard library.
(Of course, the standard library behaves as-if it defines eg (+) on integers in terms of some intrinsic introduced by the compiler.)
Sometimes standard library types are defined in terms of compiler built-ins, like `typedef decltype(nullptr) nullptr_t`, but that doesn't always make sense. E.g. for operator<=> the only alternative would be for the compiler to define std::partial_ordering internally, but what is gained by that?
Well, just the idea that you can use the entire core language without `#include`'ing any headers or depending on any standard-library stuff, is seen as a benefit by some people (in which I include myself). C++ inherited from C a pretty strong distinction between "language" and "library". This distinction is relatively alien to, say, Python or JavaScript, but it's pretty fundamental to C that the compiler knows how to do a bunch of stuff and then the library is built _on top of_ the core language, rather than alongside it holding its hand the whole way.
    using strong_ordering  = decltype(1 <=> 2);    // int <=> int yields std::strong_ordering
    using partial_ordering = decltype(1. <=> 2.);  // double <=> double yields std::partial_ordering
But it remains impossible, AFAIK, to define `weak_ordering` from within the core language. Maybe this is where someone will prove me wrong!
As of C++14 it's even possible to define the type `initializer_list` using only core-language constructs:
    template<class T> T dv();   // declval-style helper; never defined
    template<class T> auto ilist() { auto il = { dv<T>(), dv<T>() }; return il; }   // il deduces to std::initializer_list<T>
    template<class T> using initializer_list = decltype(ilist<T>());
(But you aren't allowed to do these things without including <compare> resp. <initializer_list> first, because the Standard says so.)
Note that even for C the dependency from compiler to standard library exists in practice because optimizing compilers will treat some standard library functions like memcpy specially by default and either convert calls to them into optimized inlined code, generate calls to them from core language constructs, or otherwise make assumptions about them matching the standard library specification. And beyond that you need compiler support libraries for things like operations missing from the target architecture or stack probes required on some platforms and various other language and/or compiler features.
But for all of these (including the result types of operator<=>) you can define your own version so it's a rather weak dependency.
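To make the memcpy point concrete, a small sketch (what actually happens depends on compiler, target and flags):

    /* This translation unit never includes <string.h>, yet most optimizing
       compilers will lower the struct copy below into a call to memcpy.
       In a freestanding build you then have to provide memcpy yourself,
       or the link fails. */
    typedef struct { char bytes[4096]; } big;

    void copy_big(big *dst, const big *src) {
        *dst = *src;    /* plain assignment; no library call in the source */
    }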
> C++ inherited from C a pretty strong distinction between "language" and "library".
This is long gone in ANSI/ISO C, as there are language features that require library support: floating point emulation, threading, memory allocation (tricky without using assembly), among others.
Which is why the freestanding subset exists, or one otherwise has to call into OS APIs as an alternative to the standard library, as happens on non-UNIX/POSIX OSes.
Both AMD and Google note that Zen 1-4 are affected, but what changed about Zen 5? According to the timeline, it was released before Google notified AMD [1].
Is it using different keys but the same scheme (and could it possibly be broken via side channels, as noted in the article)? Or perhaps AMD noticed something and changed the microcode? Some clarification on that part would be nice.
Honestly I don't get why people are hating this response so much.
Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of letting the email rot in spam) and deployed a fix.
> we've fundamentally restructured our security practices to ensure this scenario can't recur
People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".
To everyone saying "how can you be sure that it will NEVER happen": maybe because they removed all full-privilege admin tokens and are only using scoped tokens? This is a small misdirection; they aren't saying "vulnerabilities won't happen", but that "exactly this one" won't.
So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd take from this is to use less "enterprise" language in security topics (or people will eat you alive in the comments).
Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.
To be precise: you don't need to be in the sudo group, but in the lpadmin group. I'm not familiar with how Ubuntu groups are set up, but I guess it's likely that lpadmin is only granted to administrators by default.
That said, I'm guessing people aren't expecting lpadmin to mean a full privilege escalation to root.
There are two bugs here: one in cups, which allows it to chmod anything 777 (doesn't properly check for symlinks, or for the failure of bind), and one in wpa_supplicant, which lets it load arbitrary .so files as root. However, I suspect that even if these bugs are fixed, having access to lpadmin will still be a powerful enough primitive to escalate to root given the rather sizable attack surface of cups.
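For reference, the usual defence against that symlink bug class is to stop chmod'ing attacker-influenced paths and operate on an opened descriptor instead. A rough sketch of the pattern (not CUPS's actual code):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Refuse symlinks (open fails with ELOOP under O_NOFOLLOW) and anything
       that isn't a regular file, then chmod the descriptor, not the path. */
    int chmod_no_symlink(const char *path, mode_t mode) {
        int fd = open(path, O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0 || !S_ISREG(st.st_mode)) {
            close(fd);
            errno = EPERM;
            return -1;
        }

        int rc = fchmod(fd, mode);
        close(fd);
        return rc;
    }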
It became crystal clear that cups is a can of worms, and it would be prudent to completely replace it with a new solution built from the ground up, ideally using modern tools and standards.
Or just sandbox cups. There's no reason cups needs to write anything beyond its "configuration" and its "print spool", and hence it shouldn't have access to anything beyond what it needs to configure itself and print.
Something like cups should be easy to sandbox, especially if we allow dbus-like APIs as a means to cross sandbox boundaries (i.e. an RPC mechanism).
And by sandbox, I don't mean simply using apparmor-type rules (though those can work), but a cups that lives within its own filesystem where nothing else is even visible.
I.e. programs will always be buggy; even if we get rid of all language-oriented bugs, there will still be logic bugs that result in security holes. We just need to make it easy to isolate programs (and services) into their own sandboxes while retaining the ability for them to interact (as otherwise we lose much of the value of modern systems).
In practice, I would argue, a lot of modern systems do this already (à la iOS/Android). The apps run sandboxed and have only restricted abilities to interact with each other.
That's sort of the direction they're going with CUPS 3. The 'local server', which is what most people will need, runs as a normal user, not root, doesn't listen on the internet, and talks only the IPP Everywhere protocol. For supporting legacy printers, there will be separate sandboxed 'printer applications' which accept IPP Everywhere, run the driver code, and communicate with the backend printer using an appropriate protocol.
For enterprise users there will be a separate 'sharing server'.
I'd prefer they make it the default to not install it. I don't need to print from Linux. I don't print from Windows or macOS much either. Less than once a year. But I particularly don't print from Linux. I suspect that's true for most people. cups shouldn't be a default install, IMO.
What percentage is "most" people? What about enterprise users with complex setups/requirements: will they be supported or out of luck? Typically you'll have print servers with centralized authentication, possibly logging/auditing/billing, and this might depend on "the" component they'll leave out of the new product because, well, most people don't care about it...
> What percentage is "most" people? What about enterprise users with complex setups/requirements: will they be supported or out of luck? Typically you'll have print servers with centralized authentication, possibly logging/auditing/billing, and this might depend on "the" component they'll leave out of the new product because, well, most people don't care about it...
But the old, complex cups doesn't go away if a new, sandboxed version is developed, so the people who want the complexities can evaluate whether the security trade-off is worth it, and use it anyway if so.
What if you don't need cups because you don't print anything?
Just sudo apt remove cups right?
No, because cups is a dependency of the entire graphical subsystem; just removing cups also removes everything from the Nautilus file manager to Firefox to ubuntu-desktop itself.
This might be wrong, but based on my own experience of Ubuntu effectively uninstalling itself when I tried to remove a single package.
I think most of the default software gets installed as one large package group, rather than as individual pieces of software. Only the group is marked as manually installed, but the individual programs pulled in by that group are marked as automatically installed. If you try to apt install something you already have as part of the default distro software, you'll usually see a message saying something like "marked as manually installed."
When you go to uninstall one program from the group, that one program is uninstalled as requested, but the group itself has to be marked as uninstalled, since you've removed one of that group's "dependencies" and thus can no longer satisfy that group's installation requirements. You now have a load of software that was automatically installed as dependencies of another package, but are no longer dependencies of any manually-installed packages. The next time you run apt autoremove, it'll remove all of those automatically-installed components and leave you with an almost bare system.
To expand on this: if the user is in the sudo group, they have explicit permission to execute anything they like as root. If someone wants a user to not be able to do this, they don't put that user in the sudo group. As far as I can tell from the write-up, if you remove a user from the sudo group because you don't want them to have that privilege then this "exploit" won't work.
The bugs found look correct and have security implications, but what is demonstrated is therefore not really "root privilege escalation" since it applies only to users who already have that privilege.
> They can execute anything they like as root... by entering their password.
If something has control of your user account, it can just arrange to wrap your shell prompt and wait for you to sudo something else. The sudo password prompt in its default arrangement doesn't really provide much security there, and isn't expected to.
On a properly configured server you'll be waiting forever, because the users actually running the applications on that server aren't the same users who have privileges to make changes to the system or have access to stuff like sudo. So if you take over the nginx/postgres/whatever user, you're not really going to get anywhere.
On the other hand you probably don't need to. Those users already expose all the juicy data on the server. You don't gain much from obtaining root anyways, except better persistence.
This attack might be more interesting when chained with some other exploit that gains access to a user's system via their e-mail client or browser. In other words, nice if you're NSO Group making exploits for targeting individuals, but not that useful if you're trying to build a botnet.
That's not really relevant nowadays. Most attacks are done indiscriminately and en-masse, so an attacker wouldn't have to wait very long in practice.
Only in "advanced persistent thread" territory is your point really relevant, but the attack I describe is much more widely applicable. Having to wait a while is therefore not in any way a mitigation. In practice then, one cannot assume any security from sudo requiring a password.
Somewhat tangentially, I will say that Touch ID-based sudo is a real upgrade over password sudo. It still gives you that extra moment to reflect on whether you really want to run that command (unlike passwordless sudo), without being burdensome.
If that is your attitude, why bother with the sudo group at all? Just run as root.
(For what it's worth, I think most people would not lose much security from running as root, and the obsession with sudo is so much security theater, for exactly this sort of reason.)
User accounts and sudo give you auditing of who is doing what. There are other ways, sure, but checking auth.log is the simplest.
And while lpadmin users can escalate, I’m more interested in escalations from services like web servers or whatever, running as low priv users. I use sudo to allow scripts running as my web server to run specific limited privileged programs as a simple layer of defence.
Use sudo instead of root. Change the ssh port. Disable ssh passwords. Use some port knocking scheme or fail2ban. Close all ports with a firewall. Some of those requirements might come from some stupid compliance rules, but usually they just come from rumors and a lack of clear understanding of the threat model. And sometimes they might even slightly reduce security guarantees (like changing the ssh port to 10022; there's actually a security reason why only root can bind to ports below 1024, and by changing it to a higher port you lose some tiny protection that unix greybeards invented for you).
I'm not saying that all those measures are useless; in fact I often change the ssh port myself. But I'm doing that purely to reduce log spam, not because it's necessary for security. Configuring a firewall might be necessary in rare cases when brain-dead software must listen on 0.0.0.0 but must not be available from outside. But that's not a given and should be decided on a case-by-case basis, rather than applied blindly.
Honestly, sudo’s value is really sanity, not security.
The first time you use certain flavors of sudo, you get a nice little message which reminds you why sudo exists:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
Realistically, sudo exists to remind a user of these points. That is: by needing to type “sudo” before a command, you’re being reminded to pay closer attention that you’re not violating another user’s privacy or doing something that’s going to break your system.
Sudo is so commonly used, especially on developer machines, that I think it is used reflexively without any thought at all.
It should not be, but that's a different issue. It amazes me the number of open-source projects that want to be installed with "sudo" when there is no reason they should not be able to be built and used entirely from within the developer's home directory.
I know more than one person who starts a shell session with "sudo -i" and then just works as root because typing "sudo" all the time is an annoyance.
I wonder if this comes from how some developers view ops knowledge and tasks as merely ancillary to their interests and work.
For me, Linux was a hobby prior to and separately from programming. In the tutorials and documentation I read, every command was explained in detail and it was emphasized to me that I should never run a command that I don't fully understand. All instructions to elevate privileges were accompanied with advice about being careful running commands as root because root's privileges are especially dangerous. I was interested in those warnings, and took them seriously, because I wanted to master what I was learning. What I was learning, though, explicitly included Linux/Unix norms like security 'best practices'.
Developer documentation doesn't usually concern itself with Linux/Unix norms the way that tutorials for Linux hobbyists and novice sysadmins do. At the same time, the developers reading it might be perfectly dedicated to mastery, but just not really see what sysadmins consider proper usage (let alone the considerations that inform the boundaries of such propriety) as on-topic for what they're studying/exploring/playing with. Diving into those details might not be 'part of the fun' for them.
What such a developer learns about sudo is mostly going to come from shallow pattern recognition: sudo is a button to slap when something doesn't work the first time, and maybe it has something to do with permissions.
But I think that comes from the mode of engagement, especially at the time of learning sudo, more than the mere frequency of use. I use sudo several times every day (including sometimes interactive sessions like you mention, with -i or -s), but I am careful to limit my usage to cases where it's really required. I'm not perfect about that; occasionally I run `sudo du` when `du` would suffice because I pulled something out of my shell history when I happened to be running it from / or whatever. But I certainly don't run it reflexively or thoughtlessly.
If you want to manage VMs, then you're probably using Terraform plus a provider. However, SDN (Software Defined Networking) is not yet supported [1], which makes any kind of deployment with network separation infeasible (using IaC only).
IMO the best APIs and designs are those that are battle-tested by end users, not won in an argument war during committee meetings.
This makes zig unique. It's fun to use and it stays fresh.
You can always just stay on an older version of Zig. But if you choose to update to a newer version, you get new tools to make your code tidier/faster.