I probably read this the wrong way around. I thought the question was about how the huge Chinese unions are involved in the nation's technological ascendancy. Which I really would like to learn more about!
Reading the history of shipping, unions played a huge role in the slower transition from labor-intensive general shipping to containerized shipping.
Multiple steps (staging goods in a warehouse near the ship, preparing them for loading, manually loading and packing the hold, and more) became just: drive the containers to the dock and load them when the ship shows up.
It's a factor but it's also true of non-union jobs. Politicians are against any short-term pain because they're afraid they won't be elected. But it's just a part of the culture, US voters don't like it either (even if they're not the ones facing potential job loss)
It really makes me upset that we are throwing away decades of battle tested code just because some people are excited about the language du jour. Between the systemd folks and the rust folks, it may be time for me to move to *BSD instead of Linux. Unfortunately, I'm very tied to Docker.
That “battle-tested code” is often still an enduring and ongoing source of bugs. Maintainers have to deal with the burden of working in a 20+ year-old code base with design and architecture choices that probably weren’t even a great idea back then.
Very few people are forcing “rewrite in rust” down anyone’s throats. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell), sometimes people are taking existing projects and porting them just to scratch an itch and it’s others’ decisions to start shipping it (e.g., coreutils). I genuinely fail to see the problem with either approach.
C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some projects are going to be behind the curve. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project on C indefinitely. If you don’t think {pet project you care about} should be in C in 50 years, there will be a moment where people rewrite it. It will be immature and not as feature-complete right out the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle tested, what’s the rush” argument can and will be used reflexively against both of those timelines.
As long as LLVM (C++, but still) is not rewritten in Rust [0], I don't buy it. C is like JavaScript: it's not perfect, but it's everywhere, and you cannot replace it without a lot of effort and bugfix/regression tests.
Take sqlite for example (25 years old [3]): there are already two rewrites in Rust ([1] and [2]), and each one has its own bugs.
And as an end user I'm more inclined to trust the battle-tested original for my prod than its copies. As long as I don't have proof that the rewrite is at least as good as the original, I'll stay with the original. Simpler equals more maintainable. That's also why the sqlite maintainers won't rewrite it in any other language [4].
The trade-off of Rust is: "you can lose features and have unexpected bugs like in any other language, but don't worry, they will be memory-safe bugs".
I'm not saying rust is bad and you should not rewrite anything in it, but IMHO rust programmers tend to overestimate the quality of the features they deliver [5] or something along these lines.
systemd has been the de facto standard for over a decade now and is very stable. I have found that even most people who complained about the initial transition are very welcoming of its benefits now.
Depends a bit on how you define systemd. Just found out that the systemd developers don't understand DNS (or IPv6). Interesting problems result from that.
> Just found out that the systemd developers don't understand DNS (or IPv6).
Just according to Github, systemd has over 2,300 contributors. Which ones are you referring to?
And more to the point, what is this supposed to mean? Did you encounter a bug or something? DNS on Linux is sort of famously a tire fire, see for example https://tailscale.com/blog/sisyphean-dns-client-linux ... IPv6 networking is also famously difficult on Linux, with many users still refusing to even leave it enabled, frustratingly for those of us who care about IPv6.
Systemd-resolved invents DNS records (not really something you would like to see, makes debugging DNS issues a nightmare). But worse, it populates those DNS records with IPv6 link local addresses, which really have no place in DNS.
Then, after a nice debugging session into why your application behaves so strangely (all the data in DNS is correct, so why doesn't it work?), you find that this issue has been reported before and was rejected: won't fix, works as intended.
Hm, but systemd-resolved mainly doesn't provide DNS services, it provides _name resolution_. Names can be resolved using more sources than just DNS, some of which do support link-locals properly, so it's normal for getaddrinfo() or the other standard name resolution functions to return addresses that aren't in DNS.
i.e. it's not inventing DNS records, because the things returned by getaddrinfo() aren't (exclusively) DNS records.
The debug tool for this is `getent ahosts`. `dig` is certainly useful, but it makes direct DNS queries rather than going via the system's name resolution setup, so it can't tell you what your programs are seeing.
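A quick illustration of the layering (a minimal Python sketch; `socket.getaddrinfo` wraps the same glibc resolution path as C's getaddrinfo()):

```python
import socket

# socket.getaddrinfo() goes through the system resolver (glibc/NSS on
# Linux), so it consults every configured source -- /etc/hosts, NSS
# modules, and only then DNS. "localhost" is the classic case: it usually
# resolves from /etc/hosts or an NSS module, while a direct DNS query
# (which is what `dig` sends) may return nothing of the sort.
infos = socket.getaddrinfo("localhost", None)
addrs = sorted({info[4][0] for info in infos})
print(addrs)  # typically 127.0.0.1 and/or ::1
```

Which is why `getent ahosts localhost` and `dig localhost` can legitimately disagree: they ask different layers.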
systemd-resolved responds on port 53. It inserts itself in /etc/resolv.conf as the DNS resolver that is to be used by DNS stub resolvers.
It can do whatever it likes as long as it follows the DNS RFCs when replying to DNS requests.
Redefining recursive DNS resolution as general 'name resolution' is indeed exactly the kind of horror I expect from the systemd project. If systemd-resolved wants to do general name resolution, then just take a different transport protocol (dbus for example) and leave DNS alone.
It's not from systemd though. glibc's NSS stuff has been around since... 1996?, and it had support for lookups over NIS in the same year, so getaddrinfo() (or rather gethostbyname(), since this predates getaddrinfo()!) have never just been DNS.
systemd-resolved normally does use a separate protocol, specifically an NSS plugin (see /etc/nsswitch.conf). The DNS server part is mostly only there as a fallback/compatibility hack for software that tries to implement its own name resolution by reading /etc/hosts and /etc/resolv.conf and doing DNS queries.
I suppose "the DNS compatibility hack should follow DNS RFCs" is a reasonable argument... but applications normally go via the NSS plugin anyway, not via that fallback, so it probably wouldn't have helped you much.
I'm not sure what you are talking about. Our software has a stub resolver that is not the one in glibc. It directly issues DNS requests without going through /etc/nsswitch.conf.
It would have been fine if it was getaddrinfo (and it was done properly), because getaddrinfo gives back a socket address structure that can carry the scope ID of the IPv6 link-local address. In DNS there is no scope ID, so it will never work on Linux (it would work on Windows, but that's a different story).
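For the curious, here is a minimal sketch of that scope-ID detail in Python (whose `socket.getaddrinfo` wraps the C call; the loopback interface `lo` is used purely as an example interface that exists on effectively every Linux box):

```python
import socket

# An IPv6 link-local address is only meaningful together with an
# interface, which the sockets API carries in a separate scope-ID
# field of sockaddr_in6. A DNS AAAA record has nowhere to put that.
infos = socket.getaddrinfo("fe80::1%lo", None, socket.AF_INET6,
                           socket.SOCK_STREAM,
                           flags=socket.AI_NUMERICHOST)
addr, port, flowinfo, scope_id = infos[0][4]
print(scope_id)  # the kernel's interface index for "lo", i.e. non-zero
```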
If you don't like those additional name resolution methods, then turn them off. Resolved gives you full control over that, usually on a per-interface basis.
If you don't like that systemd is broken, then you can turn it off. Yes, that's why people are avoiding systemd. Not so much that the software has bugs, but the attitude of the community.
It's not broken - it's a tradeoff. systemd-resolved is an optional component of systemd. It's not a part of the core. If you don't like the choices it took, you can use another resolver - there are plenty.
I don't think many people are avoiding systemd now - but those who do tend to do it because it non-optionally replaces so much of the system. OP is pointing out that's not the case of systemd-resolved.
It's not a trade-off. Use of /etc/resolv.conf and port 53 is defined by historical use and by a large number of IETF RFCs.
When you violate those, it is broken.
That's why systemd has such a bad reputation. Systemd almost always breaks existing use in unexpected ways. And in the case of DNS, it is a clearly defined protocol, which systemd-resolved breaks. Which you claim is a 'tradeoff'.
When a project ships an optional component that is broken, it is still a broken component.
The sad thing about systemd (including systemd-resolved) is that it is default on Linux distributions. So if you write software then you are forced to deal with it, because quite a few users will have it without being aware of the issues.
Yes, violating historical precedent is part of the tradeoff - I see no contradiction. Are you able to identify the positive benefits offered by this approach? If not, we're not really "engineering" so to speak. Just picking favorites.
> The sad thing about systemd (including systemd-resolved) is that it is default on Linux distributions. So if you write software then you are forced to deal with it, because quite a few users will have it without being aware of the issues.
I'm well aware - my day job is writing networking software.
That's the main problem with systemd: replacing services that don't need replacing and doing a bad job of it. Its DNS resolver is particularly infamous for its problems.
Sure, those authors chose that license because they did not particularly care about license politics and picked the most common one in the Rust ecosystem, which is MIT/Apache-2.0.
If folks want more Rust projects under licenses they prefer, they should start those projects.
> If folks want more Rust projects under licenses they prefer, they should start those projects.
100% true, but it also hides a powerful fact: our choices aren't limited to doing it ourselves. Listening to others and discussing how to do things as a group is the essence of a community seeking long-term stability and fairness. It's how we got to the special place we are now.
Not everyone can or should start their own open source project. Maybe they're already maintaining another one. Maybe they don't know how to code. The viewpoint of others/users/customers is valid and should not only be listened to but asked for.
I agree that throwing away battle-tested code is wasteful and often not required. Most people are not of the mindset of just throwing things away, but there is a drive to make things better. There are some absolute monoliths, such as the Linux kernel, that will likely never break free of their C shackles, and that's completely okay and acceptable to me.
It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well-established that the first developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.
The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.
Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.
Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.
Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?
And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or Typescript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".
Lumen Field in Seattle just installed some Amazon Just Walk Out vendors this year. I'm happy to report you don't need to be logged into Amazon or have an app. I double clicked my phone to swipe my Apple Pay before I walked in, grabbed a beer and walked out.
The big issue I have with this experience is that you don't get a clear charge price before you leave. So you have to check a page either some minutes or hours later and hope that the total is correct. Like the article said, I don't love the idea of being charged for 3 overpriced bottles of water when I only took two. I'd rather just settle my transactions in the moment than try to remember what my total was and dispute things later from memory on the occasional times it's wrong.
> you don't get a clear charge price before you leave. So you have to check a page either some minutes or hours later and hope that the total is correct
Oh, I’m very much sure this is a feature. Because, you see, only some percentage of people will actually look at the receipt. Some fraction of them will notice the error. Some fraction of those people will actually be motivated to spend their time on the phone clawing back an extra $8 water. The complement of that small percentage is a lucrative chance to sell the same overpriced water more than once.
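The incentive is easy to sketch with made-up numbers (every figure below is hypothetical, not data about Amazon):

```python
# Back-of-envelope: an erroneous charge is only refunded if the customer
# checks the receipt, notices the error, AND bothers to dispute it.
overcharge = 8.00    # hypothetical extra bottle of water
p_check   = 0.30     # fraction who look at the receipt at all
p_notice  = 0.50     # of those, fraction who spot the error
p_dispute = 0.40     # of those, fraction who actually claw it back
p_refunded = p_check * p_notice * p_dispute   # = 0.06
kept = overcharge * (1 - p_refunded)
print(f"${kept:.2f} kept per erroneous $8.00 charge")  # $7.52
```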
Amazon had used roughly 1,000 humans in India, according to some news reports, to help monitor accurate checkouts. The company told CNN it’s “reducing the number of human reviews” while developing the “Just Walk Out” technology. Amazon said besides data associates’ main role in working on the underlying technology, they also “validate a small minority” of shopping visits.
At the very least, this is how it should be done. Having to download and install an app, then log in, then connect payment info, etc. sounds like such a pain I wouldn't even bother.
attention is finite. land is finite. resources are finite. access to qualified doctors is finite. access to food is finite (something we'll realize at the next great famine). access to water is finite. your time living on earth is finite (and shorter the less money you have).
we operate at a scale where that matters nowadays.
I saw a comment in another thread that the AMA recognizes the problem of a deficit of new MDs. According to the comment, Congress provides funding for MD residencies, and that is the real bottleneck.
This. You usually get one and only one chance with new people. People hate rejection and they're not going to keep asking.
So a pro tip: if you're starting a new job or something and want to integrate into the social circles, be prepared to drop everything for those first few invitations. The first one is basically mandatory.
I am sure Carmack himself encourages debates and discussions. Lionizing one person can't be expected of every employee (unless that person is also the founder or the company is tiny).
I was one month into my first full-time job when I (unaware of his rank) challenged the CTO in a technical discussion, in a public email exchange. Regardless of the outcome, I was treated like an equal. That one short exchange has influenced not only the rest of my career, but my entire worldview.
I mean, to some extent, sure. But you also need to respect expertise and experience. So much of what we do is subjective, and neither side is going to have hard data to support their arguments.
If it comes down to someone saying "I've been doing this for 30 years, I've shipped something very similar 5 times, and we ran into a problem with x each time", then unless you have similar counter-experience, you should probably just listen.
What happens in tech is you get a very specific kind of junior who wants to have HN comment arguments at work constantly and needs you to prove every single thing to them. I don’t know man it’s a style guide. There’s not going to be hard quantitative evidence to support why we said you shouldn’t reach for macros first.
Ugh. Can we as an industry stop blowing people up like this? It’s a clear sign that the community is filled with people with very little experience.
I remember this guy wanted $20 million to build AGI a year ago (did he get that money?), and people here thought he would go into isolation for a few weeks and come out with AGI because he made some games like that. It’s just embarrassing as a community.
Carmack's best work was between Keen and Quake, and it was mostly optimizations that pushed the limit of what PC graphics could do. He's always been too in-the-weeds to have a C-level title.
He is just a guy who can write game code well and has good PR skills online. I wouldn’t give him a cent if he promised anything in the AI field, no matter how much a bunch of online people gas him up.
He's a guy that knows a lot of math and how to turn that math into code. I don't know if he'd be able to come up with some brand new paradigm for AI but I'd want him on my team and I'd listen to what he has to say.
AI math is not game code math. There are plenty of actual experts in AI who know “how to turn math into code” with years of experience. I would not want this guy, his ego, his lack of social skills, his online fanbase, and his lack of experience in AI to be anywhere near my AI team.
I guess the general stuff is movies, Netflix shows, music, your last short weekend trip, and pretty much everyone has their own personal non work thing, usually attached to a club or group (hiking, photography, whatever).
I guess in that last category sports are commonplace, but it’s more “I’m training for a marathon next month” or “you should come bouldering sometime” rather than following professional sports on tv.
This sounds like it’s particular to your friend group rather than some coarse regional geography. If you toss a rock in Western Europe, you’ve got a better chance of hitting a football fan than someone who wants to go bouldering or train for a marathon.
>If you toss a rock in Western Europe, you’ve got a better chance of hitting a football fan than someone who wants to go bouldering or train for a marathon.
Yes and no. If you HAVE to pick a specific hobby, football gives you better odds than the others; but it will still only work in a minority of cases, and assuming carries an implication.
A comparison I could make is starting a conversation in the US with "did you watch Fox News yesterday?". Out of all channels it's the most watched one, but there's still a good chance you're asking a non-viewer, and then you get hit by the negative connotations.
Personal hobbies are a much better topic for various reasons (you don't assume, people will naturally be excited to discuss their own, etc.).
Yes, Henry Ford had Nazi sympathies. But VW was literally founded by the Nazis:
> "Volkswagen was established in 1937 by the German Labour Front (German: Deutsche Arbeitsfront) as part of the Strength Through Joy (German: Kraft durch Freude) program in Berlin" [0]
> "The German Labour Front (German: Deutsche Arbeitsfront, pronounced [ˌdɔʏtʃə ˈʔaʁbaɪtsfʁɔnt]; DAF) was the national labour organization of the Nazi Party, which replaced the various independent trade unions in Germany during the process of Gleichschaltung or Nazification." [1]