I’ve been using NFS in various environments since my first introduction to it in my university’s Solaris and Linux labs. I’ve run it at home, on and off, since 2005.
I’ve recently started using it again after consistent issues with SMB on Apple devices, and the deprecation of AFP. My FreeBSD server, running on a Raspberry Pi, makes terabytes of content available to the web via an NFS connection to a Synology NAS.
For my use case, with a small number of users, the fact that NFS is host-based rather than user-based means I can set it up once on each device, and all users of that host can access the shares. And I’ve generally found it to be more consistently performant on Apple hardware than Apple’s own SMB implementation.
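That host-based model amounts to a few lines of server-side configuration. A minimal sketch of what this might look like in `/etc/exports` on a FreeBSD server (the dataset paths, network, and hostname here are made up for illustration):

```
# /etc/exports (FreeBSD syntax; illustrative paths and addresses)
# Export a media dataset read-only to every host on the home LAN,
# so any user on an allowed host can mount it:
/tank/media -ro -network 192.168.1.0 -mask 255.255.255.0
# Or export another dataset read-write to a single named host:
/tank/backups -maproot=root nas.local
```

Because access is granted per host, there is no per-user credential to manage on each client; once a machine is in the allowed network, everyone on it sees the share.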
I'm sure the architecture and scale of Netflix's operations are truly impressive, but stories like this make me further appreciate the elegantly simple scalability of analogue terrestrial TV, and, to a similar extent, digital terrestrial TV and satellite.
Australian Aboriginal people have very advanced social and cultural structures. Australia comprises several thousand different Aboriginal groups and territories. Acceptance into another community relied on endorsement from the elders of the community you were leaving. Family bonds and ties are extremely strong.
Hence, one explanation may be that participation was essentially mandatory to be considered part of the community at all, and to be recognised as an adult.
These are all problems that shouldn’t exist. You have succinctly described the problems with modern IT. Software doesn’t need to have an expiration date. It doesn’t decay or expire. But because of our endless need to change things, rather than just fix bugs, we end up with this precarious house of cards.
If, as an industry, we focussed on correctness and reliability over features, a lot of these problems would disappear.
But the hardware does expire. Computers aren't just magically "faster" than they were decades ago; they're altogether different under the hood. An immense number of abstractions have held up the image of stability, but the reality is that systems with hundreds of cores, deep and wide caches, massively parallel SSDs and NICs, etc. require specialised software to be used effectively, compared with their much simpler predecessors. Feature bloat is a major annoyance, and running the old software on new hardware can give the appearance of being much faster, until it locks everything up, or takes forever to download a file, or can't share access to a resource, or thinks it has run out of RAM, or chews up a whole CPU core doing nothing, etc.
One of the big strengths of the Web is Mozilla's commitment to "don't break the web".
But this is hitting its limits, because the scope of JavaScript is being expanded more and more (including stuff like filesystem APIs for almost arbitrary file access, as if we had not learned from Java Web Start that that's a rabbit hole that never stops yielding vulnerabilities), so to keep new features safe despite their much higher security needs, old features are neutered.
I lost a decentralized comment system to similar changes.
I agree there's some truth in what you say. I do think these upgrades are part of a path towards correctness and reliability (bug fixes, security vulnerabilities, etc).
This. Furthermore, this posture has percolated down to home computing environments (because it is all Windows or Linux), so even my home computer has to receive constant updates as if it were controlling a lunar lander.
In an application like Teams, the delay between striking a key on the keyboard and the corresponding glyph appearing on screen is comically bad - two orders of magnitude higher than performing the equivalent action on a computer from the early 1980s.
I too am nostalgic for a simpler web, and am increasingly of the view that the modern web was a severe wrong turn for software engineering and computing generally. I also run my own gopherd and httpd servers. Despite the severe shortcomings of Gopher as a protocol in 2023, I simply cannot understand much of the design rationale behind Gemini.
Markdown is not a well-defined standard and is arguably no easier to parse than basic HTML 1.0. HTML can easily be rendered as text in a style not dissimilar to raw markdown if required.
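To illustrate the point about basic HTML being easy to handle, here is a minimal sketch of rendering simple early-style HTML as plain text using only Python's standard library. The tag list and layout choices are mine, purely for illustration:

```python
from html.parser import HTMLParser

class TextRenderer(HTMLParser):
    """Render a small subset of basic HTML as plain text."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Block-level tags start a new line; list items get a bullet.
        if tag in ("p", "br", "h1", "h2", "li"):
            self.out.append("\n")
        if tag == "li":
            self.out.append("* ")

    def handle_data(self, data):
        self.out.append(data)

    def text(self):
        return "".join(self.out).strip()

page = "<h1>Hello</h1><p>Plain <b>HTML</b> renders fine as text.</p><ul><li>like this</li></ul>"
r = TextRenderer()
r.feed(page)
print(r.text())
# Hello
# Plain HTML renders fine as text.
# * like this
```

A handful of lines suffices for a readable text rendering, which is roughly the argument: the parsing burden of plain HTML is comparable to that of Markdown, without Markdown's ambiguities between dialects.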
The decision to enforce the use of TLS means that rolling your own client is less trivial than it would be were TLS optional. It also cuts out a big chunk of the hobbyist market, who are most likely to be interested in a small, lightweight protocol. Support for Gemini will therefore be limited to systems with maintained TLS libraries. At the end of the day, anyone sniffing your Gemini traffic is going to be able to see the host you’re accessing: whilst TLS will prevent them from knowing which specific page you’re reading, they will be aware of the set of pages you could be reading.
Instead, I am focussing my attention on building small, lightweight web pages that have completely optional CSS, are pure HTML without cookies or JavaScript, and work in browsers modern or ancient. There’s a plethora of great browsers that can access them, from Lynx and Netscape running on a Solaris 8 machine to Chrome on my work PC. Servers are plentiful and well-tested.
And a good chunk of my website is text (rendered in Groff) to boot.
It’s 95% of the Gemini experience with 5% of the effort and 1000% more reach.
I find it curious that Musk has simultaneously demanded that all employees cease working from home and return to the office whilst also locking them out of the office.
Yes, but only because of the way Musk went about changing the WFH Policy. Musk is quoted as saying "If you can physically make it to an office and you don't show up, resignation accepted". He was placing the onus onto employees to make it to an office at all costs, else they would be out of a job. To then lock employees out after such histrionic demands seems ironic.