I read it this year and invested about 8 weeks getting through it. I found the story disjointed, repetitive, and very hard to follow. It was only after finishing it that I discovered I had read the abridged version, which cuts out a number of chapters, leaving multiple characters without conclusions. No wonder it was difficult to follow.
The free market can sort something like this out, but it requires some things to work. There need to be competitors offering similar products, people need to have the ability to switch to using those competitors, and they need to be able to get information about the strengths and weaknesses of the different offerings (so they can know their current vendor has a problem and that another vendor doesn't have that problem). The free market isn't magic, but neither are business regulations. Both have failure modes you have to guard against.
This is a non sequitur. I know how to self-host my infra, but I've been using cloud services for the last 15 years because it means I don't have to deal with self-hosting it. It runs completely by itself (mostly managed services, including k8s), and the only time I need to touch it is when I want to change something.
BTW you can of course self-host k8s, or dokku, or whatnot, and have as easy a deployment story as with the cloud. (But not necessarily as easy a maintenance story for the whole thing.)
For a tinkerer who's focused on the infra, then sure, hosting your own can make sense. But for anyone who's focused on literally anything else, it doesn't make any sense.
I have found Claude Code to be a great help. Yes, I can and have tinkered a lot over the decades, but I'm perfectly happy letting Claude drive the system administration and advise on best practices, certainly for prototype configurations. I can install CC on all my VPSes and local machines. NixOS sounds great, but the learning curve is not fun; I installed the CC package from the NixOS unstable channel, so I don't have to learn the funky NixOS packaging language. I do have to intervene sometimes as the commands go by, since I know how to drive, so maybe it's not a solution for true newbies. I can spend a few hours learning how to click around in one of the cloud consoles, or I can let CC install the command-line interfaces and do it for me. The $20/mo plan is plenty for system administration, and if I pick the Haiku model, CC runs twice as fast on trivial stuff like this.
Let's take an example: a managed database (e.g. Postgres or MySQL) vs. a self-hosted one. If you need reasonable uptime, you need at least one read replica. But replication breaks sometimes, or something goes wrong on the primary, particularly over a period of years.
Are you really going to trust Claude Code to recover in that situation? Do you think it will? I've had DB primaries fail on managed DBs like AWS RDS and Google Cloud SQL, and recovery is generally automatic within minutes. You don't have to lift a finger.
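To make concrete what "recovery is automatic" hides, here is a toy Python sketch of the kind of failover decision a managed service makes for you behind the scenes. The thresholds and metric names are invented for illustration; real services use far more signals than this.

```python
import time

# Illustrative thresholds -- real managed services tune these internally.
HEARTBEAT_TIMEOUT_S = 30   # how long the primary can be silent before we act
MAX_REPLICA_LAG_S = 10     # how stale a replica we are willing to promote

def should_promote_replica(last_primary_heartbeat, replica_lag_s, now=None):
    """Decide whether to promote the read replica to primary.

    Promote only if the primary looks dead AND the replica is close
    enough to current that promotion won't lose much data.
    """
    now = time.time() if now is None else now
    primary_dead = (now - last_primary_heartbeat) > HEARTBEAT_TIMEOUT_S
    replica_fresh = replica_lag_s <= MAX_REPLICA_LAG_S
    return primary_dead and replica_fresh

# Primary silent for 60s, replica only 2s behind: promote.
print(should_promote_replica(last_primary_heartbeat=1000, replica_lag_s=2, now=1060))  # True
```

The hard part isn't this decision logic; it's everything around it: fencing the old primary so you don't get a split brain, repointing clients, and rebuilding a new replica afterwards. That's the work the managed service absorbs.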
Same goes for something like a managed k8s cluster, like EKS or GKE. There's a big difference between using a fully-managed service and trying to replicate a fully managed system on your own with the help of an LLM.
Of course it does boil down to what you need. But if you need reliability and don't want to have to deal with admin, managed services can make life much simpler. There's a whole class of problems I simply never have to think about.
Cloud is not great for GPU workloads. I run a nightly workload that takes 6-8 hours and requires an Nvidia GPU, along with high RAM and CPU requirements. It can't be interrupted. It produces 100 GB of output, and I store 6 nightly versions of that. That's easily $600+ a month in AWS just for that one task. By self-hosting it I have access to the GPU all the time for a relatively low fixed up-front cost, and I can also use the HW for other things (I do). That said, these are all backend / development type resources; self-hosting customer-facing or critical things is a different prospect, and I do use cloud for those types of workloads. RDS + EKS for a couple hundred a month is an amazing deal for what is essentially zero-maintenance application hosting. My point is that "literally anything else" is extreme; as always, it's the right tool for the job.
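The break-even arithmetic behind this comparison can be sketched in a few lines. All the prices below are assumptions for illustration, not actual AWS quotes; plug in your own numbers.

```python
# Rough break-even sketch: renting a cloud GPU nightly vs. buying hardware.
# Every number here is an illustrative assumption, not real pricing.
cloud_gpu_per_hour = 1.50      # assumed on-demand GPU instance rate ($/h)
hours_per_night = 7            # midpoint of the 6-8 hour nightly run
nights_per_month = 30
storage_gb = 100 * 6           # six retained 100 GB outputs
storage_per_gb_month = 0.08    # assumed block-storage rate ($/GB-month)

cloud_monthly = (cloud_gpu_per_hour * hours_per_night * nights_per_month
                 + storage_gb * storage_per_gb_month)

hardware_cost = 2500           # assumed one-off cost of a GPU workstation

months_to_break_even = hardware_cost / cloud_monthly
print(f"cloud: ${cloud_monthly:.0f}/month, "
      f"break-even after {months_to_break_even:.1f} months")
```

With these assumed numbers the hardware pays for itself in well under a year, and that's before counting the "I can use the GPU for other things" benefit, which the cloud rental doesn't give you.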
It doesn't make any sense to you that I would like to avoid a potential $60K bill because of a configuration error? If you're not working at a FAANG, your employer likely cares too. Especially if it's your own business, you would care. You really can't think of _one_ case where self-hosting makes any sense?
> It doesn't make any sense to you that I would like to avoid a potential $60K bill because of a configuration error?
This is such an imaginary problem. The examples like this you hear about are inevitably the outliers who didn't pay any attention to this issue until they were forced to.
For most services, it's incredibly easy to constrain your costs anyway. You do have to pay attention to the pricing model of the services you use, though: if a DDoS is going to generate a big cost for you, you probably made a bad choice somewhere.
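To see why the pricing model matters here, a back-of-the-envelope sketch helps. The per-GB rate below is an assumed example figure, but the shape of the problem is general: under metered egress, an attacker controls your bill.

```python
# Why per-GB egress pricing is dangerous under a DDoS:
# the attacker, not you, decides how much you pay.
egress_per_gb = 0.09                      # assumed metered egress rate ($/GB)
attack_gbps = 10                          # sustained attack bandwidth (gigabits/s)
seconds = 24 * 3600                       # one day of attack

gb_served = attack_gbps / 8 * seconds     # Gbps -> GB/s, times duration
bill = gb_served * egress_per_gb
print(f"{gb_served:,.0f} GB in a day -> ${bill:,.0f}")
```

On a flat-rate or bandwidth-capped plan the same attack degrades your service instead of generating a surprise invoice, which is usually the failure mode you'd rather have.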
> You really can't think of _one_ case where self-hosting makes any sense?
Only if it's something you're interested in doing, or if you're so big you can hire a team to deal with that. Otherwise, why would you waste time on it?
Thinking about "constraining cost" is the last thing I want to do. I pay a fixed $200 a month for a dedicated server and spend my time solving problems with code. The hardware I rent is probably overkill for my business and would be more than enough for a ton of businesses' cloud needs. If you're paying per GB of traffic, or disk space, or RAM, you're getting scammed. Hyperscalers are not the right solution for most people. Developers are scared of handling servers, which is why you're paying that premium for a hyperscaler solution. I SSH into my server and start/stop services at will, configure it any way I want, copy around anything I want, serve TBs a week, and my bill doesn't change. You would appreciate that freedom if you had the will to learn something you didn't know before. Trust me, it's easier than ever with AI!
> For a tinkerer who's focused on the infra, then sure, hosting your own can make sense.
... or for a big company. I've worked at companies with thousands of developers, and it's all been 'self-hosted'. In DCs, so not rinky-dink, but yes, and there are a lot of advantages to doing it this way. If you set it up right, it can be much easier for developers to use than AWS.
As someone who does tape recovery on very very old tape I largely concur with this with a couple of caveats.
1. Do not encrypt your tapes if you want the data back in 30/50 years. We have had so many companies lose encryption keys and turn their tapes into paperweights because the company they bought out 17 years ago had poor key management.
2. The typical failure case on tape is physical damage, not bit errors. This can come from blunt force trauma (e.g. dropping or, sometimes, crushing) or from poor storage (e.g. mould/mildew).
3. Not all tape formats are created equal. I have seen far higher failure rates on tape formats that are repeatedly accessed, updated, and ejected than on the old-style write-once, read-none pattern.
I suspect they do a Radon transform of the paths to determine the infrared transmissibility value, similar to how CT scans are constructed from thousands of micro X-rays.
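For the curious, a discrete Radon transform can be sketched in a few lines of numpy: rotate the image to each projection angle and sum along one axis to approximate the line integrals. This uses crude nearest-neighbour sampling on a square image, so it's purely illustrative, not reconstruction-grade.

```python
import numpy as np

def radon(img, angles_deg):
    """Toy discrete Radon transform of a square image.

    For each angle, sample the image on a rotated grid and sum
    along one axis -- each sum approximates a line integral,
    i.e. one row of the sinogram a CT scanner measures.
    """
    n = img.shape[0]              # assumes a square n x n image
    c = (n - 1) / 2               # rotate about the image centre
    ys, xs = np.mgrid[0:n, 0:n]
    sino = []
    for a in np.deg2rad(angles_deg):
        # rotated sampling coordinates (nearest-neighbour lookup)
        xr = np.cos(a) * (xs - c) - np.sin(a) * (ys - c) + c
        yr = np.sin(a) * (xs - c) + np.cos(a) * (ys - c) + c
        xi = np.clip(np.round(xr).astype(int), 0, n - 1)
        yi = np.clip(np.round(yr).astype(int), 0, n - 1)
        sino.append(img[yi, xi].sum(axis=0))
    return np.array(sino)         # shape: (num_angles, n)
```

At angle 0 this just sums the columns; CT reconstruction then inverts many such projections (filtered back-projection) to recover the 2D density map.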
Not from North America. But I disagree that the exceptionalism started post WWII.
How do you explain the country's ability to perform civil engineering feats prior to WWII? The Erie Canal, the Transcontinental Railroad, the Panama Canal, the Brooklyn Bridge, the Empire State Building, and the Golden Gate Bridge spring to mind as feats of engineering that few other countries (if any) could rival. There are obvious examples post-WWII (the Manhattan Project, the Apollo program, the Interstate Highway System), but for all of the USA's pitfalls, it has an incredible history of civil engineering projects prior to WWII.
The US shifted its focus from domestic to international politics after WWII. It was brought in as an arbiter for world peace and, in a lot of ways, stepped up to the task. Military expansion and spending went through the roof, and the Cold War and Vietnam didn't help build public trust in government to do big things at home. Behind the scenes, though, politicians could work with other nations to organize the reality that we all live in today in the West. Later, politicians began organising free trade, and technology became the next frontier. Why spend hundreds of millions on a bridge when I can send an email instead of a letter? The USA really is a "marvel" in that most of her problems were caused and exacerbated by success, and by enough competent people in power to keep things moving.
It's hard to discuss the United States without mentioning Trump, who believes that undermining the past 100 years of neoliberalism will bring America back to her "glory days", while completely ignoring the reality on the ground that led from where the country was then to where it is today.
So maybe there will be more public works projects in America's future, but I fear they will be focused more on appeasing the dear leader than on meaningfully improving the lives of the average American citizen. Until someone turns on the lights and shuts off the music, America will continue to spiral and cry about "unfairness" while her created reality crumbles for lack of the maintenance and care about subtle realities on the ground that were once central to her rise in the first place.
Last year my company read in excess of 20,000 tapes from just about every manufacturer and software vendor. For modern, LTO/3592/T10000 era tapes the failure rate we see is around 0.3%.
Most of these failures are due to:
1. Cartridges being dropped or otherwise physically deformed such that they no longer fit into the drives.
2. Cartridges getting stuck in drives, requiring destructive extraction.
3. Data never having been written correctly in the first place.
The only exception to this rule that we have seen is tapes written with LTFS. These tapes have a 20-fold higher incidence of failure, we believe because reading data back as if the tape were an HDD causes excessive wear.
Anyone claiming 50% failure rates on tapes has no idea what they are talking about, is reading back tapes from the 1970s/80s, or has a vested interest in getting people away from tape storage.
They're not saying the failure rate of tapes is 50%. They're saying if you survey attempts to do data restores from tape then 50% of the time not all the requested data is found.
I can't claim the same volumes you can, but I did handle tape backups and recovery for a mid-sized business for a few years. We only had one tape failure in my tenure, but we had plenty of failed recoveries. We had issues like the user not knowing the name and location of the missing file in enough detail to find it, or the user having changed the file six times in one day and needing version 3 when the backup system isn't that granular.
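The granularity problem is easy to state precisely. A toy sketch (with hypothetical timestamps) shows why intra-day versions are simply unrecoverable under a nightly snapshot schedule, no matter how healthy the tapes are:

```python
# Toy model: each snapshot preserves only the newest version of a file
# written before it. All timestamps are hypothetical hours-of-day.
def recoverable_versions(edit_times, snapshot_times):
    """Return the edit timestamps that a snapshot schedule actually preserves."""
    saved = set()
    for snap in snapshot_times:
        before = [t for t in edit_times if t <= snap]
        if before:
            saved.add(max(before))   # only the newest pre-snapshot edit survives
    return sorted(saved)

# Six edits in one day, one nightly snapshot at 23:00:
edits = [9, 10, 11, 14, 16, 17]
print(recoverable_versions(edits, snapshot_times=[23]))  # [17]
```

Every intermediate version is gone by design; if the user needs the 11:00 version, no amount of successful tape reads will produce it. That's a "failed recovery" with zero tape failures involved.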
Those are just the issues after the system was set up and working well. Plenty of people set a backup system running, never check the output, and are disappointed years later to learn the initial config was wrong.
Long story short: a 50% failure rate for tapes is ludicrous, but a 50% failure rate for recovery efforts is not.