This is a funny thread to me because my frustration is at the intersection of your comments: I keep wanting sqlite for writes (and lookups) and duckdb for reads. Are you aware of anything that works like this?
Aha! That makes so much sense. Thank you for this.
Edit: Ah, right, the downside is that this is not going to have good olap query performance when interacting directly with the sqlite tables. So still necessary to copy out to duckdb tables (probably in batches) if this matters. Still seems very useful to me though.
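For concreteness, here's roughly the shape of the copy-out step I have in mind, as an untested sketch using DuckDB's sqlite extension from Python (the events table and ts column are just placeholder names):

    import duckdb

    con = duckdb.connect("analytics.duckdb")
    con.execute("INSTALL sqlite")
    con.execute("LOAD sqlite")
    con.execute("ATTACH 'app.sqlite' AS oltp (TYPE sqlite)")

    # One-time: clone the hot SQLite table's schema into a columnar table.
    con.execute("CREATE TABLE events AS SELECT * FROM oltp.events LIMIT 0")

    def sync_batch(last_ts):
        # Pull only rows newer than the last synced timestamp into DuckDB.
        con.execute("INSERT INTO events SELECT * FROM oltp.events WHERE ts > ?", [last_ts])
        return con.execute("SELECT max(ts) FROM events").fetchone()[0]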
Analytics is done in "batches" (daily, weekly) anyways, right?
We know you can't get both row and column order at the same time, and that continuously maintaining both means duplication and ensuring you get the worst case from both worlds.
Local, row-wise writing is the way to go for write performance. Column-oriented reads are the way to do analytics at scale. It seems alright to have a sync process that does the order re-arrangement (maybe with extra precomputed statistics, and sharding to allow many workers if necessary) to let queries of now historical data run fast.
It's not just about row versus column. OLAP schemas are potentially denormalised as well, and sometimes pre-aggregated, e.g. rolled up by day or by customer.
If you really need to get performance you'll be building a star schema.
Not all olap-like queries are for daily reporting.
I agree that the basic architecture should be row order -> delay -> column order, but the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.
I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?
I mean, this isn't that different conceptually from the architecture of log-structured merge trees, which have this same kind of "duplication" but for good purpose. (Indeed, rocksdb has been the closest thing to what I want for this workload that I've found; I just think it would be neat if I could use sqlite+duckdb instead, accepting some tradeoffs.)
> the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.
I see. Can you come up with row/table watermarks? Say your column store is up to date with a certain watermark, so any query that requires freshness beyond that will need to snoop into the rows that haven't made it into the columnar store yet, up to the required query timestamp.
In the past I've dealt with a system that had read-optimised columnar data overlaid with fresh write-optimised data, and used timestamps to agree on the data that should be visible to queries. It continuously consolidated data into the read-optimised store instead of having the silly daily job you might have in the extremely slow-cadence reporting flow you mention.
You can write such a system, but in reality I've found it hard to justify building one for continuous updates when a 15-minute delay isn't the end of the world. It's doable if you want it, though.
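If it helps, a rough sketch of the watermark routing I mean, reusing the attach-SQLite-into-DuckDB names from the sketch upthread (all table/column names are illustrative):

    def count_events_since(con, since_ts, watermark):
        # Anything fresher than the watermark only exists in the SQLite rows.
        if since_ts >= watermark:
            return con.execute(
                "SELECT count(*) FROM oltp.events WHERE ts >= ?", [since_ts]
            ).fetchone()[0]
        # Otherwise combine columnar history with the not-yet-synced rows.
        return con.execute(
            """
            SELECT count(*) FROM (
                SELECT ts FROM events WHERE ts >= ? AND ts < ?
                UNION ALL
                SELECT ts FROM oltp.events WHERE ts >= ?
            ) AS fresh_union
            """,
            [since_ts, watermark, watermark],
        ).fetchone()[0]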
> I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?
I mean that if you want both views in a consistent world, then writes will slow to a crawl, as both the row-ordered and the column-ordered data need to be updated before the write lock is released.
Yes! We're definitely talking about the same thing here! Definitely not thinking of consistent writes to both views.
Now that you said this about watermarks, I realize that this is definitely the same idea as streaming systems like flink (which is where I'm familiar with watermarks from), but my use cases are smaller data and I'm looking for lower latency than distributed systems like that. I'm interested in delays that are on the order of double to triple digit milliseconds, rather than 15 minutes. (But also not microseconds.)
I definitely agree that it's difficult to justify building this, which is why I keep looking for a system that already exists :)
I think you could build an ETL-ish workflow where you use SQLite for OLTP and DuckDB for OLAP, but I suppose it's very workload-dependent; there are several tradeoffs here.
Right. This is what I want, but transparently to the client. It seems fairly straightforward, but I keep looking for an existing implementation of it and haven't found one yet.
It's not bad if you need something quick. I haven't had a large need for ANN in DuckDB since it's serving more analytical/exploratory needs, but it's definitely there if you need it.
From my perspective - do you even need a database?
SQLite is kind of the middle ground between a full-fat database and 'writing your own object storage'. To put it another way, it provides a 'regularised' object access API, rather than, say, a variant of types in a vector that you filter or map over.
As a backend database that's not multi-user, how many web connections doing writes can it realistically handle, assuming writes are small, say 100+ rows each?
After 2 years in production with a small (but write-heavy) web service... it's a mixed bag. It definitely does the job, but not having a DB server has drawbacks as well as benefits, the biggest being the lack of caching of the file/DB in RAM. As a result I have to do my own read caching, which is fine in Rust using the moka caching library, but it's still something you have to do yourself that would otherwise come for free with Postgres.
This of course also makes it impossible to share the cache between instances; doing so would require employing Redis/memcached, at which point it would be better to use Postgres.
It has been OK so far, but I will definitely have to migrate to Postgres at some point, sooner rather than later.
How would caching on the db layer help with your web service?
In my experience, caching makes most sense on the CDN layer. Which not only caches the DB requests but the result of the rendering and everything else. So most requests do not even hit your server. And those that do need fresh data anyhow.
As I said, my app is write heavy. So there are several separate processes that constantly write to the database, but of course, often, before writing, they need to read in order to decide what/where to write. Currently they need to have their own read cache in order to not clog the database.
The "web service" is only the user facing part which bears the least load. Read caching is useful there too as users look at statistics, so calculating them once every 5-10 minutes and caching them is needed, as that requires scanning the whole database.
A CDN is something I don't even have. It's not needed for the amount of users I have.
If I were using Postgres, these writer processes + the web service would share the same read cache for free (coming from Postgres itself). The difference wouldn't be huge if I migrated right now, but by now I already have the custom caching.
A couple thousand simultaneous connections should be fine, depending on total system load, whether you're running on spinning disks or on SSDs, your p50/p99 latency demands, and of course you'd need to enable the WAL pragma so that readers and writers don't block each other in the first place. Run an experiment to be sure about your specific situation.
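A minimal version of such an experiment in Python might look like this (WAL on, one transaction per ~100-row batch; treat the numbers as ballpark only):

    import sqlite3, time

    con = sqlite3.connect("bench.db")
    con.execute("PRAGMA journal_mode=WAL;")
    con.execute("PRAGMA synchronous=NORMAL;")
    con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT)")

    batches = 1000
    start = time.perf_counter()
    for _ in range(batches):            # each simulated request writes ~100 rows
        with con:                       # one transaction per batch
            con.executemany("INSERT INTO t (payload) VALUES (?)",
                            [("x" * 64,) for _ in range(100)])
    elapsed = time.perf_counter() - start
    print(f"{batches / elapsed:.0f} write transactions/sec")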
If your writes are fast, doing them serially does not cause anyone to wait.
How often does the typical user write to the DB? Often it is like once per day or so (for example on hacker news). Say the write takes 1/1000s. Then you can serve
1000 * 60 * 60 * 24 = 86 million users
And nobody has to wait longer than a second when they hit the "reply" button, as I do now ...
> If your writes are fast, doing them serially does not cause anyone to wait.
Why impose such a limitation on your system when you don't have to by using some other database actually designed for multi user systems (Postgres, MySQL, etc)?
That's basically how the web started. You can serve a ridiculous number of users from a single physical machine. It isn't until you get into the hundreds-of-millions-of-users ballpark that you need to actually create architecture. The "cloud" lets you rent a small part of a physical machine, so it feels like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of space. All for less than 2k per month (amortized). Yeah, you need an actual (small) team of people to manage that; but that will get you so far that it is utterly ridiculous.
Assuming you can accept 99% uptime (that's ~3 days a year of downtime): if you were on a single cloud in 2025, that's basically what you got last year anyway.
I agree...there is scale and then there is scale. And then there is scale like Facebook.
We need not assume internet/FB-level scale for typical biz apps, where one instance may support a few hundred users max, or even a few thousand. Over-engineering under such assumptions is likely cost-ineffective and may even increase the surface area of risk. $0.02
It goes much further than that.. a single moderately sized VPS web server can handle millions of hard-to-cache requests per day, all hitting the db.
Most will want to use a managed db, but for a real basic setup you can just run postgres or mysql on the same box. And running your own db on a separate VPS is not hard either.
That depends on the use case. HN is not a good example. I am referring to business applications where users submit data. Of course, in these cases we are looking at hundreds, not millions, of users. The answer is: good enough.
Turns out a lot when you have things like "last accessed" timestamps on your models.
Really depends on the app
I also don't think that calculation is valid. Your users aren't going to access the app uniformly over the course of a day. Invariably you'll have queuing delays at a significantly smaller user count (though maybe the delays are acceptable).
Pardon my ignorance, yet wasn't the prevailing thought a few years ago that you would never use SQLite in production? Has that school of thought changed?
SQLite as a database for web services has had a bit of a boom due to:
1. People gaining newfound appreciation of having the database on the same machine as the web server itself. The latency gains can be substantial and obviously there are some small cost savings too as you don't need a separate database server anymore. This does obviously limit you to a single web server, but single machines can have tons of cores and serve tens of thousands of requests per second, so that is not as limiting as you'd think.
2. Tools like litestream will continuously back up all writes to object storage, so that one web server having a hardware failure is not a problem as long as your SLA allows downtimes of a few minutes every few years. (And let's be real, most small companies for which this would be a good architecture don't have any SLA at all.)
3. SQLite has concurrent writes now, so it's gotten much more performant in situations with multiple users at the same time.
So for specific use cases it can be a nice setup because you don't feel the downsides (yet) but you do get better latency and simpler architecture. That said, there's a reason the standard became the standard, so unless you have a very specific reason to choose this I'd recommend the "normal" multitier architectures in like 99% of cases.
Just to clarify: unless I've missed something, this is only WAL mode allowing concurrent reads at the same time as writes; I don't think it can handle multiple concurrent writes at the same time?
As I understand it, there can be concurrent writes as long as they don't touch the same data (the same file system pages, to be exact). Also, the actual COMMIT part is still serialized and you need to begin your transactions with BEGIN CONCURRENT. If two transactions do conflict, the later one will be forced to ROLLBACK although you can still try again. It is up to the application to do this.
Also just a note: BEGIN CONCURRENT is not in mainline SQLite releases. You need to build your own from a branch. Not a huge deal but just something to note.
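For mainline SQLite (no BEGIN CONCURRENT), a quick sketch of what WAL does and doesn't give you: a reader proceeds while a write transaction is open, but a second writer has to wait. Something like:

    import sqlite3

    sqlite3.connect("wal.db").executescript(
        "PRAGMA journal_mode=WAL; CREATE TABLE IF NOT EXISTS t (x);")

    writer = sqlite3.connect("wal.db", isolation_level=None)
    writer.execute("BEGIN IMMEDIATE")        # take the write lock
    writer.execute("INSERT INTO t VALUES (1)")

    reader = sqlite3.connect("wal.db")
    # Readers are not blocked; they see the last committed snapshot.
    print(reader.execute("SELECT count(*) FROM t").fetchone())

    second = sqlite3.connect("wal.db", isolation_level=None, timeout=0.5)
    try:
        second.execute("BEGIN IMMEDIATE")    # a second writer waits, then errors out
    except sqlite3.OperationalError as err:
        print("second writer:", err)         # "database is locked"

    writer.execute("COMMIT")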
I’m a fan of SQLite but just want to point out there’s no reason you can’t have Postgres or some other rdbms on the same machine as the webserver too. It’s just another program running in the background bound to a port similar to the web server itself.
SQLite is likely the most widely used production database due to its widespread usage in desktop and mobile software, and SQLite databases being a Library of Congress "sustainable format".
"Production" can mean many different things to different people. It's very widely used as a backend strutured file format in Android and iOS/macOS (e.g. for appls like Notes, Photos). Is that "production"? It's not widely used and largely inappropriate for applications with many concurrent writes.
The SQLite docs have a good overview of appropriate and inappropriate uses: https://sqlite.org/whentouse.html
It's best to start with Section 2 "Situations Where A Client/Server RDBMS May Work Better"
The reason you heard that was probably because they were talking about a more specific circumstance. For example SQLite is often used as a database during development in Django projects but not usually in production (there are exceptions of course!). So you may have read when setting up Django, or a similar thing, that the SQLite option wasn't meant for production because usually you'd use a database like Postgres for that. Absolutely doesn't mean that SQLite isn't used in production, it's just used for different things.
Only for large scale multiple user applications. It’s more than reasonable as a data store in local applications or at smaller scales where having the application and data layer on the same machine are acceptable.
If you’re at a point where the application needs to talk over a network to your database then that’s a reasonable heuristic that you should use a different DB. I personally wouldn’t trust my data to NFS.
This, though I think other posters have pointed to a web app/site that’s backed by SQLite. It can be a perfectly reasonable approach, I think, as the application is the web server and it likely accesses SQLite on the same machine.
FWIW (and this is IMHO of course) DuckDB makes working with random JSON much nicer than SQLite, not least because I can extract JSON fields to dense columnar representations and do it in a deterministic, repeatable way.
The only thing I want out of DuckDB core at this point is support for overriding the columnar storage representation for certain structs. Right now, DuckDB decomposes structs into fields and stores each field in a column. I'd like to be able to say "no, please, pre-materialize this tuple subset and store this struct in an internal BLOB or something".
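For anyone curious what that JSON-to-columns step looks like in practice, a small example of the kind of thing I mean (duckdb Python package; the payload and field names are made up):

    import duckdb

    con = duckdb.connect()
    con.execute("CREATE TABLE raw_events (payload JSON)")
    con.execute("""INSERT INTO raw_events VALUES
        ('{"user_id": "u1", "amount": 9.5, "ts": "2025-01-01 12:00:00"}')""")

    # Materialize selected JSON fields as typed, dense columns.
    con.execute("""
        CREATE TABLE events AS
        SELECT
            payload->>'$.user_id'                AS user_id,
            CAST(payload->>'$.amount' AS DOUBLE) AS amount,
            CAST(payload->>'$.ts' AS TIMESTAMP)  AS ts
        FROM raw_events
    """)
    print(con.execute("SELECT * FROM events").fetchall())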
I would say SQLite when possible, PostgreSQL (incl. extensions) when necessary, DuckDB for local/hobbyist data analysis and BigQuery (often TB or PB range) for enterprise business intelligence.
It's the standard for mobile. That said, in server-side enterprise computing, I know no one who uses it. I'm sure there are applications, but in this domain you'd need a good justification for not following standard patterns.
I have used DuckDB on an application server because it computes aggregations lightning fast which saved this app from needing caching, background services and all the invalidation and failure modes that come with those two.
Among people who can actually code (in contrast to just stitch together services), I see it used all around.
For someone who openly describes his stack and revenue, look up Pieter Levels, how he serves hundreds of thousands of users and makes millions of dollars per year, using SQLite as the storage layer.
Do you have a specific use case you're curious about? It's the most widely deployed database software of all time. https://sqlite.org/mostdeployed.html
I don't have a use case for it. I've used it a tiny bit for mocking databases in memory, but because it's not fully Postgres, I've switched entirely to TestContainers.
There is much innovation, hacking, etc. in "regular software jobs". Many companies that get launched are about improving efficiency or solving problems that these "regular software jobs" face. Once a startup grows, the product may continue to be interesting and new, but the day to day for the engineers building it begins to resemble a "regular software job".
Agreed, although the issue is that it's damn difficult to find something real that is at the intersection of: (1) pleasing users, (2) making them pay, (3) actually being useful, (4) being possible to get off the ground without a multi-year investment in time or money, and (5) remaining profitable or even revenue-generating for at least say three years before competition or evolution gets to it. It's a lot easier to hack something nice when you don't have to sell it.
That was back then, when we were in our infancy as an industry and everything was about spitting out some cool graphics in less than 100 lines of JavaScript.
Right now (especially looking at the world economy) it's all about getting yourself a nice, stable placement.
I haven't stopped hacking away, but I need an income.
This is cool. I am currently using GitHub codespaces and I would love a version of it with nothing but a web based terminal. I don't need all the other windows they put around it. This might be it.
Trying my way around it now. Not sure what is going on:
me: apt install apache
the shell: exe.dev repl: command not found: "apt"
What is "exe.dev repl"? Am I not in a shell?
me: bash
the shell: exe.dev repl: command not found: "bash"
[exe.dev co-founder here] Hi there, I am not sure exactly where you are, but your VM is ubuntu derived and definitely starts with apt and bash. Perhaps try `ssh yourvm.exe.xyz`?
While at Tailscale you built sketch.dev, only to actually build this product? Love it. Ultimate yak shave.
Kind of how like Antithesis was the product inside foundationdb.
Hmm.. so the public channel is decentralized but the private channel is not.
There is actually a technical solution to that, then: use the public channel to send/receive private messages. Everyone could publish a public key. Then everyone could send private messages to anyone by encrypting them with the receiver's public key and sending them over the public channel.
Shall we try it? My public key:
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAKs9CbOAxSROEdm/+QGyDLdxITTq+YdbmIlOM0jemqKvLXinnBUDeDRSGXOoCnygXLFsm6R31szySqiVunasX/8CAwEAAQ==
-----END PUBLIC KEY-----
You can send me a private message by encrypting it here:
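Or, if you'd rather do it locally, a toy sketch of the scheme with the Python cryptography package (generating a throwaway keypair for illustration; in practice you'd wrap a symmetric key or just use an existing protocol):

    import base64
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Recipient: generate a keypair and publish the public half.
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pub_pem = priv.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

    # Sender: encrypt a short note to that public key and post the result publicly.
    pub = serialization.load_pem_public_key(pub_pem)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = pub.encrypt(b"hello over the public channel", oaep)
    print(base64.b64encode(ciphertext).decode())

    # Recipient: decrypt whatever shows up on the public channel.
    print(priv.decrypt(ciphertext, oaep))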
Although I enjoy the public key/private key idea, if you want to talk encrypted, one of the best ways is Signal, if you don't mind centralization.
But if you want decentralization, some options I can recommend are Matrix, SimpleX, Session, etc.
But to be honest, you raise a good point about how to talk decentralized on Bluesky.
Well, one idea I can think of right now is that someone could use https://keyoxide.org/, paste in their public key, connect both Bluesky and Matrix, and then have the Keyoxide profile as part of something public like a comment.
The problem is that this becomes tedious and adds more friction to the whole thing, but it's definitely possible.
If you choose to use a centralized frontend to access Bluesky (everyone does this) and that frontend has to follow laws because it's run by a corporation... that's what you get.
Since even after 2 hours nobody is discussing the actual font, let me tell you what comes to my mind when I read anything about Google and design:
They got phone design right.
I just can't get my head around it that even Apple, which is supposed to be THE design company, is making phones that can't lay on a table without wobbling like a barstool on a crooked floor. It just feels so broken to me. So detrimental to my sense of aesthetics.
Google phones tackled it with an elegant solution. Thanks for that. I wouldn't know what phone to use if Pixels didn't exist.
Apple probably has swathes of real-world usability data showing that virtually no-one uses their phone for prolonged periods of time while it's laying down on a hard flat surface.
You may be right about the aesthetics (and Lord Jobs may well have agreed with you) but they may have made the tradeoff consciously.
One can say "they probably had data to support it" about virtually any decision. It is not really a defense from critique. It may have been deliberate, but it still feels wrong and bad.
I don't think there's a single modern smartphone that I like. My latest favourite smartphone was iPhone 4S. No camera bump. Perfect size, fits well in my hand, operable with one thumb. Perfect display size, enough to present all information I need. Perfectly usable without ugly case.
Why would you buy an ugly case and not a clean, well-designed, functional one?
If you liked the original iPhone design, getting a rounded, hand-fitting case would be the way to go, IMHO (on the size difference, there's no way out at this point).
My previous phone was the iPhone 8. It's truly a world of difference in usability compared to the iPhone 13 I'm using now. I have big hands, so I can use the latter one-handed, but a lot of people I've seen don't.
My favorite phone of all time (based on hand feel and appearance) was the OnePlus One. It had its software problems, but every phone I've held since then has been a disappointment in the hand.
I've just got a new Samsung and it's wobbling too. I hate this. Why can't they at least put the cameras in the middle? Or maybe horizontally centred? Or they could just put another bumper on the other side to make it symmetrical. I'm looking for a cover to balance this out, but no luck so far.
> I just can't get my head around it that even Apple, which is supposed to be THE design company, is making phones that can't lay on a table without wobbling like a barstool on a crooked floor. It just feels so broken to me. So detrimental to my sense of aesthetics.
Of all the controversial design choices, I think Apple got this one right.
I do not care if my phone wobbles when flat on the desk. I don’t use my phone like that. It’s in my hand if I’m using it.
I use my phone camera sparingly, but when I pull it out I want it to work very well. And it does. If it takes a little bump out to fit better optics then I don’t care in the slightest.
> Google phones tackled it with an elegant solution. Thanks for that. I wouldn't know what phone to use if Pixels didn't exist.
Making your entire phone choice revolve around the shape of the camera island is the oddest top priority I’ve heard yet, but I’m glad you found one that works for you.
Wasn't meant to be rude. More confused, because it's really a unique criterion to pick a phone by.
I think HN mostly doesn't appreciate any defense of Apple or other large companies. I really should stay away from any threads that turn into collections of complaints about big companies because the audience they draw is only interested in negative comments about the companies.
Some of these companies are now designing the phone on the assumption you're going to case it. No other reason to make a Pixel camera bump w/ scratch-vulnerable screen stick out so far.
If a phone needs a case, then phones should be sold with a case included. I hate cases and have never put one on my phone--and have never had a phone break or crack.
I'd rather have a wobbly phone (how often do you push on your screen when it is flat on the table?) and a proper OS than a proper phone and a wobbly OS.
Gesture navigation on Android was introduced half a decade ago and it is still broken. In most apps my edge swipe to pull out a drawer or a swipe on the right side to 'forward' are still detected as back button swipes. Editing details at the edge of a photo often gets detected as a back button swipe. Ridiculous.
2) Android followed UX/UI 101 about where to put frequently used buttons: where you can reach them with your thumb. Basic design, right?
Apple iOS: the close/back button is usually on the top left corner, unreachable by right-handed users, who make up about 90% of people, a number that's roughly the same across countries and cultures. That's only one example, but the bag it comes from is deep.
You should take a few steps back before displaying publicly polarizing opinions and maybe nuance your words a bit.
1) that’s like saying good UX is entirely optional - sure it is but users will still complain
2) disregarding the equally blatant discrimination against left-handed users: I switch a couple times per week between Android and iOS devices for various reasons, and the Android UX is so janky and unintuitive it hurts. It might just be my particular device, and it's much better in other cases.
This might be extremely polarising but I agree with GP.
It is the default on all modern Android flavors and the overwhelming majority (>90%) of users sticks with defaults. It is likely Google is going to deprecate the navigation bar within a couple of Android versions.
> Apple iOS: the close/back button is usually on the top left corner, unreachable
You clearly never used iOS, because you just backswipe. You rarely if ever touch back buttons.
Not that I disagree, although you're fighting the wrong fight. The big problem is controls being on the top instead of the bottom. Neither Apple nor Google has attempted to fix this; only Samsung partially has, with OneUI. And they can't force developers to adhere to "content top, controls bottom". Ironically enough, Apple had this fixed until iOS... 12? From 7-12, the control center was at the bottom. All they had to do was move the notification centre there and figure out a way to make it compatible with a gesture bar.
> right-handed users, who make up about 90% of people
People tend to one-hand their phone with their non-dominant hand to keep their dominant hand usable.
> You should take a few steps back before displaying publicly polarizing opinions and maybe nuance your words a bit.
I use and develop for both platforms. You just sound like an angry, unknowledgeable fanboy.
Perhaps take heed of your own advice :+)
Edit: if you want an example of something that Android does way better: notification management via notification categories. I get to disable stupid promotional or "typing.." notification categories from an app, whilst maintaining functional ones. iOS should take a page from Android there.
The wobbling is the worst part of the hardware on my iPhone mini, annoys me probably more than fifty times per week.
Because I often unlock it when it is on the desk I also miss Touch ID a lot, because with Face ID I also have to lean forward every time for it to recognise me.
Too bad Pixel support for factory-broken screens sucks so my "well designed" Pixel has green vertical line in the middle of the screen. So detrimental to my sense of aesthetics.
> vulnerable to remote code execution from systems on the same network segment
Isn't almost every laptop these days autoconnecting to known network names like "Starbucks" etc, because the user used it once in the past?
That would mean that every FreeBSD laptop in proximity of an attacker is vulnerable, right? Since the attacker could just create a hotspot with the SSID "Starbucks" on their laptop and the victim's laptop will connect to it automatically.
As far as I know, access points only identify via their SSID. Which is a string like "Starbucks". So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.
There is nothing wrong with using public networks. It's not 2010 anymore. Your operating system is expected to be fully secure[1] even when malicious actors are present in your local network.
[1] except availability, we still can't get it right in setups used by regular people.
And when you connect to a non-public WiFi for the first time - how do you make sure it is the WiFi you think it is and not some dude who spun up a hotspot on their laptop?
Why does it matter? I mean, I guess it did in this case, but that was treated as a top-priority bug and quickly fixed.
I guess my point is that the way the internet works, your traffic goes through a number of unknown and possibly hostile actors on its way to the final destination. Having a hostile actor presenting a spoofed WiFi access point should not affect your security stance in any way. Either the connection works and you have the access you wanted, or it does not. If you used secure protocols they are just as secure, and if you used insecure protocols they are just as insecure.
Now, having said that, I will contradict myself: we are used to having our first hop be a high-security trusted domain and tend to be a little sloppy there even when it is not. But still, in general it does not matter. A secure connection is still a secure connection.
Hmm. Are you sure your stack doesn't accept these discovery packets until after you've successfully authenticated (which is what those chains are for)?
Take eduroam, which is presumably the world's largest federated WiFi network. A random 20-year-old studying Geology at uni in Sydney, Australia will have eduroam configured on their devices, because duh, that's how WiFi works. But that also works in Cambridge, England, or Paris, France, or New York, USA, or basically anywhere their peers would be, because common sense: why not have a single network?
But this means their device actively tries to connect to anything named "eduroam". Yes, it is expecting to eventually connect to Sydney to authenticate, but meanwhile how sure are you that it ignores everything it gets from the network, even these low-level discovery packets?
I may be missing something, but it is almost guaranteed that you would not receive an RA in this scenario. eduroam uses WPA2/WPA3 Enterprise, so my understanding is that until you authenticate to the network you do not have L2 network access.
Additionally, eduroam uses certificate auth baked into the provisioning profile to ensure you are authenticating against your organization's IdP. (There are some interesting caveats to this statement that they discuss in https://datatracker.ietf.org/doc/html/rfc7593#section-7.1.1 and the mitigation is the usage of Private CAs for cert signing.)
As someone using Linux to build web applications, I wonder what about the Apple ecosystem could make it worth having such a sword of Damocles hanging over me my whole life.
Am I missing something? My current perspective is that not only am I free of all the hassle that comes with building for a closed ecosystem, such as managing a developer account and using proprietary tools, I also avoid its much harder distribution. I can put up a website with no wait time and everybody on planet Earth can use it right away. So much nicer than having to go through all the hoops and limitations of an app store.
Honest question: Am I missing something? What would I get in return if I invested all the work to build for iOS or Mac?
Plenty of things do work better as a native application. Packaging is a pain across the board nowadays. Apple is pretty good: you pay a yearly fee if you want your executable signed and notarized, but they make it very hard to run without that (for the lay person). Windows can run apps without them being signed, but it gives you hell, and the signing process is awful and expensive. Linux can be a packaging nightmare.
And that website is hosted somewhere; you're using several layers of network providers; the registrar has control over your domain; the copper in the ground most likely has an easement controlling access to it, so your internet provider can literally just cut off access whenever they want; and if you publish your apps to a registry, the registry controls your apps as well.
There are so many companies that control access to every part of your life. Your argument is meaningless because it applies to _everything_.
A trustless society is not one that anyone should want to be a part of. Regulations exist for a reason.
Not wanting centralization under one company does not equal advocating for "trustless society".
All the things you mentioned (registrars, ISPs, registries, etc) have multiple alternative providers you can choose from. Get cut off from GCP, move to AWS. Get banned in Germany, VPS in Sweden. Domain registration revoked, get another domain.
Lose your Apple ID, and you're locked out of the entire Apple ecosystem, permanently, period.
Even if a US federal court ordered that you could never again legally access the internet, that would only be valid within the US, and you could legally and freely access it by going to any other country.
So in fact, rather than everything being equivalent to Apple's singular control, almost nothing is equivalent (really, only another company with a similarly closed ecosystem).
If AWS decided to block your access to their ecosystem you would lose so, so much more than from Apple blocking your access to theirs. If the US decided what you said, t1 networks would restrict your access across much of the planet.
Your logic makes no sense since you can easily switch to Google or whatever other smartphone providers there are (China has a bunch).
But of course those providers can also cut you off, so what I said still applies.
First off, AWS cutting off your AWS account does not block you from visiting other websites that use AWS, it just means you can't use AWS itself as a customer. Apple's ecosystem OTOH means that OP's issue with iCloud disabled their account globally across all Apple services, not just within iCloud itself (and in fact, to further illustrate the difference, losing access to your AWS console account doesn't cut off your account for Amazon.com shopping).
> Your logic makes no sense since you can easily switch to Google or whatever other smartphone providers there are (China has a bunch).
The person above was asking why they *as a developer* would want to risk their time and effort developing for iOS. Any work developing for iOS, e.g. in Swift or Objective-C, is not portable to other platforms like Android. If they lose their Apple account, any time they spent on iOS-specific frameworks is totally wasted; that's their point.
> If the US decided what you said, t1 networks would restrict your access across much of the planet.
No offense, but you have no clue what you're talking about. There are in fact court orders where internet access is restricted as part of criminal sentencing. Here's a quick example guide [1]. No part of that involves network providers cutting you off.
How on earth do you imagine a "t1 network" provider would determine that a person using their network from the UK is actually a person from the US with a court order against using their network? And to be clear, the court orders don't compel ISPs to restrict access, or attempt to enforce blocks like you are suggesting.
If you're full in Apple ecosystem, like my GF, you get:
- Shared clipboard across devices
- Shared documents
- Shared browser
- Shared passwords
- Free, quality office suite
- Interoperable devices (use iPhone as camera on Mac, for example)
- Payments across different devices (use your watch to pay, for example, shared with your iPhone)
All of this with just one account without any third-party service.
And a billion more things, probably; I'm not a full Apple head.
In the rare case (maybe once per month or so) where that happens, I start a script on my laptop that starts a webapp both the phone and the laptop can open in their browser and send text to each other.
The overhead of starting it and typing "laptop.tekmol" into the browser on both machines is only a few seconds.
That seems much saner to me than constantly having some interaction between the two devices going on.
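Not my exact script, but something like this minimal standard-library sketch is all it needs to be (no auth, LAN use only):

    from html import escape
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    NOTES = []
    PAGE = b'<form method="post"><textarea name="t"></textarea><button>send</button></form><pre>%s</pre>'

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE % escape("\n".join(NOTES)).encode())

        def do_POST(self):
            length = int(self.headers["Content-Length"])
            form = parse_qs(self.rfile.read(length).decode())
            NOTES.append(form.get("t", [""])[0])
            self.do_GET()                 # redisplay the page with the new note

    # Both devices open http://<laptop-hostname>:8000 in a browser.
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()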
The standard argument here is that the maintainers of the core technology are likely to do a better job of hosting it because they have deeper understanding of how it all works.
Hosting is a commodity. Runtimes are too. In this case, the strategy is to make a better runtime, attract developers, and eventually give them a super easy way to run their project in the cloud, e.g. bun deploy, which is a reserved no-op command. I really like Bun's DX.
Well, if they suddenly changed the license, we'd get a new Redis --> Valkey situation.
Or even more recently, look at minio no longer maintaining their core open source project!
I mean if you're getting X number of users per day and you don't need to pay for bandwidth or anything, there's gotta be SOME way to monetize down the line.
Whether your userbase or the current CEO likes it or not.
No, but faced with either a loss or a modest return, they'll take the modest return (unless it's more beneficial not to, come tax season). Unicorns are called unicorns for a reason.
1: Moving everything to SQLite
2: Using mostly JSON fields
Both started already a few years back and accelerated in 2025.
SQLite is just so nice and easy to deal with, with its no-daemon, one-file-per-DB and one-type-per-value approach.
And the JSON arrow functions make it a pleasure to work with flexible JSON data.
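For example (the -> and ->> operators need SQLite 3.38+; table and field names are just for illustration):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE docs (body TEXT)")    # JSON stored as TEXT
    con.execute("""INSERT INTO docs VALUES ('{"name": "Ada", "tags": ["a", "b"]}')""")

    print(con.execute("SELECT body->>'name' FROM docs").fetchone())       # ('Ada',)
    print(con.execute("SELECT body->'tags' FROM docs").fetchone())        # ('["a","b"]',)
    print(con.execute("SELECT body->>'$.tags[1]' FROM docs").fetchone())  # ('b',)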