
From my perspective on databases, two trends continued in 2025:

1: Moving everything to SQLite

2: Using mostly JSON fields

Both started a few years back and accelerated in 2025.

SQLite is just so nice and easy to deal with, with its no-daemon, one-file-per-db and one-type-per-value approach.

And the JSON arrow functions make it a pleasure to work with flexible JSON data.
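If you haven't used them: -> keeps the result as JSON, ->> unwraps it to a plain SQL value. A rough sketch of what that looks like (Python stdlib; table and field names made up, needs a bundled SQLite >= 3.38):

    import sqlite3  # the -> / ->> operators need SQLite >= 3.38

    db = sqlite3.connect("app.db")
    db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, data TEXT)")
    db.execute("INSERT INTO events (data) VALUES (?)",
               ('{"user": "alice", "amount": 42, "tags": ["a", "b"]}',))

    # ->> extracts a plain SQL value, -> keeps the result as JSON
    rows = db.execute("""
        SELECT data ->> '$.user'   AS user,
               data ->> '$.amount' AS amount,
               data ->  '$.tags'   AS tags_json
        FROM events
        WHERE data ->> '$.amount' > 10
    """).fetchall()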


From my perspective, everything's DuckDB.

Single file per database, multiple ingestion formats, full-text search, S3 support, Parquet file support, columnar storage, fully typed.

WASM version for full SQL in JavaScript.


This is a funny thread to me because my frustration is at the intersection of your comments: I keep wanting sqlite for writes (and lookups) and duckdb for reads. Are you aware of anything that works like this?

DuckDB can read/write SQLite files via extension. So you can do that now with DuckDB as is.

https://duckdb.org/docs/stable/core_extensions/sqlite
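From Python it's roughly this (sketch; file and table names made up, and the extension may autoload in recent versions anyway):

    import duckdb  # pip install duckdb

    con = duckdb.connect()
    con.execute("INSTALL sqlite; LOAD sqlite;")
    con.execute("ATTACH 'app.db' AS app (TYPE sqlite)")

    # analytical query straight over the SQLite tables
    con.sql("""
        SELECT user_id, count(*) AS n, sum(amount) AS total
        FROM app.events
        GROUP BY user_id
        ORDER BY total DESC
        LIMIT 10
    """).show()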


My understanding is that this is still too slow for quick inserts, because duckdb (like all columnar stores) is designed for batches.

The way I understood it, you can do your inserts with SQLite "proper", and simultaneously use DuckDB for analytics (aka read-only).

Aha! That makes so much sense. Thank you for this.

Edit: Ah, right, the downside is that this is not going to have good olap query performance when interacting directly with the sqlite tables. So still necessary to copy out to duckdb tables (probably in batches) if this matters. Still seems very useful to me though.
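For what it's worth, the copy-out step is small; a rough sketch of an incremental batch copy (assumes the sqlite attach from above, table and column names made up):

    import duckdb

    con = duckdb.connect("analytics.duckdb")
    con.execute("INSTALL sqlite; LOAD sqlite;")
    con.execute("ATTACH 'app.db' AS app (TYPE sqlite)")

    tables = [r[0] for r in con.execute("SHOW TABLES").fetchall()]
    if "events" not in tables:
        # first run: materialize a native columnar copy
        con.execute("CREATE TABLE events AS SELECT * FROM app.events")
    else:
        # later runs: copy only rows we haven't consolidated yet
        last = con.execute("SELECT coalesce(max(id), 0) FROM events").fetchone()[0]
        con.execute("INSERT INTO events SELECT * FROM app.events WHERE id > ?", [last])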


Analytics is done in "batches" (daily, weekly) anyways, right?

We know you can't get both row and column order at the same time, and that continuously maintaining both means duplication and ensures you get the worst case from both worlds.

Local, row-wise writing is the way to go for write performance. Column-oriented reads are the way to do analytics at scale. It seems alright to have a sync process that does the order re-arrangement (maybe with extra precomputed statistics, and sharding to allow many workers if necessary) to let queries of now historical data run fast.


It's not just about row versus column. OLAP stores are potentially denormalised as well, and sometimes pre-aggregated, such as rolled up by day or by customer.

If you really need to get performance you'll be building a star schema.


Not all olap-like queries are for daily reporting.

I agree that the basic architecture should be row order -> delay -> column order, but the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.

I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?

I mean, this isn't that different conceptually from the architecture of log-structured merge trees, which have this same kind of "duplication" but for good purpose. (Indeed, rocksdb has been the closest thing to what I want for this workload that I've found; I just think it would be neat if I could use sqlite+duckdb instead, accepting some tradeoffs.)


> the question (in my mind) is balancing the length of that delay with the usefulness of column order queries for a given workload. I seem to keep running into workloads that do inserts very quickly and then batch reads on a slower cadence (either in lockstep with the writes, or concurrently) but not on the extremely slow cadence seen in the typical olap reporting type flow. Essentially, building up state and then querying the results.

I see. Can you come up with row/table watermarks? Say your column store is up to date with a certain watermark, so any query that requires freshness beyond that will need to snoop into the rows that haven't made it into the columnar store to check for data up to the required query timestamp.

In the past I've dealt with a system that had read-optimised columnar data that was overlaid with fresh write-optimised data and used timestamps to agree on the data that should be visible to the queries. It continuously consolidated data into the read-optimised store instead of having the silly daily job that you might have in the extremely slow cadence reporting job you mention.

You can write such a system, but in reality I've found it hard to justify building one for continuous updates when a 15-minute delay isn't the end of the world. It's doable if you want it, though.
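To make the watermark idea concrete, a rough sketch of the read path (Python, all names made up): everything at or below the watermark is answered from the columnar copy, everything above it from the row store, and the caller sees the union.

    import sqlite3, duckdb

    def total_for_user(user_id, watermark):
        # historical part: already consolidated into the columnar store
        duck = duckdb.connect("analytics.duckdb")
        hist = duck.execute(
            "SELECT coalesce(sum(amount), 0) FROM events "
            "WHERE user_id = ? AND ts <= ?", [user_id, watermark]).fetchone()[0]

        # fresh part: rows not yet consolidated, straight from SQLite
        lite = sqlite3.connect("app.db")
        fresh = lite.execute(
            "SELECT coalesce(sum(amount), 0) FROM events "
            "WHERE user_id = ? AND ts > ?", (user_id, watermark)).fetchone()[0]

        return hist + fresh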

> I'm not so sure about "continuously maintaining both means duplication and ensuring you get the worst case from both worlds". Maybe you're right, I'm just not so sure. I agree that it's duplicating storage requirements, but is that such a big deal? And I think if fast writes and lookups and fast batch reads are both possible at the cost of storage duplication, that would actually be the best case from both worlds?

I mean that if you want both views in a consistent world, then writes will bring things to a crawl, as both row- and column-ordered data need to be updated before the write lock is released.


Yes! We're definitely talking about the same thing here! Definitely not thinking of consistent writes to both views.

Now that you said this about watermarks, I realize that this is definitely the same idea as streaming systems like flink (which is where I'm familiar with watermarks from), but my use cases are smaller data and I'm looking for lower latency than distributed systems like that. I'm interested in delays that are on the order of double to triple digit milliseconds, rather than 15 minutes. (But also not microseconds.)

I definitely agree that it's difficult to justify building this, which is why I keep looking for a system that already exists :)


I think you could build an ETL-ish workflow where you use SQLite for OLTP and DuckDB for OLAP, but I suppose it's very workload dependent, there are several tradeoffs here.

Right. This is what I want, but transparently to the client. It seems fairly straightforward, but I keep looking for an existing implementation of it and haven't found one yet.

very interesting. whats the vector indexing story like in duckdb these days?

also are there sqlite-duckdb sync engines or is that an oxymoron


https://duckdb.org/docs/stable/core_extensions/vss

It's not bad if you need something quick. I haven't had a large need for ANN in duckdb since I'm using it for more analytical/exploratory needs, but it's definitely there if you need it.


From my perspective - do you even need a database?

SQLite is kind of the middle ground between a full-fat database and 'writing your own object storage'. To put it another way, it provides a 'regularised' object access API, rather than, say, a vector of variant types that you filter or map over.


If I wrote my own data storage, I would end up re-implementing SQLite. Why would I want to do that?

Not sure if this is quite what you are getting at, but the SQLite folks even mention this as a great use-case: https://www.sqlite.org/appfileformat.html

As a backend database that's not multi-user, how many web connections doing writes can it realistically handle? Assuming writes are small, say 100+ rows each?

Any mitigation strategy for larger use cases?

Thanks in advance!


After 2 years in production with a small (but write-heavy) web service... it's a mixed bag. It definitely does the job, but not having a DB server has not only benefits but also drawbacks. The biggest is the (lack of) caching of the file/DB in RAM. As a result I have to do my own read caching, which is fine in Rust using the moka caching library, but it's still something you have to do yourself, which would otherwise come for free with Postgres. This of course also makes it impossible to share the cache between instances; doing so would require employing Redis/memcached, at which point it would be better to use Postgres.
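A rough sketch of that read-cache pattern, for anyone curious (Python here rather than Rust, names made up):

    import time, sqlite3

    _cache = {}  # key -> (expires_at, rows)

    def cached_query(db, sql, params=(), ttl=30):
        key = (sql, params)
        hit = _cache.get(key)
        if hit and hit[0] > time.time():
            return hit[1]
        rows = db.execute(sql, params).fetchall()
        _cache[key] = (time.time() + ttl, rows)
        return rows

    db = sqlite3.connect("app.db")
    stats = cached_query(db, "SELECT user_id, sum(amount) FROM events GROUP BY user_id", ttl=300)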

It has been OK so far, but I will definitely have to migrate to Postgres at some point, sooner rather than later.


How would caching on the db layer help with your web service?

In my experience, caching makes most sense on the CDN layer. Which not only caches the DB requests but the result of the rendering and everything else. So most requests do not even hit your server. And those that do need fresh data anyhow.


As I said, my app is write heavy. So there are several separate processes that constantly write to the database, but of course, often, before writing, they need to read in order to decide what/where to write. Currently they need to have their own read cache in order to not clog the database.

The "web service" is only the user facing part which bears the least load. Read caching is useful there too as users look at statistics, so calculating them once every 5-10 minutes and caching them is needed, as that requires scanning the whole database.

A CDN is something I don't even have. It's not needed for the amount of users I have.

If I was using Postgres, these writer processes plus the web service would share the same read cache for free (coming from Postgres itself). The difference wouldn't be huge if I migrated right now, but by now I already have the custom caching.


I am no expert, but SQLite does have an in-memory store, at least for tables that need it. Of course, syncing the writes to this store may need more work.

A couple thousand simultaneous writes should be fine, depending on total system load, whether you're running on spinning disks or SSDs, and your p50/p99 latency demands; and of course you'd need to enable the WAL pragma to allow simultaneous writes in the first place. Run an experiment to be sure about your specific situation.
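A rough setup for that experiment (Python stdlib; WAL is set per database file and persists once enabled, table name made up):

    import sqlite3

    db = sqlite3.connect("app.db", timeout=5.0)   # wait up to 5s on a locked database
    db.execute("PRAGMA journal_mode=WAL")         # readers no longer block the single writer
    db.execute("PRAGMA synchronous=NORMAL")       # common companion setting for WAL
    db.execute("PRAGMA busy_timeout=5000")        # retry instead of failing immediately
    db.execute("CREATE TABLE IF NOT EXISTS events (user_id INTEGER, amount INTEGER)")

    with db:  # one short transaction per request
        db.execute("INSERT INTO events (user_id, amount) VALUES (?, ?)", (1, 42))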

You also need BEGIN CONCURRENT to allow simultaneous write transactions.

https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...


Why have multiple connections in the first place?

If your writes are fast, doing them serially does not cause anyone to wait.

How often does the typical user write to the DB? Often it is like once per day or so (for example on hacker news). Say the write takes 1/1000s. Then you can serve

    1000 * 60 * 60 * 24 = 86 million users
And nobody has to wait longer than a second when they hit the "reply" button, as I do now ...

> If your writes are fast, doing them serially does not cause anyone to wait.

Why impose such a limitation on your system when you don't have to by using some other database actually designed for multi user systems (Postgres, MySQL, etc)?


Because development and maintenance are faster and easier to reason about. That increases the chances you really get to 86 million daily active users.

So in this solution, you run the backend on a single node that reads/writes from an SQLite file, and that is the entire system?

That's basically how the web started. You can serve a ridiculous number of users from a single physical machine. It isn't until you get into the hundreds-of-millions-of-users ballpark that you need to actually create architecture. The "cloud" lets you rent a small part of a physical machine, so it actually feels like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of space. All for less than 2k per month (amortized). Yeah, you need an actual (small) team of people to manage that; but that will get you so far that it is utterly ridiculous.

Assuming you can accept 99% uptime (that's ~3.5 days a year of downtime). And if you were on a single cloud in 2025, that's basically what last year looked like anyway.


I agree...there is scale and then there is scale. And then there is scale like Facebook.

We need not assume internet/FB-level scale for typical biz apps where one instance may support a few hundred users max, or even a few thousand. Over-engineering under such assumptions is likely cost-ineffective and may even increase the surface area of risk. $0.02


It goes much further than that: a single moderately sized VPS web server can handle millions of hard-to-cache requests per day, all hitting the db.

Most will want to use a managed db, but for a real basic setup you can just run postgres or mysql on the same box. And running your own db on a separate VPS is not hard either.


That depends on the use case. HN is not a good example. I am referring to business applications where users submit data. Of course, in these cases we are looking at hundreds, not millions, of users. The answer is: good enough.

>How often does the typical user write to the DB

Turns out a lot when you have things like "last accessed" timestamps on your models.

Really depends on the app

I also don't think that calculation is valid. Your users aren't going to be purely uniformly accessing the app over the course of a day. Invariably you'll have queuing delays above a significantly smaller user count (but maybe the delays are acceptable)


Pardon my ignorance, yet wasn't the prevailing thought a few years ago that you would never use SQLite in production? Has that school of thought changed?

SQLite as a database for web services had a little bit of a boom due to:

1. People gaining newfound appreciation of having the database on the same machine as the web server itself. The latency gains can be substantial and obviously there are some small cost savings too as you don't need a separate database server anymore. This does obviously limit you to a single web server, but single machines can have tons of cores and serve tens of thousands of requests per second, so that is not as limiting as you'd think.

2. Tools like litestream will continuously back up all writes to object storage, so that one web server having a hardware failure is not a problem as long as your SLA allow downtimes of a few minutes every few years. (and let's be real, most small companies for which this would be a good architecture don't have any SLA at all)

3. SQLite has concurrent writes now, so it's gotten much more performant in situations with multiple users at the same time.

So for specific use cases it can be a nice setup because you don't feel the downsides (yet) but you do get better latency and simpler architecture. That said, there's a reason the standard became the standard, so unless you have a very specific reason to choose this I'd recommend the "normal" multitier architectures in like 99% of cases.


> SQLite has concurrent writes now

Just to clarify: Unless I've missed something, this is only with WAL mode and concurrent reads at the same time as writes, I don't think it can handle multiple concurrent writes at the same time?


As I understand it, there can be concurrent writes as long as they don't touch the same data (the same file system pages, to be exact). Also, the actual COMMIT part is still serialized and you need to begin your transactions with BEGIN CONCURRENT. If two transactions do conflict, the later one will be forced to ROLLBACK although you can still try again. It is up to the application to do this.

See also https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...
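The application-side retry is the part that is easy to forget; roughly (Python sketch, table name made up; this uses plain BEGIN IMMEDIATE since BEGIN CONCURRENT needs the branch build, but the shape is the same):

    import sqlite3, time, random

    db = sqlite3.connect("app.db", isolation_level=None)  # autocommit; we manage BEGIN/COMMIT

    def write_with_retry(params, attempts=5):
        for i in range(attempts):
            try:
                db.execute("BEGIN IMMEDIATE")  # BEGIN CONCURRENT on the branch build
                db.execute("INSERT INTO events (user_id, amount) VALUES (?, ?)", params)
                db.execute("COMMIT")
                return
            except sqlite3.OperationalError:    # "database is locked" and friends
                try:
                    db.execute("ROLLBACK")
                except sqlite3.OperationalError:
                    pass                        # no transaction was actually started
                time.sleep(random.uniform(0, 0.05 * (i + 1)))  # back off, then retry
        raise RuntimeError("gave up after repeated write conflicts")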

This type of limitation is exactly why I would recommend "normal" server-based databases like Postgres or MySQL for the vast majority of web backends.


Also just a note: BEGIN CONCURRENT is not in mainline SQLite releases. You need to build your own from a branch. Not a huge deal but just something to note.

I think only Turso — SQLite rewritten in Rust — supports that.

I'm a fan of SQLite but just want to point out there's no reason you can't have Postgres or some other RDBMS on the same machine as the web server too. It's just another program running in the background bound to a port, similar to the web server itself.

SQLite is likely the most widely used production database due to its widespread usage in desktop and mobile software, and SQLite databases being a Library of Congress "sustainable format".

Most of the usage was/is as a local ACID-compliant replacement for txt/ini/custom local/bundled files though.

"Production" can mean many different things to different people. It's very widely used as a backend strutured file format in Android and iOS/macOS (e.g. for appls like Notes, Photos). Is that "production"? It's not widely used and largely inappropriate for applications with many concurrent writes.

The SQLite docs have a good overview of appropriate and inappropriate uses: https://sqlite.org/whentouse.html It's best to start with Section 2, "Situations Where A Client/Server RDBMS May Work Better".


The reason you heard that was probably because they were talking about a more specific circumstance. For example SQLite is often used as a database during development in Django projects but not usually in production (there are exceptions of course!). So you may have read when setting up Django, or a similar thing, that the SQLite option wasn't meant for production because usually you'd use a database like Postgres for that. Absolutely doesn't mean that SQLite isn't used in production, it's just used for different things.

You are right. Thanks!

Only for large-scale multi-user applications. It's more than reasonable as a data store in local applications or at smaller scales where having the application and data layer on the same machine is acceptable.

If you’re at a point where the application needs to talk over a network to your database then that’s a reasonable heuristic that you should use a different DB. I personally wouldn’t trust my data to NFS.


What is a "local application"?

Funny how people used to ask "what is a cloud application", and now they ask "what is a local application" :-)

Local as in "desktop application on the local machine" where you are the sole user.


This, though I think other posters have pointed to a web app/site that’s backed by SQLite. It can be a perfectly reasonable approach, I think, as the application is the web server and it likely accesses SQLite on the same machine.

That commenter's idea clearly wasn't about desktop application on a local machine. That is why I asked.

You mean Andrew's comment? I took it in the broadest sense to try and give a more complete answer.

FWIW (and this is IMHO of course) DuckDB makes working with random JSON much nicer than SQLite, not least because I can extract JSON fields to dense columnar representations and do it in a deterministic, repeatable way.

The only thing I want out of DuckDB core at this point is support for overriding the columnar storage representation for certain structs. Right now, DuckDB decomposes structs into fields and stores each field in a column. I'd like to be able to say "no, please, pre-materialize this tuple subset and store this struct in an internal BLOB or something".


I would say SQLite when possible, PostgreSQL (incl. extensions) when necessary, DuckDB for local/hobbyist data analysis and BigQuery (often TB or PB range) for enterprise business intelligence.

For as much talk as I see about SQLite, are people actually using it or does it just have good marketers?

It's the standard for mobile. That said, in server-side enterprise computing, I know no one who uses it. I'm sure there are applications, but in this domain you'd need a good justification for not following standard patterns.

I have used DuckDB on an application server because it computes aggregations lightning fast which saved this app from needing caching, background services and all the invalidation and failure modes that come with those two.


> are people actually using it or does it just have good marketers?

_You_ are using it right this second. It's storing your browser's bookmarks (at a minimum, and possibly other browser-internal data).


Among people who can actually code (in contrast to just stitch together services), I see it used all around.

For someone who openly describes his stack and revenue, look up Pieter Levels, how he serves hundreds of thousands of users and makes millions of dollars per year, using SQLite as the storage layer.


Do you have a specific use case you're curious about? It's the most widely deployed database software of all time. https://sqlite.org/mostdeployed.html

If you use desktops, laptops, or mobile phones, there is a very good chance you have at least ten SQLite databases in your possession right now.

It is fantastic software, have you ever used it?

I don't have a use case for it. I've used it a tiny bit for mocking databases in memory, but because it's not fully Postgres, I've switched entirely to TestContainers.

I think the right pattern here is edge sharding of user data. Cloudflare makes this pretty easy with D1/Hyperdrive.

Man, I hope so. Bailing people out of horribly slow NoSQL databases is good business.

Sign-up pages are not Show HNs:

https://news.ycombinator.com/showhn.html


Why is Hacker News so interested in regular software jobs?

Isn't the idea of the site to "hack" as in thinking outside the box, building your own projects and companies, doing things in interesting new ways?


There is much innovation, hacking, etc. in "regular software jobs". Many companies that get launched are about improving efficiency or solving problems that these "regular software jobs" face. Once a startup grows, the product may continue to be interesting and new, but the day to day for the engineers building it begins to resemble a "regular software job".

Because people have families, mortgages, rent.

It's all fun and games until the bank takes back your house.

Also, while I love programming, I have zero interest in owning a business.


Agreed, although the issue is that it's damn difficult to find something real that is at the intersection of: (1) pleasing users, (2) making them pay, (3) actually being useful, (4) being possible to get off the ground without a multi-year investment in time or money, and (5) remaining profitable or even revenue-generating for at least say three years before competition or evolution gets to it. It's a lot easier to hack something nice when you don't have to sell it.

true!

I do believe in the 1 man SaaS legends. Any of us could build a little app overnight and watch it succeed.

I just don't have the sales/marketing muscle to back up my efforts.


Do you have 1 of these 2 things?

1) No family to support
2) A set of assets to support not having an income?


1) No 2) Kind of, but isn't that a really bad idea? Especially knowing that the industry seems to be going down.

That was back then, when we were in our infancy as an industry and everything was about spitting out some cool graphics in less than 100 lines of JavaScript.

Right now (especially looking at the world economy) it's all about getting yourself a nice, stable placement.

I haven't stopped hacking away, but I need an income.


Because having a golden cushion on which to rest and create isn't something most people have.

There are different kinds of folks on HN. I, for one, just don't have any business ideas worth quitting a job for.

This is cool. I am currently using GitHub codespaces and I would love a version of it with nothing but a web based terminal. I don't need all the other windows they put around it. This might be it.

Trying my way around it now. Not sure what is going on:

    me: apt install apache
    the shell: exe.dev repl: command not found: "apt"
What is "exe.dev repl"? Am I not in a shell?

    me: bash
    the shell: exe.dev repl: command not found: "bash"
Damn, it seems the "shell" is not a Linux shell?


[exe.dev co-founder here] Hi there, I am not sure exactly where you are, but your VM is ubuntu derived and definitely starts with apt and bash. Perhaps try `ssh yourvm.exe.xyz`?

Thanks for trying it!


I can't use a native ssh client. I am using a browser. I clicked on "Shell" on top of the screen.

Oh, I think I found a real shell now! You have to click "VMs" then on the VM and then "Terminal".

Yay, this is great!


While at Tailscale you built sketch.dev only to actually build this product? Love it. Ultimate yak shave. Kind of like how Antithesis was the product inside FoundationDB.


What you connect to first is the exe.dev jump server/management interface. You can ssh into your vm from there. Try typing help


Nmap on 52.35.87.134 (exe.dev) returns many open ports.


Linux


Isn't Bluesky supposed to be decentralized?

How can some party lock you out of it?


Well Bluesky/the protocol behind it is decentralized

but the DM (direct message) functionality itself isn't decentralized, and Bluesky even notes that it's unencrypted and centralized, IIRC.


Hmm.. so the public channel is decentralized but the private channel is not.

There is actually a technical solution to that then. Use the public channel to send/receive private messages. Everyone could publish a public key. Then everyone could send private messages to everyone by encrypting them with the public key of the receiver and sending them over the public channel.

Shall we try it? My public key:

    -----BEGIN PUBLIC KEY-----
    MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAKs9CbOAxSROEdm/+QGyDLdxITTq+YdbmIlOM0jemqKvLXinnBUDeDRSGXOoCnygXLFsm6R31szySqiVunasX/8CAwEAAQ==
    -----END PUBLIC KEY-----
You can send me a private message by encrypting it here:

https://anycript.com/crypto/rsa

And then pasting the encrypted version into a reply to this comment :)
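Or, if you'd rather not paste anything into a random website, the same thing locally is roughly this (Python with the cryptography package; note a key this small only fits very short messages, and the padding the site expects is a guess on my part):

    import base64
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    pem = open("tekmol_pub.pem", "rb").read()       # the public key from the comment above
    pub = serialization.load_pem_public_key(pem)

    # PKCS#1 v1.5 padding is an assumption about what the linked site expects
    ct = pub.encrypt(b"hello", padding.PKCS1v15())
    print(base64.b64encode(ct).decode())            # paste this into a reply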


Although I enjoy the public-key/private-key ideas, if you wish to talk encrypted, one of the best ways to do so could be Signal, if you don't mind centralization.

But if you want decentralization, some options I can recommend are Matrix, SimpleX, Session, etc.

But to be honest, there is a good point that you raise about how to talk decentralized on Bluesky.

Well, one of the ideas I can think of right now is that someone could use https://keyoxide.org/ to paste in their public key, connect both Bluesky and Matrix, and then have the Keyoxide profile as part of something public like a comment.

The problem with this is that it becomes tedious and does add more friction to the whole thing, but it's definitely possible.


Paste my private keys into a form control on the web? What could go wrong?


Who doesn't review all the several megabytes of minified JavaScript for every page they visit?


he's asking you to paste his public key, not your private one.


If you choose to use a centralized frontend to access Bluesky (everyone does this) and that frontend has to follow laws because it's run by a corporation... that's what you get.


There is no way to access your DMs except using Bluesky's centralized backend server.


Since even after 2 hours nobody is discussing the actual font, let me tell you what comes to my mind when I read anything about Google and design:

They got phone design right.

I just can't get my head around it that even Apple, which is supposed to be THE design company, is making phones that can't lie on a table without wobbling like a barstool on a crooked floor. It just feels so broken to me. So detrimental to my sense of aesthetics.

Google phones tackled it with an elegant solution. Thanks for that. I wouldn't know what phone to use if Pixels didn't exist.


The irony is that you're still not discussing the font


That's his point. Since nobody else is, he's not going to either.


Apple probably has swathes of real-world usability data showing that virtually no one uses their phone for prolonged periods of time while it's lying down on a hard, flat surface.

You may be right about the aesthetics (and Lord Jobs may well have agreed with you) but they may have made the tradeoff consciously.


One can say "they probably had data to support it" about virtually any decision. It is not really a defense from critique. It may have been deliberate, but it still feels wrong and bad.


I think the point is that it feels wrong and bad to a negligible number of people.


Hot take: a wobbly phone is much easier to pick up from the table.


Original iPhone got design right.

I don't think there's a single modern smartphone that I like. My latest favourite smartphone was iPhone 4S. No camera bump. Perfect size, fits well in my hand, operable with one thumb. Perfect display size, enough to present all information I need. Perfectly usable without ugly case.


> Perfectly usable without ugly case.

Why would you buy an ugly case and not a clean, well-designed, functional one?

If you liked the original iPhone design, getting a rounded, hand-fitting case would be the way to go IMHO (on the size difference, there's no way out at this point).


My previous phone was the iPhone 8. It's truly a world of difference in usability compared to the iPhone 13 I'm using now. I have big hands, so I can use the latter one-handed, but a lot of people I've seen don't.


Why not the excellent iPhone 13 Mini?


My only beef with that one was the slippery soap-bar edges if used without a case. Otherwise, yep, perfect size, disappears in pocket.


My favorite phone of all time (based on hand feel and appearance) was the OnePlus One. It had its software problems, but every phone I've held since then has been a disappointment in the hand.


What bugs me most is that Apple DID do this (I still hold that iPhone SE 1 is the goat) and then decided to drop it because it wasn't as profitable.


I've just got a new Samsung and it's wobbling too. I hate this. Why can't they at least put the cameras in the middle? Or maybe horizontally centred? Or they could just put another bumper on the other side to make it symmetrical. I'm looking for a cover to balance this out, but no luck so far.


The iPhone 17 Pro Max is balanced with their standard case.


> I just can't get my head around it that even Apple, which is supposed to be THE design company, is making phones that can't lay on a table without wobbling like a barstool on a crooked floor. It just feels so broken to me. So detrimental to my sense of aesthetics.

Of all the controversial design choices, I think Apple got this one right.

I do not care if my phone wobbles when flat on the desk. I don’t use my phone like that. It’s in my hand if I’m using it.

I use my phone camera sparingly, but when I pull it out I want it to work very well. And it does. If it takes a little bump out to fit better optics then I don’t care in the slightest.

> Google phones tackled it with an elegant solution. Thanks for that. I wouldn't know what phone to use if Pixels didn't exist.

Making your entire phone choice revolve around the shape of the camera island is the oddest top priority I’ve heard yet, but I’m glad you found one that works for you.


Just so you know, HN in general does not appreciate Reddit rudeness. Your comment would have been fine if not for that last sentence.


Wasn't meant to be rude. More confused, because it's really a unique criterion to pick a phone by.

I think HN mostly doesn't appreciate any defense of Apple or other large companies. I really should stay away from any threads that turn into collections of complaints about big companies because the audience they draw is only interested in negative comments about the companies.


Some of these companies are now designing the phone on the assumption you're going to case it. No other reason to make a Pixel camera bump w/ scratch-vulnerable screen stick out so far.


If that is the case, then either the phone should come with a case or it should not be marketed as a complete product.

How about making the phone more durable by adding 1 mm to its thickness, so that a 50 gram, 4 mm thick case won't need to be added.


I used my case not so much to protect my device, although it definitely does, but so I can hold onto the damn thing.

Without a case it's like holding the last gasps of a bar of almost flat soap. Also keeps it from sliding off surfaces.


If a phone needs a case, then phones should be sold with a case included. I hate cases and have never put one on my phone--and have never had a phone break or crack.


I'd rather have a wobbly phone (how often do you push on your screen when it is flat on the table?) and a proper OS than a proper phone and a wobbly OS.

Gesture navigation on Android was introduced half a decade ago and it is still broken. In most apps my edge swipe to pull out a drawer or a swipe on the right side to 'forward' are still detected as back button swipes. Editing details at the edge of a photo often gets detected as a back button swipe. Ridiculous.


1°) Gesture navigation is entirely optional.

2°) Android followed UX/UI 101 about where to put frequently used buttons: where you can reach them with your thumb. Basic design, right? Apple iOS: the close/back button is usually in the top left corner, unreachable by right-handed users, who constitute about 90% of people (a number that is about the same in all countries and cultures). That's only one example, but the bag it comes from is deep.

You should take a few steps back before publicly displaying polarizing opinions and maybe nuance your words a bit.


1) that’s like saying good UX is entirely optional - sure it is but users will still complain

2) disregarding another blatant discrimination against left-handed users: I switch a couple of times per week between Android and iOS devices for various reasons, and the Android UX is so janky and unintuitive it hurts - it might just be my particular device and it's much better in other cases.

This might be extremely polarising but I agree with GP.


> 1°) Gesture navigation is entirely optional.

It is the default on all modern Android flavors and the overwhelming majority (>90%) of users sticks with defaults. It is likely Google is going to deprecate the navigation bar within a couple of Android versions.

> Apple iOS: the close/back button is usually on the top left corner, unreachable

You clearly never used iOS, because you just backswipe. You rarely if ever touch back buttons.

Not that I disagree, although you're fighting the wrong fight. The big problem is controls being on the top instead of the bottom. Neither Apple nor Google has attempted to fix this, only Samsung partially has with OneUI. And they can't force developers to adhere to "content top, controls bottom". Ironically enough Apple had this fixed until iOS.. 12? From 7-12, the control center was at the bottom. All they had to do was move the notification centre there and figure out a way to make it compatible with a gesture bar.

> right-handed users that only constitutes 90% of people

People tend to one-hand their phone with their non-dominant hand to keep their dominant hand usable.

> You should take a few steps back before displaying publicly polarizing opinions and maybe nuance your words a bit.

I use and develop for both platforms. You just sound like an angry, unknowledgeable fanboy.

Perhaps take heed of your own advice :+)

Edit: if you want an example of something that Android does way better: notification management via notification categories. I get to disable stupid promotional or "typing.." notification categories from an app, whilst maintaining functional ones. iOS should take a page from Android there.


The wobbling is the worst part of the hardware on my iPhone mini, annoys me probably more than fifty times per week.

Because I often unlock it when it is on the desk I also miss Touch ID a lot, because with Face ID I also have to lean forward every time for it to recognise me.


Too bad Pixel support for factory-broken screens sucks, so my "well designed" Pixel has a green vertical line in the middle of the screen. So detrimental to my sense of aesthetics.


I've come to realize that barely anyone I know uses swipe typing anymore, and that this is why using it lying flat is viable in the first place.


    vulnerable to remote code execution from
    systems on the same network segment
Isn't almost every laptop these days autoconnecting to known network names like "Starbucks" etc, because the user used it once in the past?

That would mean that every FreeBSD laptop in proximity of an attacker is vulnerable, right? Since the attacker could just create a hotspot with the SSID "Starbucks" on their laptop and the victim's laptop will connect to it automatically.


If you run FreeBSD on your laptop you don't auto connect to public WiFi.

Joking, but not that much :)


Your wifi chip probably isn’t supported tbh.


This is the real joke.


FreeBSD 15 had a massive improvement in WiFi; however, if you let your computer auto-connect to an "unknown" network... well, that's not good.


My question was about known networks.

As far as I know, access points only identify via their SSID. Which is a string like "Starbucks". So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.


>So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.

Aka an "unknown" or "public" network... don't do that.


There is nothing wrong with using public networks. It's not 2010 anymore. Your operating system is expected to be fully secure[1] even when malicious actors are present in your local network.

[1] except availability, we still can't get it right in setups used by regular people.


Unless you run FreeBSD, apparently


You don't use public networks?

And when you connect to a non-public WiFi for the first time - how do you make sure it is the WiFi you think it is and not some dude who spun up a hotspot on their laptop?


Why does it matter? I mean I guess it did in this case but that is considered a top priority bug and quickly fixed.

I guess my point is that the way the internet works, your traffic goes through a number of unknown and possibly hostile actors on its way to the final destination. Having a hostile actor presenting a spoofed WiFi access point should not affect your security stance in any way. Either the connection works and you have the access you wanted or it does not. If you used secure protocols they are just as secure, and if you used insecure protocols they are just as insecure.

Now, having said that, I will contradict myself: we are used to having our first hop be a high-security trusted domain and tend to be a little sloppy there even when it is not. But still, in general it does not matter. A secure connection is still a secure connection.


WPA2-Enterprise and WPA3 both have certificate chain checking exactly to avoid such attacks.


Hmm. Are you sure that your stack wouldn't accept these discovery packets until after you've successfully authenticated (which is what those chains are for)?

Take eduroam, which is presumably the world's largest federated WiFi network. A random 20 year old studying Geology at Uni in Sydney, Australia will have eduroam configured on their devices, because duh, that's how WiFi works. But, that also works in Cambridge, England, or Paris, France or New York, USA or basically anywhere their peers would be because common sense - why not have a single network?

But this means their device actively tries to connect to anything named "eduroam". Yes it is expecting to eventually connect to Sydney to authenticate, but meanwhile how sure are you that it ignores everything it gets from the network even these low-level discovery packets?


I may be missing something, but it is almost a guarantee that you would not receive an RA in this scenario? eduroam is using WPA2/WPA3 Enterprise, so my understanding is that until you authenticate to the network you do not have L2 network access.

Additionally, eduroam uses certificate auth baked into the provisioning profile to ensure you are authenticating against your organization's IdP. (There are some interesting caveats to this statement that they discuss in https://datatracker.ietf.org/doc/html/rfc7593#section-7.1.1 and the mitigation is the usage of private CAs for cert signing).


dozens of people will be affected


As someone using Linux to build web applications, I wonder what about the Apple ecosystem could make it worth having such a sword of Damocles hanging over me my whole life.

Am I missing something? My current perspective is that not only am I free of all the hassle that comes with building for a closed ecosystem, such as managing a developer account and using proprietary tools, I also avoid its much harder distribution. I can put up a website with no wait time and everybody on planet earth can use it right away. So much nicer than having to go through all the hoops and limitations of an app store.

Honest question: Am I missing something? What would I get in return if I invested all the work to build for iOS or Mac?


Plenty of things do work better as a native application. Packaging is a pain across the board nowadays. Apple is pretty good: you pay a yearly fee if you want your executable signed and notarized, but they make it very hard to run without that (for the lay person). Windows can run apps without them being signed, but it gives you hell and the signing process is awful and expensive. Linux can be a packaging nightmare.


What works better as a native application?


And that website is hosted somewhere; you're using several layers of network providers; the registrar has control over your domain; the copper in the ground most likely has an easement controlling access to it, so your internet provider can literally just cut off your access whenever they want; and if you publish your apps to a registry, the registry controls your apps as well.

There are so many companies that control access to every part of your life. Your argument is meaningless because it applies to _everything_.

A trustless society is not one that anyone should want to be a part of. Regulations exist for a reason.


Not wanting centralization under one company does not equal advocating for "trustless society".

All the things you mentioned (registrars, ISPs, registries, etc) have multiple alternative providers you can choose from. Get cut off from GCP, move to AWS. Get banned in Germany, VPS in Sweden. Domain registration revoked, get another domain.

Lose your Apple ID, and you're locked out of the entire Apple ecosystem, permanently, period.

Even if a US federal court ordered that you could never again legally access the internet, that would only be valid within the US, and you could legally and freely access it by going to any other country.

So in fact, rather than everything being equivalent to Apple's singular control, almost nothing is equivalent (really, only another company with a similarly closed ecosystem).


If AWS decided to block your access to their ecosystem, you would lose so, so much more than from Apple blocking your access to theirs. If the US decided what you said, Tier 1 networks would restrict your access across much of the planet.

Your logic makes no sense since you can easily switch to Google or whatever other smartphone providers there are (China has a bunch).

But of course those providers can also cut you off, so what I said still applies.


First off, AWS cutting off your AWS account does not block you from visiting other websites that use AWS, it just means you can't use AWS itself as a customer. Apple's ecosystem OTOH means that OP's issue with iCloud disabled their account globally across all Apple services, not just within iCloud itself (and in fact, to further illustrate the difference, losing access to your AWS console account doesn't cut off your account for Amazon.com shopping).

> Your logic makes no sense since you can easily switch to Google or whatever other smartphone providers there are (China has a bunch).

The person above was asking about why they *as a developer* would want to risk their time and effort developing for iOS. Any work developing for iOS in e.g. Swift or Objective-C is not portable to other platforms like Android. If they lose their Apple account, any time they spent developing with iOS-specific frameworks is totally wasted, is their point.

> If the US decided what you said, t1 networks would restrict your access across much of the planet.

No offense, but you have no clue what you're talking about. There are in fact court orders where internet access is restricted as part of criminal sentencing. Here's a quick example guide [1]. No part of that involves network providers cutting you off.

[1] https://www.uscourts.gov/about-federal-courts/probation-and-...

How on earth do you imagine a "t1 network" provider would determine that a person using their network from the UK is actually a person from the US with a court order against using their network? And to be clear, the court orders don't compel ISPs to restrict access, or attempt to enforce blocks like you are suggesting.


If you're fully in the Apple ecosystem, like my GF, you get:

- Shared clipboard across devices
- Shared documents
- Shared browser
- Shared passwords
- Free, quality office suite
- Interoperable devices (use iPhone as camera on Mac, for example)
- Payments across different devices (use your watch to pay, for example, shared with your iPhone)

All of this with just one account without any third-party service.

And a billion things more, probably; I'm not a full Apple head.


Strange, I don't need any of that.

And when I hang out with people who ARE in Apple's ecosystem, to me it seems they struggle more to get things done than me.

Why would I want a shared clipboard across multiple devices?


> Why would I want a shared clipboard across multiple devices?

I guess you've never had to type something first on your laptop to paste in a phone app, or vice versa.

Or open a link from a phone messaging app in your laptop browser.


In the rare case (maybe once per month or so) where that happens, I start a script on my laptop that starts a webapp both the phone and the laptop can open in their browser and send text to each other.

The overhead of starting it and typing "laptop.tekmol" into the browser on both machines is only a few seconds.

That seems much saner to me than constantly having some interaction going on between the two devices.
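The script itself is tiny; a stripped-down sketch of the idea (Python stdlib only, hostname and port made up, no escaping or auth since it is only ever up for a minute on my LAN):

    # minimal shared-text pad: open http://laptop.tekmol:8000/ on both devices,
    # paste on one, read on the other
    import urllib.parse
    from http.server import BaseHTTPRequestHandler, HTTPServer

    note = ""

    class Pad(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(("<form method=post><textarea name=t>%s</textarea>"
                              "<button>save</button></form>" % note).encode())

        def do_POST(self):
            global note
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length).decode()
            note = urllib.parse.parse_qs(body).get("t", [""])[0]
            self.do_GET()  # show the updated pad right away

    HTTPServer(("0.0.0.0", 8000), Pad).serve_forever()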


> That seems much saner

Normal people just message themselves on tg or wherever you send and receive messages.

Geeks use KDE Connect.

What you do is weird.


What is the business model behind open source projects like Bun? How can a company "acquire" it and why does it do that?

In the article they write about the early days

    We raised a $7 million seed round
Why do investors invest into people who build something that they give away for free?


The post mentions why - Bun eventually wanted to provide some sort of cloud-hosting SaaS product.


Everyone could offer a cloud-hosted SaaS product that involves Bun, right?

Why invest into a company that has the additional burden of developing bun, why not in a company that does only the hosting?


The standard argument here is that the maintainers of the core technology are likely to do a better job of hosting it because they have deeper understanding of how it all works.

There's also the trick Deno has been trying, where they can use their control of the core open source project to build features that uniquely benefit their cloud hosting: https://til.simonwillison.net/deno/deno-kv#user-content-the-...


Hosting is a commodity. Runtimes are too. In this case, the strategy is to make a better runtime, attract developers, and eventually give them a super easy way to run their project in the cloud. E.g. bun deploy, which is a reserved no-op command. I really like Bun's DX.


Yep. This strategy can work, and it has also backfired before, like with Docker trying to monetize something they gave away for free.


Except Amazon would beat them to it


Free now isn't free forever. If something has inherent value then folks will be willing to pay for it.


Well, if they suddenly changed the license, we'd get a new Redis --> Valkey situation. Or even more recently, look at minio no longer maintaining their core open source project!


I mean if you're getting X number of users per day and you don't need to pay for bandwidth or anything, there's gotta be SOME way to monetize down the line.

Whether your userbase or the current CEO likes it or not.


Ads. Have you seen the dotenv JavaScript package?


And don't forget when Caddy was putting ads in your servers via a `Caddy-Sponsors` header.

(It was reverted after the situation gained visibility, as is tradition.)


Wow. I am unstarring Caddy. That's absolutely wild.


FWIW they do seem to regret it, but it goes to show how very tempting it is for projects to go down the ads route once they have users.


Either for a modest return when it sells or as a tax write-off when it fails.


VCs do not invest for a modest return.


No, but faced with either a loss or a modest return, they'll take the modest return (unless it's more beneficial not to, come tax season). Unicorns are called unicorns for a reason.


The question was why do investors invest

