I don't understand this. This post is saying that ChatGPT is good and is a forerunner because it provides a nice Mac App.
In my opinion, not everything requires a native app. AI chat assistants are perfectly usable in the web browser. For my most-used applications like Slack, I do have native apps (and even Slack, which is a website in a shell, is completely usable as a desktop application). What I really don't understand is the benefit of a native ChatGPT application, other than native widgets instead of web elements.
Fairly subjective, but personally I find having all my apps inside the browser quite constricting. I'd much rather have my apps unencumbered by browser chrome and unintended keystrokes, persisting their window size/position, and all kinds of other affordances. Definitely not a fan of the 'browser as the OS' philosophy; it feels a bit like Inception.
That said, I'm less and less bothered by an app that's Electron under the hood, but I think that's more to do with the quality bar for native apps slipping over the past few cycles (macOS) and forfeiting their advantage.
Super important for Google as a search engine so they can filter out and downrank AI generated results. However I expect there are many models out there which don’t do this, that everyone could use instead. So in the end a “feature” like this makes me less likely to use their model because I don’t know how Google will end up treating my blog post if I decide to include an AI generated or AI edited image.
The EU didn't define any specific method of watermarking nor does it need to be tamper resistant. Even if they had specified it though, it's easy to remove watermarks like SynthID.
I have been curious about this myself. I tried a few basic steganography-detection tools to look for watermarks but didn't find anything. Are you aware of any tools that do what you are suggesting?
> Today, we are putting a powerful verification tool directly in consumers’ hands: you can now upload an image into the Gemini app and simply ask if it was generated by Google AI, thanks to SynthID technology. We are starting with images, but will expand to audio and video soon.
Re-rolling a few times got it to mention trying SynthID, but as a false negative, assuming it actually did the check and isn't just bullshitting.
> No Digital Watermark Detected: I was unable to detect any digital watermarks (such as Google's SynthID) that would definitively label it as being generated by a specific AI tool.
This would be a lot simpler if they just exposed the detector directly, but apparently the future is coaxing an LLM into doing a tool call and then second guessing whether it actually ran the tool.
By anybody's AI using SynthID watermarking, not just Google's AI using SynthID watermarking (it looks like partnership is not open to just anyone though, you have to apply).
I agree. I use Bitwarden on my Samsung Android phone and also on my Linux desktop. Bitwarden currently supports passkeys in almost all the apps on my Android, including Firefox. The same passkeys I use to log in on my phone can be used on my Linux desktop, where I use Firefox with the Bitwarden extension. What's possible now wasn't even possible at the start of this year. I haven't switched everything to passkeys, but I can see them as an alternative to passwords now (passwords really do still shine in some areas too).
I read about the passkey committee being against open-source passkey managers at the start of this year (can't reference it, sorry), but with open-source password/key managers already supporting passkeys, I don't think it turned out to be true.
> I read about the passkey committee being against open-source passkey managers at the start of this year (can't reference it, sorry), but with open-source password/key managers already supporting passkeys, I don't think it turned out to be true.
Tim Cappalli is thoroughly misguided throughout that discussion, but he's not threatening anything. Okta lets users require attestation, but it will never, ever force attestation on anyone.
Tim's not threatening, but he is saying quite clearly that sites on the internet (Relying Parties) might just not accept Passkeys from KeePassXC:
> The unfortunate piece is that your product choices can have both positive and negative impacts on the ecosystem as a whole. I've already heard rumblings that KeepassXC is likely to be featured in a few industry presentations that highlight security challenges with passkey providers, the need for functional and security certification, and the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
Tim's talking about the reality of KeePassXC, and the reality is that this specification is being built in a way where the user is fundamentally out of control: the industry at large has total control over your material, gets to say how you can store your keys, and will refuse you credential managers that it doesn't like.
The proposed Credential Exchange Protocol draft also does not allow you to back up your keys. A credential manager will only Export the key to another credential manager service, across public endpoints on the internet, never passing through the user's control. So you have to trust that your credential manager will actually let you export your credentials, to someone you can trust, at a future point in time. There's an issue open for this, but no real hope this ever gets better. https://github.com/fido-alliance/credential-exchange-feedbac...
Passkeys seem designed to never be trustable by users. There's always some online service somewhere holding your materials that governments will be able to legally strongarm the service into getting access to. You won't be able to Export when you need it. The security people seem intent on making sure computers are totally controlled by corporations and governments, in the worst ways. The top post is right. https://news.ycombinator.com/item?id=45737608
Correct, individual sites could make that choice. They won't, but they could. (Love the mention in the linked comment of Netflix and Disney, two services that don't even support proper MFA.)
We're completely on the same side, to be clear. I just have zero fear of KeePassXC (which I sometimes use with Okta!) being blocked by anything consumer-facing.
To your edit: I suppose this is strictly true, but it's relevant that Apple's own devices satisfy the attested hardware requirement. These are the same devices you need to have a full-fledged Apple account in the first place. That's more Apple doing Apple things than anything to do with passkeys, but it is indeed an example of not being able to use KeePassXC. Will there be more than epsilon cases like that? I still don't think so, for what seem like obvious market reasons.
The specific part that I consider a threat is "which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations".
Sorry, to clarify: Okta is not for our purposes a relying party and won't do anything to force attestation on relying parties. The second bit of what he wrote is ambiguous, but charitably, could simply mean "I used to argue against requiring attestation, but now I'm not sure". Which is fine, since he has absolutely no pull when it comes to how Okta's product works (and to be fair, I don't think he implied otherwise or even mentioned Okta).
But open-source programs can always be modified to do that, so that's a terrible reason to ban open-source passkey managers. And besides, you shouldn't be forbidden from doing things with your own data just because they're unwise.
The point of passkeys is to protect against phishing and password reuse. You can't protect against local compromise, even if your passkeys are stored in something like a YubiKey, because once you log in to your bank with your hardware-backed passkey, the malware on your computer could use the session you started to transfer all of your money out of your account.
That's not quite correct. This can easily be seen by simply considering that the people who developed the passkey standard are also developing a passkey import/export standard which is nearly done and implementations are appearing in the field already.
For example Apple's Passwords app on MacOS/iOS/iPadOS 26 now supports export and import of passkeys to/from other apps that support that standard. I don't know if any other apps have yet actually released such support.
Can you send some documentation on how? For example, I tried googling for transferring a passkey out of popular systems and it doesn't seem possible[1][2] other than through JSON export[3] which is what some sites want to block as I understand.
I don't think you're going to find it. The main vendors are hostile to this workflow. I get why: any flow that exists to export passkeys can be used by hostile actors to walk a 75-year-old millionaire grandma through handing over $$$. I think, however, that that's just a risk we have to make the banks and brokerages accept. It's not a problem with a technical solution.
Wasn't the discussion you responded to about how they currently can't be shared and that the vendors don't want them to be shared as it breaks their desired lock-in?
So the same passkey is being used on multiple devices, rather than different devices (actually applications) having distinct passkeys.
Doesn't that defeat one of the central aims of passkeys? In what way is your setup different from random passwords in Bitwarden - what's the additional security?
Other than that, they shouldn't have a big advantage for a more professional user with unique, long, random passwords. For the common user it should be a great upgrade, giving all these advantages with better UX.
The password manager has become the device (and offers some assurance if the device is lost, as you can log into the manager on another device). I agree this definitely isn't the original vision of passkeys (a different passkey on every device, stored in separate password databases?), but it makes more sense for my cases.
NATS is very good. It's important to distinguish between core NATS and Jetstream, however.
Core NATS is an ephemeral message broker. Clients tell the server what subjects they want messages about, producers publish. NATS handles the routing. If nobody is listening, messages go nowhere. It's very nice for situations where lots of clients come and go. It's not reliable; it sheds messages when consumers get slow. No durability, so when a consumer disconnects, it will miss messages sent in its absence. But this means it's very lightweight. Subjects are just wildcard paths, so you can have billions of them, which means RPC is trivial: Send out a message and tell the receiver to post a reply to a randomly generated subject, then listen to that subject for the answer.
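A minimal sketch of that request/reply flow with the Go client (nats.go), with a made-up subject name, just to make it concrete:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// "Service" side: subscribe to a subject and respond to whatever
	// reply inbox the requester put on the message.
	nc.Subscribe("time.now", func(msg *nats.Msg) {
		msg.Respond([]byte(time.Now().Format(time.RFC3339)))
	})

	// "Client" side: Request publishes with a randomly generated reply
	// subject (an inbox) and waits for the answer on it.
	resp, err := nc.Request("time.now", nil, 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", string(resp.Data))
}
```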
NATS organizes brokers into clusters, and clusters can form hub/spoke topologies where messages are routed between clusters by interest, so it's very scalable; if your cluster doesn't scale to the number of consumers, you can add another cluster that consumes the first cluster, and now you have two hubs/spokes. In short, NATS is a great "message router". You can build all sorts of semantics on top of it: RPC, cache invalidation channels, "actor" style processes, traditional pub/sub, logging, the sky is the limit.
Jetstream is a different technology that is built on NATS. With Jetstream, you can create streams, which are ordered sequences of messages. A stream is durable and can have settings like maximum retention by age and size. Streams are replicated, with each stream being a Raft group. Consumers follow from a position. In many ways it's like Kafka and Redpanda, but "on steroids", superficially similar but just a lot richer.
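To make that concrete, here is a rough sketch of creating such a stream with the Go client's jetstream package (the stream name and limits are made up; check the current API before relying on this):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A durable, replicated stream capturing everything under "orders.>",
	// with age- and size-based retention limits.
	_, err = js.CreateStream(ctx, jetstream.StreamConfig{
		Name:     "ORDERS",
		Subjects: []string{"orders.>"},
		MaxAge:   7 * 24 * time.Hour, // drop messages older than a week
		MaxBytes: 10 << 30,           // cap the stream at ~10 GiB
		Replicas: 3,                  // the stream is its own Raft group
	})
	if err != nil {
		log.Fatal(err)
	}
}
```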
For example, Kafka is very strict about the topic being a sequence of messages that must be consumed exactly sequentially. If the client wants to subscribe to a subset of events, it must either filter client-side, or you have some intermediary that filters and writes to a topic that the consumer then consumes. With NATS, you can ask the server to filter.
Unlike Kafka, you can also nack messages; the server keeps track of what consumers have seen. Nacking means you lose ordering, as the nacked messages come back later. Jetstream also supports a Kafka-like strictly ordered mode. Unlike Kafka, clients can choose the routing behaviour, including worker style routing and deterministic partitioning.
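And a sketch of the server-side filtering plus per-message ack/nack, continuing from the snippet above (the `handle` function is hypothetical):

```go
// Assumes `js` and `ctx` from the previous sketch, and that the ORDERS
// stream exists. The server only delivers "orders.eu.*" messages to this
// consumer; no client-side filtering needed.
cons, err := js.CreateOrUpdateConsumer(ctx, "ORDERS", jetstream.ConsumerConfig{
	Durable:       "eu-billing",
	FilterSubject: "orders.eu.*",
	AckPolicy:     jetstream.AckExplicitPolicy,
})
if err != nil {
	log.Fatal(err)
}

// Ack or nak each message individually; nacked messages are redelivered
// later, which is where the strict ordering goes away.
cc, err := cons.Consume(func(msg jetstream.Msg) {
	if err := handle(msg.Data()); err != nil {
		msg.Nak()
		return
	}
	msg.Ack()
})
if err != nil {
	log.Fatal(err)
}
defer cc.Stop()
```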
Unlike Kafka's rigid networking model (consumers are assigned partitions and they consume the topic and that's it), as with NATS, you can set up complex topologies where streams get gatewayed and replicated. For example, you can have streams in multiple regions, with replication, so that consumers only need to connect to the local region's hub.
While NATS/Jetstream has a lot of flexibility, I feel like they've compromised a bit on performance and scalability. Jetstream clusters don't scale to many servers (they recommend max 3, I think) and large numbers of consumers can make the server run really hot. I would also say that they made a mistake adopting nacking into the consuming model. The big simplification Kafka makes is that topics are strictly sequential, both for producing and consuming. This keeps the server simpler and forces the client to deal with unprocessable messages. Jetstream doesn't allow durable consumers to be strictly ordered; what the SDK calls an "ordered consumer" is just an ephemeral consumer. Furthermore, ephemeral consumers don't really exist. Every consumer will create server-side state. In our testing, we found that having more than a few thousand consumers is a really bad idea. (The newest SDK now offers a "direct fetch" API where you can consume a stream by position without registering a server-side consumer, but I've not yet tried it.)
Lastly, the mechanics of the server replication and connectivity is rather mysterious, and it's hard to understand when something goes wrong. And with all the different concepts — leaf nodes, leaf clusters, replicas, mirrors, clusters, gateways, accounts, domains, and so on — it's not easy to understand the best way to design a topology. The Kafka network model, by comparison, is very simple and straightforward, even if it's a lot less flexible. With Kafka, you can still build hub/spoke topologies yourself by reading from topics and writing to other topics, and while it's something you need to set up yourself, it's less magical, and easier to control and understand.
Where I work, we have used NATS extensively with great success. We also adopted Jetstream for some applications, but we've soured on it a bit, for the above reasons, and now use Redpanda (which is Kafka-compatible) instead. I still think JS is a great fit for certain types of apps, but I would definitely evaluate the requirements carefully first. Jetstream is different enough that it's definitely not just a "better Kafka".
We're particularly interested in NATS' feature of working with individual messages and have been bitten by Kafka's "either process the entire batch or put it back for later processing", which doesn't work for our needs.
Interested if Redpanda is doing better than either.
Redpanda is fantastic, but it has the exact same message semantics as Kafka. They don't even have their own client; you connect using the Kafka protocol. Very happy with it, but it does have the same "whole batch or nothing" approach.
NATS/Jetstream is amazing if it fits your use case and you don't need extreme scalability. As I said before, it offers a lot more flexibility. You can process a stream sequentially but also nack messages, so you get the best of both worlds. It has deduping (new messages for the same subject will mark older ones as deleted) and lots of other convenience goodies.
Thank you so much again. Yes, we are not Google scale; our main priority is durability, and scalability only up to a (I'd say fairly modest) point, i.e. being able to have one beefy NATS server do it all and only add a second one when things start getting bad. Even 3 servers we'd see as a strategic defeat. Plus, we do have data, but again, very far from Google scale.
We've looked at Redis streams, but a few others and I are skeptical, as Redis is not known for good durability practices (talking about the past; I've no idea if they've pivoted well in recent years), and sadly none of us has any experience with MQTT -- though we've heard tons of praise for that one.
But our story is: some tens of terabytes of data, no more than a few tens of millions of events / messages a day, aggressive folding of data in multiple relational DBs, and a very dynamic and DB-heavy UI (I will soon finish my Elixir<=>Rust SQLite3 wrapper so we're likely going to start sharding the DB-intensive customer data to separate SQLite3 databases and I'm looking forward to spearheading this effort; off-topic). For our needs NATS Jetstream sounds like the exactly perfect fit, though time will tell.
I still have the nagging feeling of missing out on still having not tried MQTT though...
At that scale, Jetstream should work well. In my experience, Jetstream's main performance weakness is the per-stream/consumer overhead: Too many and NATS ends up running too hot due to all the state updates and Raft traffic. (Each stream is a Raft group, but so is each consumer.)
If it's tens of TB in a stream, then I've not personally stored that much data in a stream, but I don't see why it wouldn't handle it. Note that Jetstream has a maximum message size of 1MB (this is because Jetstream uses NATS for its client/server protocol, which has that limit), which was a real problem for one use case I had. Redpanda has essentially no upper limit.
Note that number of NATS servers isn't the same as the replication factor. You can have 3 servers and a replication factor of 2 if you want, which allows more flexibility. Both consumers and streams have their own replication factors.
The other option I have considered in the past is EMQX, which is a clustered MQTT system written in Erlang. It looks nice, but I've never used it in production, and it's one of those projects that nobody seems to be talking about, at least not in my part of the industry.
Well, I've worked mainly with Elixir for the last 10-ish years (with a lot of Rust and some Golang here and there), so EMQX would likely be right up my alley.
Do you have any other recommendations? The time is right for us and I'll soon start evaluating. I only have NATS Jetstream and MQTT on my radar so far.
Kafka I already used and rejected for the reasons above ("entire batch or nothing / later").
As for data, I meant tens of terabytes of traffic on busy days, sorry. Most of the time it's a few hundred gigs. (Our area is prone to spikes and the business hours matter a lot.) And again, that's total traffic. I don't think we'd have more than 10-30GB stored in our queue system, ever. Our background workers aggressively work through the backlog and chew data into manageable (and much smaller chunks) 24/7.
And as one of the seniors, I am extremely vigilant about payload sizes. I had to settle on JSON for now, but I push back, hard, on any and all extra data; anything and everything that can be loaded from the DB or even caches is referenced by IDs instead. This also helps with e.g. background jobs that are no longer relevant because a certain entity's state has moved too far forward due to user interaction and the enriching job no longer needs to run: when you have only references in your message payload, the background job is enabled (and even forced) to load data exactly at the time of its run and not assume a potentially outdated state.
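To make that concrete, a hypothetical payload shape (all names invented):

```go
// Illustration only: the payload carries references, not state.
type EnrichJob struct {
	CustomerID string `json:"customer_id"`
	DocumentID string `json:"document_id"`
	// No snapshot of the entity here: the worker loads the current state
	// from the DB or cache at run time, so a job that's no longer relevant
	// can detect that and simply no-op.
}
```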
Anyhow, I got chatty. :)
Thank you. If you have other recommendations, I am willing to sacrifice a little weekend time to give them a cursory research. Again, utmost priority is 100% durability (as much as that is even possible of course) and mega ultra speed is not of the essence. We'll never have even 100 consumers per stream; I haven't ever seen more than 30 in our OTel tool dashboard.
EDIT: I should also say that our app does not have huge internal traffic; it's a lot (Python wrappers around AI / OCR / others is one group of examples) but not huge. As such, our priorities for a message queue are just "be super reliable and be able to handle an okay beating and never lose stuff" really. It's not like in e.g. finance where you might have dozens of Kafka clusters and workers that hand off data from one Kafka queue to another with a ton of processing in the meantime. We are very far from that.
Jetstream is written in Go and the Go SDK is very mature, and has all the support one needs to create streams and consumers; never used it from Elixir, though. EMQX's Go support looks less good (though since it's MQTT you can use any MQTT client).
Regarding data reliability, I've never lost production data with Jetstream. But I've had some odd behaviour locally where everything has just disappeared suddenly. I would be seriously anxious if I had TBs of stream data I couldn't afford to lose, and no way to regenerate it easily. It's possible to set up a consumer that backs up everything to (say) cloud storage, just in case. You can use Benthos to set up such a pipeline. I think I'd be less anxious with Kafka or Redpanda because of their reputation in being very solid.
Going back to the "whole batch or nothing", I do see this as a good thing myself. It means you are always processing in exact order. If you have to reject something, the "right" approach is an explicit dead-letter topic, which you can still consume from the same consumer; it makes the handling very explicit. With Jetstream, you do have an ordered stream, but the broker also tracks acks/nacks, which adds complexity. You get nacks even if you never nack manually: all messages have a configurable ack deadline, and if your consumer is too slow, the message will be automatically bounced. (The ack delay also means that if a client crashes, the message will sit in the broker for up to the ack delay before it gets delivered to another consumer.)
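Roughly what I mean by the explicit dead-letter approach, sketched with a Kafka client such as segmentio/kafka-go (topic and group names are made up, and `process` is a hypothetical handler):

```go
// Assumes a reader on the main topic and a writer on the DLQ topic, e.g.:
//   r := kafka.NewReader(kafka.ReaderConfig{Brokers: []string{"localhost:9092"}, GroupID: "billing", Topic: "orders"})
//   dlq := &kafka.Writer{Addr: kafka.TCP("localhost:9092"), Topic: "orders.dlq"}
for {
	msg, err := r.FetchMessage(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := process(msg.Value); err != nil {
		// Can't skip silently and can't block the partition forever:
		// park the message explicitly on the dead-letter topic.
		if werr := dlq.WriteMessages(ctx, kafka.Message{Key: msg.Key, Value: msg.Value}); werr != nil {
			log.Fatal(werr) // if we can't even park it, stop rather than lose it
		}
	}
	// Commit only after the message was handled or parked, preserving order.
	if err := r.CommitMessages(ctx, msg); err != nil {
		log.Fatal(err)
	}
}
```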
But of course, the Jetstream way is super convenient, too. You can write simpler clients, and the complicated stuff is handled by the broker. But having written a lot of these pipelines, my philosophy these days is that, at least for "this must not be allowed to fail" processing, I prefer something explicit, simpler, and less magical, even if it's a bit less convenient to write code for. Just my 2 cents!
This is getting a bit long. Please do reach out (my email is in my profile) if you want to chat more!
> Jetstream clusters don't scale to many servers (they recommend max 3, I think)
You can have clusters with many servers in them; 3 is actually the minimum required if you want fault tolerance. That's how you scale JetStream horizontally: you let the streams (which can be replicated 1, 3, or 5 times) spread themselves over the servers in the cluster.
JetStream consumers create state on the server (or servers if they are replicated) and are either durable with a well known name or 'auto-cleanup after idle time' (ephemeral), and indeed allow you to ack/nack each message individually rather than just having an offset.
However, that is with the exception of 'ordered consumers', which are really the closest equivalent to Kafka consumers in that the state is kept in the client library rather than on the servers. They deliver messages to the client code strictly in order, they take care of re-deliveries and of recovering from things like getting disconnected from a server, and there's no need to explicitly ack the messages. And if you want to persist your offset (which is the sequence number of the last message the ordered consumer delivered), just like consumer group clients in Kafka persist their offset in a stream, you would persist your offset in a NATS KV bucket.
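A rough sketch of that 'ordered consumer plus offset in a KV bucket' pattern with the Go jetstream package (bucket, key, and handler names are made up; treat the exact API shape as approximate):

```go
// Assumes `js` (jetstream.JetStream) and `ctx` as in the sketches further up
// the thread, plus "strconv" imported. Offsets are stored as decimal strings.
kv, err := js.CreateKeyValue(ctx, jetstream.KeyValueConfig{Bucket: "offsets"})
if err != nil {
	log.Fatal(err)
}

// Resume from the last persisted stream sequence, if any.
var start uint64
if entry, err := kv.Get(ctx, "orders-reader"); err == nil {
	start, _ = strconv.ParseUint(string(entry.Value()), 10, 64)
}

cfg := jetstream.OrderedConsumerConfig{}
if start > 0 {
	cfg.DeliverPolicy = jetstream.DeliverByStartSequencePolicy
	cfg.OptStartSeq = start + 1
}
oc, err := js.OrderedConsumer(ctx, "ORDERS", cfg)
if err != nil {
	log.Fatal(err)
}

cc, err := oc.Consume(func(msg jetstream.Msg) {
	process(msg.Data()) // hypothetical handler

	// Persist the offset (this message's stream sequence) after processing.
	if meta, err := msg.Metadata(); err == nil {
		kv.Put(ctx, "orders-reader", []byte(strconv.FormatUint(meta.Sequence.Stream, 10)))
	}
})
if err != nil {
	log.Fatal(err)
}
defer cc.Stop()
```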
And indeed you can now even go further and use batched direct gets to get very good read speed from the stream, with no extra server state besides an entry in the offset KV; the performance of batched direct gets is very high and can match an ordered consumer's speed. Besides incurring no server state, another advantage of stateless consuming is that all the servers replicating the stream will be used to process direct get requests, not just the currently elected leader (don't forget to enable direct gets for the stream; it's not on by default). So you can scale read throughput horizontally by increasing the number of replicas.
The mechanics of replication: streams and stateful consumers can be replicated using 1, 3 or 5 servers. Servers connect directly together to form a cluster, and JetStream assets (streams/consumers) are spread out over the servers in the cluster. Clusters can be connected together to form super-clusters. Super-cluster means that access to JetStream assets is transparent: streams/consumers located in one cluster can be accessed from any other cluster. You can have streams that mirror or source from other streams, and those mirrors could be located in other clusters to offer faster local access. You can easily move JS assets on the fly from one cluster to another. Leaf nodes are independent servers (which can be clustered) that connect to a cluster like a client would. Being independent means they have their own security for their own clients to connect to them, they can have their own JS domain, and you can source to/from streams between the leaf node's domain and the hub (super-cluster). Leaf nodes can be daisy-chained.
Sorry, what I meant was that each stream (which forms a Raft group) doesn't scale to more. I thought it was 3, but thanks for the correction.
Everything else you wrote confirms what I wrote, no? As for batched direct gets, that's great, but I'm not sure why you didn't go all the way and offer a Kafka-type consumer API that is strictly ordered and persists the offset natively. I've indeed written an application that uses ordered consumers and persists the offset, but it is cumbersome.
Every time I've used Jetstream, what I've actually wanted was the Kafka model: Fetch a batch, process the batch, commit the batch. Having to ack individual messages and worry about AckWait timeouts is contrary to that model. It's a great programming model for core NATS, but for streams I think you guys made a design mistake there. A stream shouldn't act like pub/sub. I also suspect (but can't prove) that this leads to worse performance and higher load on the cluster, because every message has to go through the ack/nack roundtrips.
I'd also like to point out that Jetstream's maximum message size of 1MB is a showstopper. Yes, you can write big messages somewhere else and reference them. But that's more work and more complexity. With Kafka/Redpanda, huge messages just work, and are not particularly a technical liability.
> Sorry, what I meant was that each stream (which forms a Raft group) doesn't scale to more. I thought it was 3, but thanks for the correction.
Streams can have more than 3 replicas. Technically they can have any number of replicas but you only get extra HA when it's an odd number (e.g. 6 replicas doesn't offer more HA than 5, but 7 does). Typically the way people scale to more than one stream when a single stream becomes a bottleneck is by using subject transformations to insert a partition number in the subject and then creating a stream per partition.
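For illustration, here is a purely client-side version of the same idea (NATS can also do the partition-number insertion server-side with subject transforms): hash a key into the subject so that each partition gets its own stream. All names here are invented.

```go
// One stream per partition, each bound to its own subject slice, e.g.
//   ORDERS_P0 -> "orders.p0.>", ORDERS_P1 -> "orders.p1.>", ...
// created with js.CreateStream as in the earlier sketch.
const partitions = 4

func partitionFor(key string) uint32 {
	h := fnv.New32a() // needs "hash/fnv"
	h.Write([]byte(key))
	return h.Sum32() % partitions
}

func publishOrder(ctx context.Context, js jetstream.JetStream, customerID string, payload []byte) error {
	subj := fmt.Sprintf("orders.p%d.%s", partitionFor(customerID), customerID)
	_, err := js.Publish(ctx, subj, payload)
	return err
}
```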
Point taken about wanting to have the 'ordered consumer + persist the offset in a KV' built-in, though it should really not be cumbersome to write. Maybe that could be added to orbit.go (and we definitely welcome well written contributions BTW :)).
> Having to ack individual messages and worry about AckWait timeouts is contrary to that model
Acking/nacking individual messages is the price to pay for being able to have proper queuing functionality on top of streams (without forcing people to create partitions), including automated re-delivery of messages and one-to-many message consumption flow control.
However it is not mandatory: you can set any ack policy you want on a consumer. ackAll is somewhat like committing an offset in Kafka (it acks that sequence number and all prior sequence numbers), or you can simply use ackNone, meaning you forgo message acknowledgements completely (but it will still remember the last sequence number delivered (i.e. the offset) automatically).
For example, using a pull consumer with ack policy=none and doing 'fetch' to get batches of messages is exactly what you describe wanting to do (and functionally not different from using an ordered consumer and persisting the offset).
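Something like this, then, with the Go client (building on the sketches further up the thread, with a `js` handle and `ctx` already in hand; `process` is a hypothetical handler):

```go
// Pull consumer with no per-message acks: the server just tracks the last
// delivered sequence, so consumption looks like Kafka-style batch reads.
cons, err := js.CreateOrUpdateConsumer(ctx, "ORDERS", jetstream.ConsumerConfig{
	Durable:   "batch-reader",
	AckPolicy: jetstream.AckNonePolicy,
})
if err != nil {
	log.Fatal(err)
}

for {
	// Fetch a batch, process it in order, then fetch the next one.
	batch, err := cons.Fetch(100, jetstream.FetchMaxWait(2*time.Second))
	if err != nil {
		log.Fatal(err)
	}
	for msg := range batch.Messages() {
		process(msg.Data())
	}
	if err := batch.Error(); err != nil {
		log.Println("fetch ended with:", err)
	}
}
```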
And yes, having acks turned on or off on a consumer does have a significant performance impact: nothing comes for free and explicit individual message acking is a very high quality of service.
As for the max message size, you can easily increase that in a server setting. Technically you can set it all the way up to 32 MB if you want to use JetStream, and up to 64 MB if you want to use just Core NATS. However, many would advise you not to increase it over 8 or 16 MB, because the larger the messages are, the more potential there is for things like latency spikes (think 'head-of-line blocking'), increased memory pressure, more slow consumers, etc.
I got really pissed off with their field CTO for essentially trying to pull the wool over my eyes regarding performance and reliability.
Essentially their base product (NATS) has a lot of performance, but it trades away reliability to get it. So they add Jetstream to NATS to get reliability, but quote the performance numbers of pure NATS.
I got burned by MongoDB for doing this to me, I won’t work with any technology that is marketed in such a disingenuous way again.
Yes, I meant Jetstream (I even typed it but second-guessed myself, my mistake). I'm typing these when I get a moment as I'm at a wedding, so I apologise.
The issue in the docs was that there are no available Jetstream numbers, so I talked over a video call to the field CTO, who cited the base NATS numbers to me. When I pressed him on whether it was with Jetstream, he said that it was without; so I asked for them with Jetstream enabled and he cited the same numbers back to me. Even when I pressed him again with “you just said those numbers are without Jetstream”, he said that it was not an issue.
So, I got a bit miffed after the call ended; we spent about 45 minutes on the call and this was the main reason to have the call in the first place, so I am a bit bent about it. Maybe it's better now, this was a year ago.
This doesn’t really support your position as far as most readers are concerned - it sounds like a disconnect. If they didn’t do this in any ad copy or public docs it’s not really in Mongo territory.
I’m telling you why I am skeptical of any tech that intentionally obfuscates trade-offs, I’m not making a comparison on which of these is worse; and I don’t really care if people take my anecdote seriously either: because they should make their own conclusions.
However it might help people go in to a topic about performance and reliability from a more informed position.
I don't doubt your experience. But I think it might have been more just that guy, than NATS in general.
The other day I was listening to a podcast with their CEO from maybe 6 months ago, and he talked quite openly about how Jetstream and consumers add considerable drag compared to normal pub/sub. And, more generally, about how users unexpectedly use and abuse NATS, and how they've been able to improve things as a result.
As the person in question I feel compelled to answer to this: first of all my apologies if I managed to piss you off, certainly didn't mean to!
It looks like you got frustrated by my refusing to give performance figures for JetStream: I always say in meetings that because there are too many factors that greatly affect JetStream performance (especially compared to Core NATS, which mostly just depends on network I/O), I cannot just give any number, as it would likely not accurately reflect (better or worse!) the number that you would actually see in your own usage. And that rather you should use the built-in `nats bench` tool to measure the performance for yourself, for your kind of traffic requirements and usage patterns, in your target deployment environment and HA requirements.
On top of that, the performance of the software itself is still evolving as we release new versions that improve things and introduce new features (e.g. JetStream publication batches, batched direct gets) that greatly improve some performance numbers.
I assure you that I just don't want to give anyone some number and then have you try it for yourself and not be able to match those numbers, nothing more! We literally want you to measure the performance for yourself rather than give you some large number. And that's also why the docs don't have any JetStream performance numbers. There is no attempt at any kind of disingenuity, marketing, or pulling wool over anyone's eyes.
And I would never ever claim that JetStream yields the same performance numbers as Core NATS, that's impossible! JetStream does a lot more and involves a lot more I/O than Core NATS.
However, if I get pressed for numbers in a meeting: I do know the orders of magnitude that NATS and JS operate at, and I will even be willing to say with some confidence that Core NATS performance numbers are pretty much always going to be up in the 'millions of messages per second'. But I will remain very resistant to claiming any specific JS performance numbers, because in the end the answers are 'it depends' and 'how long is a piece of string', and you can scale JetStream throughput horizontally by using more streams, just like you can scale Kafka's throughput by using more partitions.
Now in some meetings some people don't like that non-answer and really want to hear some kind of performance number, so I normally turn the question around and ask them what their target message rates and sizes are going to be. If their answer is in the 'few thousands of messages per second' (like it is in your case, if I'm not mistaken about the call in question) then, as I do know that JetStream typically comfortably provides performance well in excess of that, I can say with confidence that _at those kinds of message rates_ it doesn't matter whether you use Core NATS or JetStream: JetStream is plenty fast enough. That's all I mean!
And I would add that as soon as you are using more than one stream (e.g. sharding using Core NATS subject transformation), JetStream throughput scales horizontally, just like Kafka throughput scales horizontally as you add more partitions and more servers to the cluster. So I feel reasonably confident saying that _in most cases_ it doesn't really matter what the target number of messages per second is, as you can create a cluster large enough to provide that aggregated throughput. In properly distributed systems, the answer to the benchmark number question truly is 'how long is a piece of string'.
The delivery guarantees section alone doesn't make me trust it. You can do at-least-once or at-most-once with Kafka. Exactly-once is mostly a lie; it depends on the downstream system. Unless you're going back to the same system, the best you can do is at-least-once with idempotency.
I want to know how the TSMC-manufactured Tensor processors compare to the Samsung-manufactured Tensor processors, and also how TSMC-manufactured Tensor processors compare to TSMC-manufactured Snapdragon processors. Samsung's Tensors (also Exynos) had a reputation for running super hot. I want to know if these problems persist in the new Tensor chips.
The HTML spec is actually constantly evolving. New features like the dialog element [0] and popover [1] are added every year. But removing something from the spec is very rare, if it has ever happened before.
The W3C spec was. But WHATWG and HTML5 represent a coup by the dominant browser corporations (read: Google). The biggest browser dictates the "living standard" and the W3C is forced into a descriptivist role.
The W3C's plan was for HTML4 to be replaced by XHTML. What we commonly call HTML5 is the WHATWG "HTML Living Standard."
They weren't sidelined because they had bad ideas (XHTML 2.0 had a lot of great ideas, many of which HTML5 eventually "borrowed"), they were sidelined because they still saw the web as primarily a document platform and Google especially was trying to push it as a larger application platform. It wasn't a battle between the ivory tower and practical concerns, it was a proxy battle in the general war between the web as a place optimized to link between meaningful, accessibility-first documents and the web as a place to host generalized applications with accessibility often an afterthought. (ARIA is great, but ARIA can only do so much, not as much of it by default/a pit of success as XHTML 2.0 once hoped to be.)
It will. It will break old, non-updated pages, the same fate as old, outdated pages that used MathML in the past and were not updated with polyfills.
Who else is watching this who grew up watching this same movie play out with Microsoft/IE as the villain and Google as the hero? (Anyone want to make the "live long enough" quote?)
Someone has to pay for the servers in the end. Are you asking OpenAI to subsidize ChatGPT Pro for low-income countries? Since OpenAI is a for-profit entity focused on profits, I don't think it would be financially wise for OpenAI to do so.
It might be better for OpenAI, but this disparity only increases immigration. If the US is serious about keeping immigrants out, they should subsidize access to AI.
Until the time when Microsoft realises this and creates a privileged API just for Microsoft Recall so that it can see the screen.
Better to switch to Linux. It's not perfect, but I am sure you will be fine using Linux (unless you want to use the Adobe suite or a few corporate applications that most people won't be using anyway).
As mentioned at the bottom, there's another API, which is to flag the window as containing DRM'd content. Although I suppose there's not really anything to stop AI vendors doing copyright infringement if they want.
DRM support was never implemented out of fear of "copyright infringement"; it was built to solidify corporate connections. Microsoft implements DRM mechanisms to incentivize copyright holders to provide better service on their platforms.
AI doesn't have less respect for Copyright than any other tech company. AI has less need for the corporate connections to those copyright holders.
This of course comes from the neoliberal philosophy where the only remedy you have is to withdraw service. We've gutted the actual rights of actual creatives.
> We were partly inspired by Signal’s blocking of Recall. Given that Windows doesn’t let non-browser apps granularly disable Recall, Signal cleverly uses the DRM flag on their app to disable all screenshots. This breaks Recall, but unfortunately also breaks the ability to take any screenshots, including by legitimate accessibility software like screen-readers. Brave’s approach does not have this limitation since we’re able to granularly disable just Recall; regular screenshotting will still work. While it’s heartening that Microsoft recognizes that Web browsers are especially privacy-sensitive applications, we hope they offer the same granular ability to turn off Recall to all privacy-minded application developers.
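(For reference, the "DRM flag" described above is, as far as I know, the SetWindowDisplayAffinity call with WDA_EXCLUDEFROMCAPTURE. A minimal Windows-only sketch in Go, with all the window-handle plumbing omitted:)

```go
//go:build windows

package screenprivacy

import (
	"log"
	"syscall"
)

const wdaExcludeFromCapture = 0x00000011 // WDA_EXCLUDEFROMCAPTURE

var (
	user32                   = syscall.NewLazyDLL("user32.dll")
	setWindowDisplayAffinity = user32.NewProc("SetWindowDisplayAffinity")
)

// ExcludeFromCapture asks Windows to black out the given window in screen
// captures; this is the blunt approach that also breaks screenshots and
// screen readers, which is exactly the trade-off discussed above.
func ExcludeFromCapture(hwnd uintptr) {
	ret, _, err := setWindowDisplayAffinity.Call(hwnd, wdaExcludeFromCapture)
	if ret == 0 {
		log.Printf("SetWindowDisplayAffinity failed: %v", err)
	}
}
```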
Well, for those who are stuck with Windows because of some applications or simply out of familiarity, my suggestion is to stay on an older release as long as possible. If that isn't possible, I would recommend keeping the computer turned off most of the time, only briefly using it for your purpose, and trying not to do anything too embarrassing or personal on it. Another thing that might help is calling up your local government and asking them to do something about this. You can also call up Windows customer service and let them know that you are displeased with what they've done and will not be recommending it to your friends as a result.
With Windows 10's lifetime coming to an end, and even though the Enterprise edition is still going to be supported for a while, eventually the world will move on to Windows 11.
Unless someone breaks that cycle of Windows being the dominant OS.
Are you sure your computer really doesn't have a TPM? Intel CPUs since Haswell and AMD CPUs since Zen 1 have a firmware-level TPM (implemented on the Intel ME / AMD PSP side) built in. It's often disabled by default, but you can usually turn it on in the BIOS/UEFI setup interface (if the BIOS supports it), and Windows 11 will work with it. And sometimes even discrete TPMs on motherboards come disabled by default.
If you haven't already, check your BIOS for TPM/fTPM settings (or if you're on Intel also look for "Intel Platform Trust Technology" or "Intel PTT").
It's multiplayer games with anti-cheat that are the ones not supported (with developers now having to go out of their way to turn OFF support for Linux); everything else works fine. If you're only into singleplayer (like me), games often run better on Linux.
To expand on this and provide some examples, I've recently played Wuthering Waves, Tower of Fantasy, The First Descendant, Phantasy Star Online 2, Black Desert, Lost Ark, Throne & Liberty, probably others I'm forgetting, all of which contain anti-cheat of some variety and all on Linux.
There are some that don't support Linux and likely never will like Valorant or Call of Duty, and even fewer that dropped Linux support like Apex Legends.
Mostly mobile ported gacha and Korean MMOs. It's good that it runs on Linux for you, but most people don't play these games. Most people are interested exactly in the ones you've listed as not supported.
Familiarity is not really a good reason against Linux, however. Just install a Linux distro that comes close in looks. What are these Linux distributions these days? Pop OS? Elementary OS? Most people are only using their browsers anyway.
Linux Mint remains the most stable, least-babysitting-required, solid distro for beginners. Ubuntu is also okay. Pop, Zorin, Elementary, etc. are great choices, too. But if you ask me for one, I will suggest Linux Mint. All Linux Mint releases are Long Term Support (LTS) versions, btw, with support for five years.
Pop is woefully out of date at this point due to the ongoing alpha development of COSMIC. I switched off because a whole bunch of Nvidia-related things started breaking. LTS doesn't seem ideal for Nvidia in my experience.
The latest NVIDIA drivers (576+ I think?) are totally broken on Ubuntu 22.04 variants and seem to require 24+. That was my experience anyway, I tried everything I could to get it to work, but PopOS would never boot under those drivers unless I upgraded to the alpha builds on Ubuntu 24. Forced me to switch to Fedora in the end (I needed those drivers for work) which worked seamlessly.
Ubuntu releases LTS versions every two years. I jump from LTS to LTS by simple `do-release-upgrade` command. Takes about 30 minutes. And I only upgrade after the dust settles, i.e. after 3-4 months of the release.
Mint also releases upgrades regularly. I suggest upgrading regularly.
Familiarity is shorthand for time and energy. Neither of them are infinite.
Ironically, your second sentence is an example of the impact on time and energy the switch will have: someone who has just decided to switch from Windows to Linux will have to take the time and spend the energy to choose between the dozens of Linux distributions before any practical consideration.
If you do all the work, they indeed don't need to spend time and energy on the switch itself, and even better if their usage is limited enough they don't encounter missing software or incompatibilities with the windows world.
The irony is that before doing the work for the switch, and even before doing the work of checking whether the switch is feasible for their needs, they will need to spend time and energy selecting which Linux distro they should choose. Switching from Linux to Windows or macOS doesn't have this issue.
So the problem with switching to Linux is that they have to spend the time and energy to choose a Linux distribution? If we are to nitpick, it is not that easy with Windows either. Which Windows version? Which torrent is the right one? The last question is because most people here do not have a legit copy, they torrent it. It took me longer to find the right version of Windows to torrent than to search for "top Linux distributions for beginners".
> So the problem with switching to Linux is that they have to spend the time and energy to choose a Linux distribution?
You know that's not the only issue, and that it is not what I'm saying. You can try to convince yourself as much as you want that it is not an issue; the reality will not change.
> Which Windows version?
Are you kidding? There is only one in 2025: Windows 11.
Then what are the other issues? Because you repeatedly cited that as the reason it makes it ironic.
I told you why I am talking about torrent. No one has a legit copy of Windows 11, no one actually buys it here, especially not individuals. Companies might.
Oh yeah, how would they know Windows 11 is the latest if not by looking it up?
Either way, as I said, someone will be asked to install an OS, or they will buy computers with an OS pre-installed (and they will eventually ask, even then). None of which require them to pick anything.
Most people use Windows for little more than running the web browser today. I've literally switched dozens of people over to Ubuntu variants (actually Kubuntu) over the years, and it's only getting easier and easier as everyone moves everything to the web browser.
Bitwig works on Linux, but the problem I had was that my pro-audio soundcard [1] didn’t have supported drivers and I couldn’t get the open source drivers to work. I tried switching to a Dante based solution: none of the Dante based apps worked, so I tried AES67 (open source Dante), still no joy — I just could not get my Dante/AES67 AD/DA converters (which attach to everything in my studio) to be ‘seen’ on Linux.
So after weeks trying to get a high-channel count I/O solution working, I gave in, I found the best thing to do was to just get a M4 Mac Mini for my audio/studio work. And leave Linux for everything else. I was setup within an hour on macOS.
There’s unfortunately still too much resistance and it can cost $1000s trying to get to a working solution or ultimately in my case: a non-working solution. It cost me about $6000 trying various options — not all wasted, but still, not cheap to find out that nothing works.
For me, there's a small handful of games that keep me from using Linux on my main PC - RuneScape and League of Legends. RuneScape has some nasty bugs where the GPU isn't detected when running under Wayland, but if you run the game via Proton-thru-Steam you don't have access to all your accounts (I occasionally play on two). League of Legends just straight up doesn't work at all after they added their rootkit-as-anticheat, but it's the only way I keep in touch with some friends.
I was able to run Ableton on Linux once, but it was finicky and didn't give me the confidence that I could ever do a live performance with it. Unfortunately there are fields that couldn't care less about software freedom and ownership, and Microsoft abuses this for profit.
I'm an old guy who has run Linux off and on from the very beginning. Every so often I attempt to replace my Windows laptop with Linux, and it always turns into days of dinking around with configuration hacks and installing this and that and the other thing trying to get all my software to work. There's always some roadblock that prevents migrating completely. Eventually I always end up going back to Windows. I wish it weren't so.
I think part of the problem is I have decades invested in proprietary software in the Windows ecosystem. If I didn't have that investment it would likely be easier. Don't make my mistake! :)
This is a sunk cost that in a few years will have been mitigated. You already understand the value proposition for Linux, otherwise you wouldn't have attempted this transition multiple times. So I think it's a matter of getting used to it.
But I also think context matters. Maybe you also need work that motivates you to use Linux or is impossible or quite inconvenient to do on Windows.
In any case, using Windows is fine. I don't think the user is to blame for the shortcomings of the brand. It's like consciously consuming products that don't harm the environment: it's important to seek those out, but if you have to go out of your way, it's just not gonna happen.
The reliable way to do this (I found) is to check the kernel source tree first. The supported pro sound cards are, typically, kind of old, because FOSS developers aren't just gifted hardware and documentation to write the drivers, so they're a generation or two behind.
Counterintuitively, using the latest kernel can be more stable, as bug fixes are merged.
RME does have a few supported cards (I use one) but they're mostly the ADAT ones. And the driver is in-tree.
> The reliable way to do this (I found) is check the kernel source tree first
Sure, in general that's good advice, but it becomes more complicated depending on the solution/situation...
I’d bought the RME card long before I was hoping to make it work with Linux. I’d been running Windows for a long time for work reasons, so I had my dev work and my music setup on the same computer (a 64-core Threadripper machine, with 128GB of RAM, and fast NVMe drives). A few months before, I’d sold my company, so for work at least I didn’t need to be on Windows any more. Then I started getting random audio dropouts! Presumably because of all the crap Microsoft keeps loading onto the OS with every update.
The audio dropouts were the straw that broke the camel’s back. If a machine like that, with nothing else running on it other than my DAW, could start having audio dropouts, then you know something has gone horribly wrong with the OS.
That's why I wanted to get my existing RME card working on Linux. When I wasn’t able to use it, I then assumed I’d be able to get a network based protocol running (Dante/AES67). There was plenty of discussion about it online, it seemed viable, and it's a network, Linux can do networking! Also, I kinda like the idea of network based audio, I think it's likely to be more future proof.
So, I replaced one of my Ferrofish A32 Pro interfaces with a Ferrofish A32 Pro DANTE ($4300) [1]. It supports both Dante and AES67. I figure if I can’t get Dante running then the open protocol AES67 (with support in Linux-land) should work. That didn’t feel that risky. But no amount of finagling would make the interface appear via the virtual sound-card/router concept.
This had already taken weeks (maybe months) to not get anywhere, so I looked for a class-compliant sound-card (or one that definitely had Linux drivers) that could support the number of channels I needed (96 channels in and out); it also needed to support the AD/DA interfaces I already had (so connectivity via MADI or Dante/AES67), but there just wasn't anything. The only other sound-card out there was another RME interface.
So, that’s when I opted for a Class Compliant sound-card [2] for casual use on Linux ($324) and a new RME Digiface Dante sound-card ($1543) [3] that I could use with a newly purchased M4 Mac Mini ($3000). I also needed to replace another one of my Ferrofish A32 Pro interfaces with another Ferrofish A32 Pro Dante ($4300) to make the setup work.
I realise now that my earlier estimate of $6000 was wildly out, it cost $13467 to leave Windows and to get an alternative pro-audio setup working. There may well have been alternative approaches and I may well have missed a possible solution that could have either worked with the original RME card (which would cost nothing) or AES67 (that would still require me to replace 3 x Ferrofish A32 Pro interfaces, so would end up about the same cost), but I felt like I'd been pretty thorough.
I guess the reason I'm writing War & Peace here is that it's often not possible to know ahead of time whether any one setup might work. Drivers are one thing, but a pro-studio setup has more moving parts, so if you don't know ahead of time whether any one setup will work, it can be an expensive process to walk through the different options. And that's a problem that neither Windows nor Mac has. It's a real shame, because the stability of Linux should make it the best platform for pro-audio.
I rather think it cost you $mac_mini to buy a mac mini, and $compulsion buying hardware for reasons I still do not understand.
Paul Davis has lurked here at least as long as you have, and it would have cost $0 just to ask if that card is currently supported in Linux.
I mean, for $13467 I bet I could buy a plane ticket to Shenzhen, hire a translator, and have them send an email to Collabora to quote a price to develop the firmware/driver I can afford with the money I have left over.
> Paul Davis has lurked here at least as long as you have, and it would have cost $0 just to ask if that card is currently supported in Linux
I didn’t need to ask him, I already owned it, I just needed to dual boot Linux to find out, it cost me $0.
You don’t seem to understand that a soundcard needs connecting to everything else in a studio, so there’s no such thing as just changing one thing and it not having a knock on effect (unless you’re really lucky, which I wasn’t).
You also don’t seem to know that once a setup is right, it can last a decade or more, so getting the right combo of gear to minimise friction in a studio is worth it over time, even if it is expensive upfront.
If it makes you feel a little less morally superior, I sold the original soundcard and the two replaced AD/DAs for ~$7200.
And, you still miss the point of the story completely: the point was that it’s too risky for anyone considering building a pro setup on Linux. Especially compared to Windows and macOS where everything is plug and play.
That doesn’t mean there aren’t pockets of success in Linux-land, but that it can be costly in money and time to get it right, and it might never work for the setup you have.
It is risk.
> for reasons I still do not understand.
That’s obvious.
But the reasons are:
* I wanted to move away from Windows because it was unstable and pissing me off
* I already owned a high-end PCI RME card that connected to three Ferrofish A32 Pro converters
* If I could install Linux and have the RME card work then I wouldn’t need to change my studio setup
* There’s no official or stable driver
* So, a change to the setup was required
* To try and future proof the setup I looked to modern protocols like Dante and AES67 as they are taking over pro studios and are much more flexible — I also thought there was a reasonable chance it would work on Linux
* I couldn’t get it working on Linux
* Time is not infinite
* Therefore I bought a Mac for audio
* To avoid the expense of a Mac Pro I had to switch from a PCI based soundcard to a USB based soundcard (which I could plug in to the Mac Mini)
* I still use Linux on the original machine (for dev work), but with a class compliant soundcard for casual use. It’s relatively trouble free, other than half of my usb ports don’t work, but you know, meh
* I haven’t needed to use Windows since. So I consider it a win.
Linux audio is definitely hit and miss. Even with the most standard soundcard in existence (Scarlett) I still had problems with it. After fiddling a bit it works okay-ish, but there were definitely moments of "screw it, I'm buying a Mac".
The Scarlett should be USB class-compliant. I've got a Motu 2i2o DAC that I've been using on Linux for 3 years now, and before that used a Behringer U-Phoria without issues.
It is class-compliant - but to this day, I've never figured out how to use the multitrack out with Pulse. The widget shows all the outputs and testing individual outs works, but Reaper duplicates the sends, so the outputs overlap. It works with JACK, though, but JACK is just strange. Also no control software (aside from one open-source thingy that looks awful), so any change requires a reboot to Windows or VM passthrough.
The reason I have a need for so much IO is that nearly all of my processing goes through external hardware (EQs, compressors, delays, reverbs, filters, phasers, chorus, summing mixer, multi-FX units like Eventide H3000 & H8000) and I have a wall of modular gear + about 20 synths and drum machines.
CAD software options are severely lacking as well. There's an unofficial snap package for Fusion360 but it's hit or miss depending on the distro, the day of the week, the weather, and whether Oracle's stock price is a prime number.
Solidworks' horrible xDesign thing is browser-based. Regular Solidworks is still a Windows application. Onshape is a very good alternative to Fusion, assuming it has the features you need.
Oh, I won't, but MS has the unfortunate recurring habit of turning features on against users' will, at best giving the option to "Remind me in 3 days".
My "favorite" tactless Windows update story in recent memory was when an update pinned a Copilot link to my taskbar. I unpinned it, then a few weeks later another update added the Copilot link back, but not as a pinned app. Rather, it replaced the goddamn "show desktop" button in the bottom right of the screen! They replaced an always-on-screen OS navigation button that's been there since Windows 7 with an ad!!
I hope to god that Valve takes the opportunity it has with Steam OS to give us a real alternative to Windows that focuses on gaming support, because that's literally the only reason I'm forced to keep using this Microsoft adware slop of an OS.
> They replaced an always on-screen OS navigation button that's been there since Windows 7 with an ad!!
That must be doing wonders for the click rate. I can see the pre-promotion powerpoint slide now: "User engagement with Copilot is showing exponential growth"
There is currently no policy setting to do this. The available policy settings are "disable Recall and do not allow users to enable it" (which is the default) and "allow users to enable Recall, but leave it disabled by default".
Even if processing and storage are local, it is just too damn easy to abuse the feature remotely.
Imagine how useful it would be for software vendors (Microsoft included): "We have implemented new feature X, how are our users interacting with it? Let's ask their Recall AI about it".
This could essentially become telemetry on steroids.
At first, telemetry was seen as outrageously user-hostile spying too. Look where we are now. We are all frogs in slowly boiling water, or at least Microsoft is banking on it.
Never post anything like this on HN; you will get a torrent of people A) trying to help but not being very helpful, B) telling you that you’re dumb, and C) telling you that you’re holding it wrong and need different software/hardware/preferences/etc.
I have had tons of grief with NVIDIA cards that work flawlessly on Windows, and the answer I always get from Linux folks is “LOL NVIDIA? You’re an idiot for buying NVIDIA.”
My friends who daily drive Linux have accepted that I’m particularly cursed. Either that or they privately think I’m a moron. Regardless none of them seem to be able to explain my issues or help.
Linux still isn't really ready for normal people who have other things to do.
Arguably if it's within your budget and you just want your computer to work, buy a Mac.
I make music and I don't want to fiddle with external drives so I'm basically stuck on Windows.
My biggest issue with Macs is not being able to replace the SSD. Eventually all SSDs must fail. Might not be in 2 years, might be in 6 or 7, but at that point the entire laptop is useless.
"It just works" was always a marketing slogan for Apple, not an actual reality. It's certainly not appropriate for an OS where perfectly standard mice have frustrating, decades-old limitations like scroll direction being tied to the touchpad settings and forward/backyard buttons not working.
If you stay in the Apple Box of supported use cases it's fine.
The customer support story is also much better. Instead of dealing with IRC channels and Reddit posts trying to figure out why the latest kernel ruined everything, you go to the Apple Store.
I literally run Tumbleweed on my second laptop, I like Linux.
It's just not for people who don't want to invest time into understanding how computers work.
This whole thread is about how Linux is difficult because you need to understand which hardware is actually supported, and you're arguing that macOS is different because you still need to understand which hardware is supported, except the Apple Store will sell you something else with a smile.
I'm not even denying that macOS is a perfectly acceptable OS; I just don't understand your argument.
But Apple's argument is that it's not on them if a third-party mouse they happen to sell has issues.
With Linux you need to do significantly more work to get setup and every now and then a kernel update can ruin your day.
I want something as stable as Ubuntu that actually supports newer hardware, but that's just not where Linux is at.
The Linux community is amazing, but they lack the capacity to QA every possible laptop on the market.
Just a few days ago the sound on my Tumbleweed install decided to stop working. I thought about reinstalling; ChatGPT suggested I just accept that audio didn't work and use a USB sound card.
Eventually, as a last-ditch effort, I asked ChatGPT how to completely reinstall the audio stack.
This went and removed my KDE desktop for some reason. Cool, I'll install Xfce from the tty.
I then installed Budgie since it's a bit easier to use.
All this because for some reason my sound didn't feel like working.
We, the types of people who visit this site, enjoy the process.
Not everyone does.
Macs definitely have issues too.
But you can go to the Apple store and have them figure it out.
Plus a routine update probably won't stop audio from working.
This was true for me until ~2014. I haven't had substantial hardware compatibility issues on Ubuntu since then. Sure, a few Google searches for the right NVIDIA driver, but otherwise I've found Ubuntu to just work for many years.
It's a lottery that largely depends on what exact hardware you have.
I have one machine that I can't even install Linux on because no Linux installer or live CD will even boot on it. No idea why, and I don't want to spend a lot of time and effort figuring that out given that it's my dedicated gaming box, a "PC console" basically.
OTOH I have a laptop that I specifically purchased to run Linux on, which it does, and all devices work just fine. The only catch is that battery life when browsing is about 20-30% worse, and as far as I can tell this is entirely due to Linux browsers disabling video hardware acceleration by default on most configs. If I enable it, battery life gets much better, but at the cost of an occasional browser crash.
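For anyone who wants to try the same toggle in Firefox on Linux, it comes down to a couple of about:config prefs. Below is a minimal sketch that appends them to a profile's user.js; the pref names are the commonly cited ones (verify them in about:config for your Firefox version) and the profile path is a placeholder.

```python
# Sketch: append the VA-API hardware-decoding prefs to a Firefox profile's
# user.js. Pref names are the commonly cited ones for Firefox on Linux;
# check them against your version. The profile directory is a placeholder --
# substitute your real one from about:profiles.
from pathlib import Path

PROFILE = Path.home() / ".mozilla/firefox/abcd1234.default-release"  # placeholder

prefs = {
    "media.ffmpeg.vaapi.enabled": "true",                   # enable VA-API decoding
    "media.hardware-video-decoding.force-enabled": "true",  # ignore the blocklist
}

with open(PROFILE / "user.js", "a") as f:
    for name, value in prefs.items():
        f.write(f'user_pref("{name}", {value});\n')
        print(f"set {name} = {value}")
```

Restart the browser afterwards; whether it actually helps the battery depends on having a working VA-API driver for your GPU, and it's exactly the config where the occasional crash mentioned above can show up.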
I might have one of the very rare cases where my laptop works much better with Linux than with Windows (both 10 and 11).
Neither Wi-Fi nor Bluetooth works on a fresh Windows install; I have to physically connect a USB DVD drive to install the drivers from the DVD that came in the box (in 2024, btw). On Linux everything just works out of the box. Okay, maybe not everything: I did have to patch my kernel for the Bluetooth driver, but other than that it's a LOT smoother in every way than Windows.
Maybe it's been fixed, but I bought this on release last year and it never worked right with Linux.
Hours upon hours of trying to fix it for naught.
I actually prefer Linux as a daily driver. I have the Ultra Core V2 version of the same laptop, and rolling releases are generally fine for 3 to 6 months, at which point I just reinstall while leaving Windows intact.
I guess if you want to buy a slightly older laptop, or at least one with a slightly older CPU, things are fine.
Refurbished ThinkPads do particularly well here.
5K screens over DisplayPort are two screens in a trenchcoat. I've seen this abstraction break with the menu bar (only on half the screen), the blue light filter (you have to configure both halves), and screen placement (you have to drag both halves into place). The FreeDesktop GitLab has been tracking this since 2017, but IME only the 2023 Ubuntu release got it perfect; IIRC KDE got it right later.