The most useful feature with the worst UX. You have to type about:profiles and then create a new profile. But imagine you now want to move old profiles to a new computer and FF happens to run in a Flatpak. Yeah, much fun
You can (now?) create profiles from the account icon in the toolbar [1] and at least on my firefox install, you can also do it from the hamburger menu.
I use Firefox via Flatpak and have had no issues so far accessing profile data (in one of the folders in ~/.var/app/org.mozilla.firefox/.mozilla/firefox/ - I keep a regular archive of the entire folder as backup).
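If it helps, that backup can be a one-liner; a minimal sketch in Python, assuming the standard Flatpak path above (run it with Firefox closed so the SQLite files aren't mid-write):

    import shutil
    from pathlib import Path

    # Standard Flatpak location of the Firefox profile folders (profiles.ini included).
    profiles = Path.home() / ".var/app/org.mozilla.firefox/.mozilla/firefox"

    # Writes firefox-profiles.tar.gz; restoring on a new machine is just
    # extracting it back to the same path.
    shutil.make_archive("firefox-profiles", "gztar", root_dir=profiles)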
I'm on 144.0.2 on macOS and I do have it. Under the hamburger menu in the upper right and near the top of the list. Never set up a profile on this machine before, so maybe that could be related?
I found out about profiles recently and just couldn't believe that's the standard way to access them. It's also not obvious which profile you're currently in, so you need silly but necessary workarounds like putting a dummy bookmark with the profile name in each one, when it could just be a label next to the address bar.
It works really well though. Does exactly what I would expect and hope from such a feature.
Huh, I had no idea the <profile> argument to -P is optional (--help doesn't say); I was always using --ProfileManager instead. Nice quality-of-life improvement, thanks for the information!
That used to be a start menu entry in the old days. I had heard it was removed, but to my surprise -P works on my current Linux install. I'll have to see if it actually does start a new profile.
Use -p in your Firefox shortcut and it will show the profile manager on launch; from there you can easily create a new profile or open a new window with an existing one.
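For example, on Linux the launcher entry can be as simple as this (a sketch of a .desktop file; it assumes firefox is on your PATH):

    [Desktop Entry]
    Type=Application
    Name=Firefox (Profile Manager)
    # -P with no profile name brings up the profile manager at startup
    Exec=firefox -P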
You could have an LLM generate the SDDL description [0] for you, or even have it write a C++ or Python tokenizer. If compression succeeds, then it is guaranteed to round trip, as the LLM-generated logic lives only on the compression side, and the decompressor is agnostic to it.
It could be a problem that is well-suited to machine learning, as there is a clear objective function: did compression succeed, and if so, what is the compressed size?
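A minimal sketch of that objective in Python; compress and decompress here are hypothetical placeholders for the LLM-generated candidate and the fixed, format-agnostic decompressor respectively, not actual OpenZL API names:

    from typing import Callable, Optional

    def score(data: bytes,
              compress: Callable[[bytes], bytes],
              decompress: Callable[[bytes], bytes]) -> Optional[int]:
        """Compressed size if the candidate round-trips exactly, else None."""
        try:
            blob = compress(data)         # candidate with LLM-generated logic
        except Exception:
            return None                   # failed to parse the format: rejected
        if decompress(blob) != data:      # decompressor is fixed and generic
            return None                   # must round-trip before size counts
        return len(blob)

    # Generate N candidates and keep the smallest successful one:
    # best = min(s for s in (score(data, c, dec) for c in candidates) if s is not None)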
The charts in the "Results With OpenZL" section compare against all levels of zstd, xz, and zlib.
On highly structured data where OpenZL is able to understand the format, it blows Zstandard and xz out of the water. However, not all data fits that bill.
If I had to guess, I would say Ceph is the only one that is truly open source and doesn't gate important features behind paid enterprise tiers.
I went through this a couple of years ago and we ended up with Ceph as well. Combined with reusing existing hardware that was quite suboptimal for Ceph in several ways, it was a pretty bad experience; in the end, AWS offered good enough pricing for our use case that the performance and reliability of S3 was a better deal than managing it ourselves.
If I were to do it again, I would make sure to have an ideal hardware setup (plenty of SSDs for metadata, every spinning disk addressed directly as a single OSD, a sound network topology, and fast enough NICs) and would probably use Rook instead of cephadm. The monitoring, configuration, and documentation side of Ceph is still quite sad, though; it was really hard to figure out why something was slow and how to tune it faster.
That said, if the enterprise options perform better, or at least come with good support for tuning and optimization, the alternatives could be well worth considering.
For me this raises the question of whether there are any good remote desktop solutions for multi-user systems. RustDesk is single-user; TurboVNC works, but there can be lag.