Nope, call recording. Not sure how universal this is, but phone call recording immediately stops, with the "This call is no longer being recorded" announcement afterwards.
Even then. I'll take a leaky iOS 18 over pretty much any leaky Android or internet-connected TV or whatever.
iPhones are still the least bad option, for regular people who aren't planning to solder anything, select their boot loader on launch, or recompile a kernel.
You're claiming that based on information you don't have (the future). At least call it a prediction rather than stating it as settled fact.
Which is a bargain compared to what DRAM costs today. If you include just the bare minimum of DRAM for a successful boot and immediately set up the entire "small" Optane drive as swap, that's a viable workstation-class system for comparative peanuts. You can't do this with NAND because the write workload of swap kills the media (I suppose it becomes viable if you monitor SMART wearout indicators and heavily overprovision the storage to leverage the drive's pSLC mode, but you're still treating ~$0.10/GB hardware as a consumable, and that will cost you), and of course you can't do it with spinning rust because the media is too slow.
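If you do go the NAND-as-consumable route, the wear indicator is easy to watch. A quick sketch, assuming nvme-cli's text output format (the helper name `wear_pct` is made up):

```shell
# Extract the NVMe "percentage_used" wear indicator from
# `nvme smart-log` text output (field name per nvme-cli).
wear_pct() {
  grep -i 'percentage_used' | awk '{ print $3 }' | tr -d '%'
}

# Usage (needs nvme-cli and a real device):
#   nvme smart-log /dev/nvme0 | wear_pct
```

Past 100% the drive is officially out of rated endurance, though in practice most NAND keeps going well beyond that.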
Can confirm: doing so is awesome. Get some slightly bigger ones and partition them for additional use as a ZIL. They're extremely satisfying to use, and it's depressing to remember that we'll never see their like again.
Sure! This is more or less how I'm using Optane in my storage box:
Two U.2-x4-to-PCIe-x16 riser cards, one loaded with 960GB Intel-branded Optanes, the other with 1.5TB IBM-branded ones. PCIe bifurcation is set up in the BIOS so they all come up properly, after which they just show up as regular NVMe devices. Riser cards like this can easily be substituted with PCIe-to-SAS/OCuLink-to-U.2 cables, if that's more accommodating to your chassis.
Once they all come up, partition them for your preferred split of swap and ZFS special. All the swap partitions should be mounted with the same priority and discard=pages. I also recommend setting up zswap (not zram swap) with lz4 as an additional layer of fast, evictable, compressed memory pool, along with `vm.overcommit_memory=2` and `vm.swappiness=150`. This effectively gives you really good memory tiering for workloads and file cache.
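For reference, the whole swap side is just config. A sketch assuming four Optane swap partitions (device names are hypothetical; adjust to your layout):

```shell
# /etc/fstab -- equal priority stripes swap across all four partitions;
# discard=pages issues discards as individual swap pages are freed.
/dev/nvme0n1p1  none  swap  sw,pri=10,discard=pages  0 0
/dev/nvme1n1p1  none  swap  sw,pri=10,discard=pages  0 0
/dev/nvme2n1p1  none  swap  sw,pri=10,discard=pages  0 0
/dev/nvme3n1p1  none  swap  sw,pri=10,discard=pages  0 0

# Kernel command line -- zswap as the compressed, evictable layer
# sitting in front of the Optane swap:
#   zswap.enabled=1 zswap.compressor=lz4

# /etc/sysctl.conf -- strict overcommit accounting, and bias the kernel
# toward swapping anonymous pages rather than dropping file cache
# (swappiness values above 100 need a reasonably recent kernel):
vm.overcommit_memory = 2
vm.swappiness = 150
```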
When adding the other partitions to ZFS, use `-o ashift=12 special mirror dev dev special mirror dev dev ...`. The ZFS special vdev covers all metadata, the intent log (sorta a write cache), and optionally small files. I like to set it up so files <= 8K get sent there, but you can probably go higher depending on how much capacity you allocate. My ~24T of allocated data ended up as ~150GB of special with 8K small files, and that's with the whole pool configured with deduplication and blake3 for all hashes. Blake3 is fast as heck but has very long hashes, so from a metadata standpoint I'm using the most expensive option. I mitigate that a bit by setting metadata redundancy to `some`, since my metadata is effectively RAID10 anyway.
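Concretely, the commands look something like this (pool name `tank` and device paths are placeholders for your own):

```shell
# Attach the Optane partitions as mirrored special vdevs:
zpool add -o ashift=12 tank \
  special mirror /dev/nvme0n1p2 /dev/nvme1n1p2 \
  special mirror /dev/nvme2n1p2 /dev/nvme3n1p2

# Send blocks of 8K and smaller to the special vdevs:
zfs set special_small_blocks=8K tank

# Store fewer extra copies of most metadata; the mirrored
# special vdevs already provide the redundancy:
zfs set redundant_metadata=some tank
```

Note that once a special vdev is added it can't be removed from a raidz pool, so size it generously up front.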
With some extra NVMe/Optane allocated to a regular ZFS read cache, and all my spinning-rust data vdevs also in RAID10, it's almost like having the whole array in memory, or at least on fast flash. Taking metadata seeks off the drives and letting metadata writes land nearly instantly on Optane does wonderful things for spinning rust :)
Isn't that actually crazy good, even insane value for the performance and DWPD you get with Optane, especially with DRAM at ~$15/GB or so? I don't think ~$1/GB NAND comes anywhere close on durability, even if its raw performance is quite possibly higher.
Do you mean screen recording? What are the symptoms of the bug?