I've heard this claim repeated a lot, but in the case of soy, "very poor" just doesn't seem supported by the data. More importantly, in a real-world setting, one particular protein source lacking a specific amino acid doesn't matter as much, because it's mostly not consumed in isolation.
But non-animal proteins bio-accumulate less harmful stuff (like lead) and contain more useful minerals. I hate being the "the truth is in the middle" guy, but here the correct diet is clearly in the middle, no?
I agree that plant proteins usually contain more beneficial minerals than meat, but that also certainly includes lead. Whole plants, and especially plant-based protein products, contain a lot of lead, though it's unclear whether this is a huge problem.
Fish, lean beef, chicken, eggs, kefir, milk, cheese, rice, potatoes, EVOO, fruit and vegetables are all you need for peak athletic performance and an optimal hormonal profile.
Kefir is amazing! My breakfast is now a kefir shake with half a ripe banana (those two work together), a handful of quality frozen strawberries or blueberries, a scoop of no-sugar-added peanut butter, and a pinch of salt.
Extra Virgin Olive Oil... a monounsaturated fatty acid blend that's one of the healthier minimally processed oils. Not great for medium- to high-heat cooking. Avocado oil has a similar nutritional profile and can tolerate a bit more heat. If you are doing anything resembling frying or higher-heat cooking, you're likely better off with a more saturated option like tallow or lard.
> But rather quickly, after moving and resizing browser windows, the GPU process dies with messages like the following and, for example, WebGL is no longer hardware accelerated:
Is this specific to the WM he used or does HW acceleration straight up not work in browsers under Wayland? That to me seems like a complete deal breaker.
It is normal for KDE. KDE is mockingly called KrashDE in Linux circles for a reason. We're only 4 days into 2026 and there are already dozens of crash-related bugs filed in the bug tracker: https://bugs.kde.org/buglist.cgi?bug_status=UNCONFIRMED&bug_...
Because distros usually ship only one specific version of a library, and different distros ship different versions of libraries. If you develop your software on Arch Linux against a specific version of a library's API, and another developer tries to build the same software on Debian, and another on Fedora, it's basically a gamble whether your software is going to build or not. With vcpkg, you can pin libraries to specific versions, ensuring that your project builds regardless of the environment.
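Roughly, the pinning lives in a vcpkg.json manifest: you pin a registry baseline and can override individual package versions. The package names, versions, and baseline hash below are just placeholders, not from any particular project:

    {
      "name": "my-project",
      "version": "0.1.0",
      "builtin-baseline": "<commit-sha-of-the-vcpkg-registry-to-pin>",
      "dependencies": [
        { "name": "fmt", "version>=": "10.1.1" }
      ],
      "overrides": [
        { "name": "zlib", "version": "1.3" }
      ]
    }

Anyone who builds from that manifest resolves the same library versions, no matter what their host distro ships.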
Then when distros go to actually package your software for users it'll break. I'm not sure moving the pain downstream is worse, but I'm also not sure it's better.
That has nothing to do with it. While both relate to KDE, we are talking about two very different things. You are talking about release channels, we are talking about development headers.
The entire point of LLM-assisted development is to audit the code the AI generates and then instruct it to improve it or fix its shortcomings - kind of like being a senior dev doing a code review on a colleague's merge request. In fact, as developers, we usually read code more than we write it, which is also why you should prefer simple and verbose code over clever code in large codebases. This seems aimed instead at pure vibecoded slop.
> Do you debug JVM bytecode? V8's internals?
People do debug assembly generated by compilers, to look for miscompilations and missed optimization opportunities, and to compare different approaches.
This describes where we are now. I don't think it's the entire point. I think the point is to get it writing code at a level and quantity where it just becomes better and more efficient to let it do its thing and handle problems discovered at runtime.
Intellisense + Intellicode + Roslynator (extension) combined were really the height of productivity in Visual Studio. Now they've driven a steamroller over all of that and forced CoPilot down our throats.
I LIKE CoPilot's "chat" interface, and agents are fine too (although Claude in VS Code is tons better), but CoPilot auto-complete is negative value and shouldn't be used.
Huh, I'm the opposite. I find the copilot chat slow and low value compared to ChatGPT. But I use the tab autocomplete a lot.
Otoh I disabled all the intellisense stuff so I don't have the issues described in TFA: tab is always copilot autocomplete for whatever it shows in grey.
I hate the time unpredictability of it. IntelliJ also has AI completion suggestions, and sometimes they're really useful. But sometimes when I expect them, they don't come. Or they briefly flash and then disappear.
What would be nice is if you could ask for a suggestion with one key, so it's there when I want it, and not when I don't. That would put me in control. Instead I feel subjected to these completely random whims of the AI.
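In VS Code, at least, that seems doable by turning off automatic inline suggestions and binding the manual trigger to a key. The setting and command names below are from memory and may have changed between versions, so treat this as a sketch rather than a recipe:

    // settings.json: stop Copilot from completing automatically (name from memory, unverified)
    "github.copilot.editor.enableAutoCompletions": false

    // keybindings.json: ask for an inline suggestion only when you want one
    { "key": "alt+\\", "command": "editor.action.inlineSuggest.trigger" }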
FPS is a poor metric anyway; things like this should be measured in frame time instead - but either is a meaningless number without knowing the hardware it runs on.
Sure, I usually measure performance of methods like these in terms of FLOP/s; getting 50-65% of theoretical peak FLOP/s for any given CPU or GPU hardware is close to ideal.
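The arithmetic behind that is straightforward: count the floating-point operations the method performs, divide by wall time, and compare to the hardware's theoretical peak. A rough Python sketch, with placeholder sizes and a placeholder peak figure:

    import time
    import numpy as np

    # Hypothetical workload: a single float32 matrix multiply.
    M = N = K = 4096
    a = np.random.rand(M, K).astype(np.float32)
    b = np.random.rand(K, N).astype(np.float32)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2 * M * N * K            # one multiply + one add per inner-loop step
    achieved = flops / elapsed       # sustained FLOP/s
    peak = 1.0e12                    # placeholder: look up your hardware's theoretical peak
    print(f"{achieved / 1e9:.1f} GFLOP/s achieved, {achieved / peak:.1%} of assumed peak")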
Yes, but the primary issue was that 4chan was using a version of the library that was over a decade old and contained a vulnerability first disclosed in 2012: https://nvd.nist.gov/vuln/detail/CVE-2012-4405
No benchmarks. No FLOPs. No comparison to commodity hardware. I hate this about cloud servers. "9 is faster than 8, which is faster than 7, which is faster than 6, ..., which is faster than 1, which has unknown performance".
How: You run the test on a bunch of hosts and create a spec from the ranges.
Why: you might be concerned with network connectivity (you don't get to choose which data center you launch in, and it might not be exactly equal), noisy neighbors on shared hosts, etc. If you're measuring networking, you're probably spinning up separate accounts/using a bank of accounts and something in every AZ until you find what you're looking for.
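A rough sketch of the "spec from ranges" idea, with made-up host names and numbers: run the same benchmark repeatedly on several hosts, then report the spread instead of a single figure.

    import statistics

    # Made-up numbers: repeated runs of the same benchmark on different hosts
    # of the same instance type (e.g. requests/sec per run).
    runs_by_host = {
        "host-a": [412, 398, 405, 421],
        "host-b": [377, 390, 365, 388],
        "host-c": [430, 442, 418, 436],
    }

    per_host = {host: statistics.median(runs) for host, runs in runs_by_host.items()}
    low, high = min(per_host.values()), max(per_host.values())

    print("per-host medians:", per_host)
    print(f"spec range: {low}-{high} req/s ({(high - low) / low:.0%} spread across hosts)")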
I’ve had terrible luck benchmarking EC2. Measurements are too noisy to be repeatable. The same instance of the wrong type can swing by double-digit percentages when tested twice, an hour apart.
Non-animal protein sources (like soy and beans) have very poor bioavailability.