Yes, but keeping track of the pairs can be more of a pain. In production, though, this is what's done: given a fault, the debug binary can be looked up in a database and used to gdb the issue from the core. You do have to limit certain online optimizations in order to get useful tracebacks.
This also requires careful tracking of prod builds and their symbol files... A kind of symbol db.
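For a concrete picture of how such a pairing can be keyed, here's a small sketch (my own illustration, not Google's actual tooling): both the stripped prod binary and its unstripped debug twin carry the same GNU build ID note, which makes a natural primary key for that kind of symbol db. This just reads the NT_GNU_BUILD_ID note out of an ELF file's PT_NOTE segments and prints it as hex.

```c
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the GNU build ID of an ELF file as hex; return -1 if absent. */
static int print_build_id(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fclose(f);
        return -1;
    }

    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        fseek(f, (long)(eh.e_phoff + (size_t)i * eh.e_phentsize), SEEK_SET);
        if (fread(&ph, sizeof ph, 1, f) != 1) break;
        if (ph.p_type != PT_NOTE) continue;

        /* Walk the notes in this PT_NOTE segment. */
        char *notes = malloc(ph.p_filesz);
        fseek(f, (long)ph.p_offset, SEEK_SET);
        if (!notes || fread(notes, 1, ph.p_filesz, f) != ph.p_filesz) {
            free(notes);
            break;
        }

        size_t off = 0;
        while (off + sizeof(Elf64_Nhdr) <= ph.p_filesz) {
            Elf64_Nhdr *nh = (Elf64_Nhdr *)(notes + off);
            size_t name_off = off + sizeof *nh;
            size_t desc_off = name_off + ((nh->n_namesz + 3) & ~3u);
            if (nh->n_type == NT_GNU_BUILD_ID && nh->n_namesz == 4 &&
                desc_off + nh->n_descsz <= ph.p_filesz &&
                memcmp(notes + name_off, "GNU", 4) == 0) {
                for (unsigned j = 0; j < nh->n_descsz; j++)
                    printf("%02x", (unsigned char)notes[desc_off + j]);
                printf("  %s\n", path);   /* hex build ID = the db key */
                free(notes);
                fclose(f);
                return 0;
            }
            off = desc_off + ((nh->n_descsz + 3) & ~3u);
        }
        free(notes);
    }
    fclose(f);
    return -1;
}

int main(int argc, char **argv) {
    for (int i = 1; i < argc; i++)
        if (print_build_id(argv[i]) != 0)
            fprintf(stderr, "no GNU build ID in %s\n", argv[i]);
    return 0;
}
```

Run it over both the shipped binary and the debug build and you get the same hex string, which is exactly what lets a symbol db resolve "this core came from that build".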
Google is made up of many thousands of individuals. Some experts will be aware of all of those details, some won't. On my team, many didn't know about them, because they were handled by separate build teams for specific products or for entire domains at once.
But since each product in some domains had to actively enable those optimizations for itself, they were occasionally forgotten; I found a few such omissions in the app I worked for (but not directly on).
ICF seems like a good one to keep in that box of flags people don't know about, because like everything in life it's a tradeoff, and keeping that one problematic artifact under 2 GiB is pretty much the only non-debatable use case for it.
Once the compiler has generated a 32-bit relative jump with an R_X86_64_PLT32 relocation, it’s too late. (A bit surprising for it to be a PLT relocation, but it does make some sense upon reflection, and the linker turns it into a direct call if you’re statically linking.) I think only RISC-V was brave enough to allow potentially size-changing linker relaxation, and incidentally they screwed it up (the bug tracker says “too late to change”, which brings me great sadness given we’re talking about a new platform).
On x86-64 it would probably be easier to point the relative call to a synthesized trampoline that does a 64-bit one, but it seems nobody has bothered thus far. You have to admit that sounds pretty painful.
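For concreteness, here's roughly what such a synthesized trampoline could look like, written by hand (my own sketch; the names are made up and, as noted, no mainstream x86-64 linker actually emits this today). The near call only has to reach the thunk; the thunk then jumps through a full 64-bit address slot, much like a PLT entry already does.

```c
#include <stdio.h>

void real_target(void) {            /* pretend this ends up >2GiB away */
    puts("reached the far target");
}

void real_target_thunk(void);       /* entry point defined in the asm below */

/* The "range extension thunk" the linker would otherwise have to generate:
 * an indirect jump through a 64-bit address slot, so it can reach anywhere. */
__asm__(
    ".text\n"
    ".globl real_target_thunk\n"
    "real_target_thunk:\n"
    "    jmpq *real_target_slot(%rip)\n"   /* no range limit */
    ".data\n"
    "real_target_slot:\n"
    "    .quad real_target\n"              /* full 64-bit absolute address */
    ".text\n"
);

int main(void) {
    real_target_thunk();            /* the call site keeps its near call */
    return 0;
}
```

The cost is an extra indirect jump on every call that crosses the 2 GiB boundary, plus the relocation bookkeeping in the linker, which is presumably why nobody has bothered.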
It does. But maybe someone should prove (in Lean?) that the lack of flow control is sufficient.
Without a constraint that values aren’t ignored, the lack of flow control is certainly not sufficient, so trying to do this right would require defining (in Lean!) what an expression is.
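To make that concrete, here's a toy sketch in Lean 4 (the tiny grammar and names are mine, not the language under discussion) of what "defining what an expression is" looks like; once the inductive type exists, "lacks flow control" becomes a predicate you can actually quantify over.

```lean
-- Toy expression grammar; real work would model the actual language.
inductive Expr where
  | lit : Nat → Expr
  | var : String → Expr
  | add : Expr → Expr → Expr
  | ite : Expr → Expr → Expr → Expr   -- the flow control we'd want to rule out

-- The flow-control-free fragment, as a predicate on expressions.
def noFlowControl : Expr → Prop
  | .lit _     => True
  | .var _     => True
  | .add a b   => noFlowControl a ∧ noFlowControl b
  | .ite _ _ _ => False

-- Claims like "lack of flow control is sufficient for property P"
-- then become theorems of the shape:  ∀ e, noFlowControl e → P e
```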
What's the "quantum measurement problem"? And why is it a problem? I get that the wave function collapses when you measure it. But which part of this do you want to resolve in a testable way?
It’s the question of how the wave function collapses during a measurement. What exactly constitutes a “measurement”? Does the collapse happen instantaneously? Is it a real physical phenomenon or a mathematical trick?
I thought that what constitutes a measurement is well understood: it's just the entanglement between the experiment and the observer, the process is called decoherence, and the collapse itself is a probabilistic process as a result.
AFAIK an EoT is not required to design experiments to determine if it's a real physical phenomenon vs. a mathematical trick; people are trying to think up those experiments now (at least for hidden variable models of QM).
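For reference, the textbook decoherence account described above goes roughly like this (standard von Neumann measurement scheme, nothing specific to this thread):

```latex
% Measurement entangles the system with the apparatus/environment:
\[
  \Bigl(\sum_i c_i \,\lvert s_i\rangle\Bigr)\otimes\lvert A_0\rangle
  \;\longrightarrow\;
  \sum_i c_i \,\lvert s_i\rangle\otimes\lvert A_i\rangle .
\]
% Tracing out the apparatus/environment leaves an (approximately) diagonal
% reduced density matrix, i.e. a classical mixture with Born-rule weights:
\[
  \rho_S \;\approx\; \sum_i \lvert c_i\rvert^2 \,\lvert s_i\rangle\langle s_i\rvert .
\]
% Decoherence explains why the off-diagonal terms vanish; the remaining
% question is why a single outcome i is observed rather than the mixture.
```

Decoherence gets you to the diagonal mixture; the disputed step is the jump from that mixture to one definite outcome, which is what the "real phenomenon vs. mathematical trick" experiments are aimed at.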
It literally gives you Google results (plus additional search providers, usually not in the top results)... without the added spam. It's therefore superior to "peak google results".
What are you talking about with LLM services? The default search behavior does not use any LLMs (except whatever Google might use internally to reorder their top 10 results).
You can set up the terminal plus tmux to forward all the important keys to console Emacs. I like iTerm2 for its extreme configurability. Others say the kitty terminal with the kitty-term Emacs library is great, but I never got that working with tmux.
... works with Pushbullet apps.