Hacker News | yablak's comments

See also Push Go: https://chromewebstore.google.com/detail/push-go-for-pushbul...

... works with Pushbullet apps.


Yes, but it can be more of a pain keeping track of the binary/debug-info pairs. In production, though, this is what's done. And given a fault, the debug binary can be found in a database and used to gdb the issue, given the core. You do have to limit certain online optimizations in order to have useful tracebacks.

This also requires careful tracking of prod builds and their symbol files... A kind of symbol db.
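A minimal sketch of such a symbol database, assuming builds are keyed by something like a GNU build ID (all names here are hypothetical, not any particular company's system):

```python
# Toy symbol database: maps a build identifier (e.g. a GNU build ID)
# to the path of the unstripped debug binary for that production build.
class SymbolDB:
    def __init__(self):
        self._by_build_id = {}

    def register(self, build_id: str, debug_binary_path: str) -> None:
        """Record the debug binary produced alongside a stripped release."""
        self._by_build_id[build_id] = debug_binary_path

    def lookup(self, build_id: str):
        """Given the build ID found in a core dump, find the matching binary."""
        return self._by_build_id.get(build_id)

db = SymbolDB()
db.register("a1b2c3", "/symbols/myapp-a1b2c3.debug")
print(db.lookup("a1b2c3"))
```

The real thing would also record compiler flags and source revision, since you need the exact build to make the core dump's addresses meaningful.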


FAANGs were deeply involved in designing LTO. See, e.g.,

https://research.google/pubs/thinlto-scalable-and-incrementa...

And other refs.

And yet...


Google also uses identical code folding (ICF). It's pretty silly to suggest that a shop that big doesn't know about the compiler flags.

Google is made up of many thousands of individuals. Some experts will be aware of all of those flags; some won't. On my team, many didn't know about those details, as they were handled by other build teams for specific products or entire domains at once.

But since each product in some domains had to actively enable those optimizations for itself, they were occasionally forgotten, and I found a few such cases in the app I worked for (but not directly on).


ICF seems like a good one to keep in the box of flags people don't know about, because, like everything in life, it's a tradeoff, and keeping that one problematic artifact under 2GiB is pretty much the only non-debatable use case for it.

> We would like to keep our small code-model. What other strategies can we pursue?

Move all the hot BBs near each other, right?

Facebook's solution: https://github.com/llvm/llvm-project/blob/main/bolt%2FREADME...

Google's:

https://lists.llvm.org/pipermail/llvm-dev/2019-September/135...
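Both approaches boil down to placing hot basic blocks adjacent to each other. A toy greedy sketch of the idea (not BOLT's or Propeller's actual chain-merging algorithm, which is far smarter):

```python
# Toy illustration of hot-code layout: order blocks so the hottest
# ones are packed together at the front and cold blocks fall to the
# back, improving i-cache/iTLB locality.
def layout(blocks):
    """blocks: list of (name, execution_count) pairs.
    Returns block names in hottest-first order."""
    return [name for name, count in
            sorted(blocks, key=lambda b: b[1], reverse=True)]

profile = [("init", 1), ("hot_loop", 90_000), ("error_path", 3),
           ("hot_helper", 40_000)]
print(layout(profile))  # hot_loop and hot_helper end up adjacent
```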


But for x86_64, as of right now, if even a single call needs more than 31 bits of reach, you have to upgrade the whole code section to the large code model.

BOLT, AFAIU, is more about the cache locality of putting hot code near each other, not really about breaking the 2GiB barrier.
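The "31 bits" comes from the signed 32-bit displacement in a near call: 31 bits of magnitude in either direction, i.e. roughly ±2GiB of reach. A quick check:

```python
# An x86-64 near call encodes a signed 32-bit displacement, so a
# target is reachable iff the byte offset fits in [-2**31, 2**31 - 1].
def fits_in_rel32(displacement: int) -> bool:
    return -2**31 <= displacement <= 2**31 - 1

assert fits_in_rel32(2**31 - 1)   # just under +2 GiB: reachable
assert not fits_in_rel32(2**31)   # one byte too far: needs another strategy
assert fits_in_rel32(-2**31)      # full -2 GiB backward reach is fine
```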


Why? Can't the linker or post-link optimizer keep near calls everywhere they fit, leaving the more complicated mov-with-immediate form only where required?

Once the compiler has generated a 32-bit relative jump with an R_X86_64_PLT32 relocation, it’s too late. (A bit surprising for it to be a PLT relocation, but it does make some sense upon reflection, and the linker turns it into a direct call if you’re statically linking.) I think only RISC-V was brave enough to allow potentially size-changing linker relaxation, and incidentally they screwed it up (the bug tracker says “too late to change”, which brings me great sadness given we’re talking about a new platform).

On x86-64 it would probably be easier to point the relative call to a synthesized trampoline that does a 64-bit one, but it seems nobody has bothered thus far. You have to admit that sounds pretty painful.
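One way such a trampoline could look, sketched as raw machine-code bytes (an illustration of the idea, not what any real linker emits; note it clobbers rax):

```python
import struct

def trampoline_bytes(target: int) -> bytes:
    """Sketch of a stub a linker could synthesize for an out-of-range call:
         movabs rax, target   ; 48 B8 + imm64
         jmp    rax           ; FF E0
    The original rel32 call site is then pointed at this nearby stub."""
    return b"\x48\xb8" + struct.pack("<Q", target) + b"\xff\xe0"

stub = trampoline_bytes(0x1_0000_0000)  # a target beyond +/-2 GiB
assert len(stub) == 12                  # 10-byte movabs + 2-byte jmp
```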


IIUC the model assumes no flow control, only select.

It does. But maybe someone should prove (in Lean?) that the lack of flow control is sufficient.

Without a constraint that values aren’t ignored, the lack of flow control is certainly not sufficient, so trying to do this right would require defining (in Lean!) what an expression is.
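As a sketch of what that definition might look like (illustrative only, far from a full formalization), an expression language with select but no branching:

```lean
-- Illustrative only: a tiny expression language with `select` but no
-- control flow, as a starting point for stating the claim in Lean.
inductive Expr where
  | const  : Nat → Expr
  | var    : Nat → Expr
  | add    : Expr → Expr → Expr
  | select : Expr → Expr → Expr → Expr  -- cond, then, else; both arms evaluated
```

The interesting part would then be proving that every `Expr` contributes to the result, i.e. that no value is silently ignored.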


Happy user of kagi for several years. This is the opposite of my experience. Your comment strikes me as dishonest.

I am also very happy with Kagi's search results and suspect that someone is just trolling.

Hope Tailscale adopts this.

What's the "quantum measurement problem"? And why is it a problem? I get that the wave function collapses when you measure it. But which part of this do you want to resolve in a testable way?


It’s the question of how the wave function collapses during a measurement. What exactly constitutes a “measurement”? Does the collapse happen instantaneously? Is it a real physical phenomenon or a mathematical trick?


I thought that what constitutes a measurement is well understood; it's just the entanglement between the experiment and the observer, the process is called decoherence, and the collapse itself is then a probabilistic process.

AFAIK an EoT is not required to design experiments to determine if it's a real physical phenomenon vs. a mathematical trick; people are trying to think up those experiments now (at least for hidden variable models of QM).




It literally gives you Google results (plus additional search providers, usually not in the top results)... without the added spam. It's therefore superior to "peak Google results".

What are you talking about with LLM services? The default search behavior does not use any LLMs (except whatever Google might use internally to reorder its top 10 results).


You can set up the terminal+tmux to forward all the important keys to console emacs. I like iTerm2 for its extreme configurability. Others say the kitty terminal with its kitty-term emacs library is great; I never got that working with tmux.

