Hacker News | xylophile's comments

Not a popular opinion, but Red Hat (now IBM) funds an enormous amount of critical open source. They pay people to contribute to hundreds of upstream projects. And RHEL is 100% focused on stability. Sounds like a good match for your priorities / goals.


Any optimizations discovered at runtime by a JIT can also be applied to precompiled code. The precompiled code is then not spending runtime cycles looking for patterns, or only doing so in the minimally necessary way. So for projects which are maximally sensitive to performance, native will always be capable of outperforming JIT.

It's then just a matter of how your team values runtime performance vs other considerations such as workflow, binary portability, etc. Virtually all projects have an acceptable range of these competing values, which is where JIT shines, in giving you almost all of the performance with much better dev economics.


I think you can capture that constraint as "anything that requires finely deterministic high performance is out of reach of JIT-compiled outputs".

Obviously JITting means you'll have a compiler executing sometimes alongside the program, which implies a runtime by construction, and some notion of warmup to get to a steady state.

Where I think there's probably untapped opportunity is in identifying these meta-stable situations in program execution. My expectation is that there are execution "modes" that cluster together more finely than static typing would allow you to infer. This would apply to runtimes like wasm too - where the modes of execution would be characterized by the actual clusters of numeric values flowing to different code locations and influencing different code-paths to pick different control flows.

You're right that, on balance, trying to, say, allocate registers at runtime necessarily allows less optimization scope than doing it ahead of time.

But, if you can be clever enough to identify, at runtime, preferred code-paths with higher resolution than what (generic) PGO allows (because now you can respond to temporal changes in those code-path profiles), then you can actually eliminate entire codepaths from the compiler's consideration. That tends to greatly affect the register pressure (for the better).

It might be interesting just to profile some wasm executions of common programs, to see whether transient clusterings of control-flow paths manifest during execution. It'd be a fun exercise...


It is accomplishing something. When 40 different applicants are equally able to do the job, the only selector you have is "culture fit", which is where bias starts to easily kick in (race, age, whatever), and that is a legal risk.

The leetcode hoops exist to provide a provably objective measure for hiring, even though that measure is unrelated to job performance. It's purely a lawsuit avoidance mechanism.


I came to this thread purely to see if I was the only enlightened one.

Stalwart is perfect for small self-hosters: a single binary, a single-directory resilient datastore (by default), a UI for every setting, and defaults that guide you to a DNS config which maximizes your sender score. Plus support for all of the "power user" features such as ManageSieve and shared CalDAV folders.

Honestly, I love hosting my email now. And the last remaining battery which could possibly be included is now WIP: webmail!

Unix philosophy need not apply when there is exactly one use case for integrating these tools. (Or at least, one case which covers 99% of users. The remainder can keep their menagerie of arcane config formats and susceptibility to unsafe-language CVEs.)


I'd like to read this but my eyeballs are burning. And since it's only a handful of pages, there's no reason for it to be a PDF.


The executive class is entirely based on personal branding. If you're not changing anything, it's like being a TikTok influencer without posting any new videos. It doesn't matter what you post really, and often the more controversial you are the better. If you play your cards right, you're not an "idiot" for making the company worse, you're a "bold and innovative thought leader".

You often see the same thing from ambitious managers. Aka, "managers gonna manage".

The other part of the equation is pure politics and PR, which at least does provide some real value to the company (if only temporarily, and at long-term net negative). Amazon made it pretty clear that their RTO was all about maintaining their relationship with politicians.


The Docker daemon runs as root, and runs continuously.

If you're running rootless Podman containers then the Podman API is only running with user privileges. And, because Podman uses socket activation, it only runs when something is actively talking to it.


Sometimes it's possible to avoid the Podman API entirely. Convert the compose file to quadlet files with the command-line tool podlet and start the container with "systemctl --user start myapp.service". Thanks to Podman's fork/exec architecture, the container can then be started without touching the Podman API at all.
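As a sketch, a hand-written quadlet unit for a hypothetical myapp container might look like this (the image name and port are placeholders):

```ini
# ~/.config/containers/systemd/myapp.container
[Unit]
Description=My app container

[Container]
Image=docker.io/library/myapp:latest
PublishPort=8080:8080

[Install]
# Start automatically on user login
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, quadlet generates myapp.service from this file, and `systemctl --user start myapp.service` launches the container directly via fork/exec, with no long-running API daemon involved.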


Yes, either quadlet files or hand-written podman CLI invocations in .service files are the way to go. I don't like using `podman generate systemd` because it hides the actual configuration of the container; I see no point in being stateful...


Trying to be genuinely helpful here:

After many years of "I want stability and evergreen", I finally realized that this is Fedora. Each release is very stable, and they arrive more often than once an eon.


You mean curl?


You can always edit the file in the container and re-upload it with a different tag. That's not best practice, but it's not exactly sorcery.


It's not, but at that point you're giving up on most of the things Docker was supposed to get you. What about when you need to upgrade a library dependency (but not all of them, just that one)?


I'm not sure what the complication here is. If the application code or a dependency changes, you build a new Docker image, possibly with an updated Dockerfile if that's required. The Dockerfile is part of the application repo and versioned just like everything else in it. CI/CD builds and pushes a new image on PRs or tag creation, just as it would for any other application package or artifact. Frequent builds do consume registry space over time, but you can handle that by periodically cleaning out old images once you determine they're no longer needed.
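As a sketch of that flow, assuming a Python app (base image, file names, and versions are illustrative): the earlier question about upgrading one library but not the rest comes down to a one-line change in the pinned dependency list, followed by a rebuild:

```dockerfile
# Dockerfile, versioned alongside the application code
FROM python:3.12-slim
WORKDIR /app

# Dependencies are pinned in requirements.txt; upgrading a single
# library (and only that one) is a one-line edit there, then a rebuild.
# Copying it before the app code keeps the install layer cached when
# only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "main.py"]
```

CI/CD then builds, tags, and pushes the result (e.g. `docker build -t registry.example.com/myapp:1.4.1 .` followed by a push, with hypothetical registry and tag names), exactly like any other versioned artifact.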

