themgt's comments | Hacker News

Demand inelasticity.

> our business is strong. gross profit continues to grow, we continue to serve more and more customers

The Act of Killing is near the top of my list of underappreciated films. Permanently haunting.

This article from 2017 goes over the same story but provides better context: https://www.zocalopublicsquare.org/slave-gardener-turned-pec...

Thanks, that was a better-written article than the above.

> Credits (ꞓ) are the fuel for Clawsensus. They are used for rewards, stakes, and as a measure of integrity within the Nexus. ... Credits are internal accounting units. No withdrawals in MVP.

chef's kiss


Thanks. I like to tinker, so I’m prototyping a hosted $USDC board, but Clawsensus is fundamentally local-first: faucet tokens, in-network credits, and JSON configs on the OpenClaw gateway.

In the plugin docs there's a config UI builder. The plugin is OSS; the boards aren't.


Griftception


The commodification of expertise writ large is a bit mind boggling to contemplate.


Much like the Biden team wisely embraced the Dark Brandon meme. To quote the ancient stoic wisdom imparted to Punxsutawney Phil, "don't drive angry."

https://x.com/JoeBiden/status/1756888470599967000


Yes, but it was never more private than the law allowed. Any judge could lawfully have the police tear the envelope open and read the contents during an investigation.

This is more like a judge ordering phone book providers not to list a phone number for a public organization known to engage in criminal activity. It would be prima facie unconstitutional in America, while the police opening a suspect's envelope can be an authorized legal search.


> The USA cannot do it, because there is actually a law against cutting off communications systems dating back to 1944. Of course there have been attempts to make it possible.

The link you provided says:

> In 1942, during World War II, Congress created a law to grant President Franklin D. Roosevelt or his successors the power to temporarily shut down any potentially vulnerable technological communications technologies.

> The Unplug the Internet Kill Switch Act would reverse the 1942 law and prevent the president from shutting down any communications technology during wartime, including the internet.

> The House version was introduced on September 22 as bill number H.R. 8336, by Rep. Tulsi Gabbard (D-HI2). The Senate version was introduced the same day as bill number S. 4646, by Sen. Rand Paul (R-KY).

The bill did not pass and did not become law. So what are you referring to?


If you're not Google, please for the love of god, please consider just launching a monolith and database on a Linux box (or two) in the corner and see how beautifully simple life can be.

You can literally get a Linux box (or two) in the corner and run:

  curl -sfL https://get.k3s.io | sh -
  cat <<EOF | kubectl apply -f -
  ...(json/yaml here)
  EOF

How am I installing a monolith and a database on this Linux box without Kubernetes? Be specific; just show the commands for me to run. That's Kubernetes, and it will work for ~anything. HNers spend more tokens complaining about the complexity than it takes to set up.
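For concreteness, here is one hypothetical sketch of what the elided manifest could look like: a single-replica monolith plus a Postgres Deployment, applied via a heredoc. The image names, port, and password are placeholders, not a production config.

```shell
# Hypothetical "monolith + database" manifest for the same k3s box.
# Everything here (images, port, password) is illustrative only.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata: {name: monolith}
spec:
  replicas: 1
  selector: {matchLabels: {app: monolith}}
  template:
    metadata: {labels: {app: monolith}}
    spec:
      containers:
        - name: app
          image: myapp:latest              # placeholder image
          ports: [{containerPort: 8080}]
---
apiVersion: apps/v1
kind: Deployment
metadata: {name: db}
spec:
  replicas: 1
  selector: {matchLabels: {app: db}}
  template:
    metadata: {labels: {app: db}}
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - {name: POSTGRES_PASSWORD, value: changeme}  # use a Secret in practice
EOF
```

That's the whole "deployment pipeline" for the toy case; `kubectl apply -f -` is idempotent, so re-running it is safe.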

The mental gymnastics required to express oneself in YAML, rather than, say, literally anything else

Like, Brainfuck? Like Bash? Like a Terraform/HCL/Puppet/Chef/Ansible pile-o-scripts? The effort required to output your desired infrastructure's definition as JSON shouldn't really be that gargantuan. You can express yourself in anything else, but it can't be dumped to JSON?
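To make the point concrete, here's a hypothetical sketch that builds the same kind of Deployment manifest as JSON with jq instead of hand-writing YAML (kubectl accepts JSON anywhere it accepts YAML). The resource names and image are placeholders.

```shell
# Hypothetical sketch: generate a k8s Deployment as JSON with jq.
# Any language that can print JSON would do the same job.
manifest="$(jq -n '{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: {name: "monolith"},
  spec: {
    replicas: 1,
    selector: {matchLabels: {app: "monolith"}},
    template: {
      metadata: {labels: {app: "monolith"}},
      spec: {containers: [{name: "app", image: "myapp:latest"}]}
    }
  }
}')"
printf '%s\n' "$manifest"
# To apply it (cluster required, so not run here):
#   printf '%s\n' "$manifest" | kubectl apply -f -
```

Swap jq for Python, TypeScript, or whatever emits JSON; the cluster doesn't care.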


I'm saying this as a Kubernetes-certified service provider:

Just because you can install it with one command doesn't mean it's not complex; it's just been made easier, not simpler.


Yah, also there is a huge difference between a minimal demo and actual, recommended, canonical deployments.

I’ve seen teams waste many months refining k8s deployments only to find that local development isn’t even possible anymore.

This massive investment often happens before any business value has been uncovered.

My assertion, having spent three decades building startups, is that these big-co infra tools are functionally a psyop to squash potential competitors before they can find PMF.


When you're comparing Kubernetes "recommended, canonical deployments" to "just launching a monolith and database on a Linux box (or two) in the corner" the latter is obviously going to seem simpler. The point is the k8s analogue of that isn't actually complicated. If you've seen teams waste months making it complicated, that was their choice.


No argument here.

If you’re running things differently and getting tons of value with little investment, kudos! Keep on keeping on!

What I’ve seen is that the vast majority of teams that pick up k8s also drink the micro service kool-aid and build a mountain of bullshit that costs far more than it creates.


Overall I like Dagger conceptually, but I wish they'd start focusing more on API stability and documentation (tbf it's not v1.0). v0.19 broke our Dockerfile builds and I don't feel like figuring out the new syntax atm. Having to commit dev time to the upgrade treadmill to keep CI/CD working was not the dream.

re: the cloud specifically see these GitHub issues:

https://github.com/dagger/dagger/issues/6486

https://github.com/dagger/dagger/issues/8004

Basically if you want consistently fast cached builds it's a PITA and/or not possible without the cloud product, depending on how you set things up. We do run it self-hosted though, YMMV.


One thing I liked about switching from a Docker-based solution like Dagger to Nix is that it relaxed the infrastructure requirements for getting good caching properties.

We used Dagger, and later Nix, mostly to implement various kinds of security scans on our codebases using a mix of open-source tools and clients for proprietary ones that my employer purchases. We've been using Nix for years now, and still haven't set up any binary cache of our own. But we still have mostly-cached builds thanks to the public NixOS binary cache, and we hit that relatively sparingly because we run those jobs on bare metal in self-hosted CI runners. Each scan job typically finishes in less than 15 seconds once the cache is warm, and takes up to 3 minutes when the local cache is cold (in cases where we build a custom dependency).

Some time in the next quarter or two I'll finish our containerization effort for this so that all the jobs on a runner will share a /nix/store and Nix daemon socket bind-mounted from the host, so we can have relatively safe "multi-tenant" runners where all jobs run under different users in rootless Podman containers while still sharing a global cache for all Nix-provided dependencies. Then we get a bit more isolation and free cleanup for all our jobs but we can still keep our pipelines running fast.
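A rough sketch of what that bind-mounting might look like, as a single hypothetical invocation (flags, image, and the test command are illustrative and would need tuning for a real runner setup):

```shell
# Hypothetical sketch: a rootless Podman container that shares the host's
# Nix store (read-only) and talks to the host's Nix daemon for builds.
podman run --rm \
  -v /nix/store:/nix/store:ro \
  -v /nix/var/nix/daemon-socket/socket:/nix/var/nix/daemon-socket/socket \
  -e NIX_REMOTE=daemon \
  docker.io/nixos/nix:latest \
  nix-shell -p hello --run hello   # placeholder job to prove the cache works
```

With `NIX_REMOTE=daemon`, the in-container Nix client asks the host daemon to realize store paths, so every job on the runner shares one global cache while still running in its own container.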

We only have a few thousand codebases, so a few big CI boxes should be fine, but if we ever want to autoscale down, it should be possible to convert such EC2 boxes into Kubernetes nodes, which would be a fun learning project for me. Maybe we could get wider sharing that way and stand up fewer runner VMs.

Somewhere on my backlog is experimenting with Cachix, which should give us per-derivation caching as well, finer-grained than Docker's layers.

