BenoitEssiambre's comments | Hacker News

I know doctors who've worked with Marrero, and they're split in their opinion of him. They agree he tends to be "excessively thorough", frequently sending tests to labs across the world. This makes him liked by desperate patients with potentially incurable diseases who want someone to "do something".

They are split on whether his thoroughness is just fueling false hopes and sending patients down unnecessary rabbit holes, or whether he could have identified a real issue.


Note that the "Labor vs Capital" distinction mostly means "workers vs retirees". The reason more money goes to capital these days is not necessarily that each retiree is getting more, but that in an aging population there are more retirees, so more resources have to be diverted from workers to support this larger non-working population. This problem could have been solved with more babies 20 years ago, or now with more immigration of workers to share the burden (unless AI makes everything weird).

I understand what you are saying, but retirees are not what people mean when they talk about Capital. They are talking about executives, fund managers, billionaires, and so on. People who actually control much of our society. Yes, many of the funds are managing the retirements of working people, but that does not necessarily need to be the case, nor do those retirees have any active ownership of the companies those funds invest in.

Right, but the majority of people holding significant amounts of capital are retirees or people saving for retirement. There is only a small minority of people wealthy for other reasons. It doesn't really make sense to strongly associate these people with "capital" since they are a small minority of capital holders.

There's been a lot written on Labor vs Capital, so I will just suggest you research the topic because you are not on target here.

Do you have a link? What I've seen in most discussions is an obscuring of the fact that the majority of "capital" is, directly or indirectly, retirees or people saving for retirement. Those in the top 5% by wealth often need to survive on that wealth for decades, so their per-year spending power isn't as high as it looks. You too will be at the wealthiest percentile of your life when you are nearing retirement.

Honestly? Your favorite LLM will be able to build you a syllabus better tailored to fit your current level of understanding. But in short, Capital is the people making decisions about how money is expressed through the economy.

But to make what I am saying concrete: take a VC fund like Lightspeed that manages tens of billions of dollars. They do this on behalf of LPs, of which the largest are typically large pools of long-term capital like university endowments and pension funds.

Lightspeed, for example, has received ~$400m in investments from CalPERS (retirement program for some California public employees) alone. That represents tens of thousands of employees and former employees. Even if the notional value of one of their retirements is (say) $2m, that person is still Labor. Capital is represented by the people at CalPERS who decide where to allocate money, and by the Lightspeed GPs who decide which startups to fund. A former county attorney who has money in CalPERS has essentially no say over how her retirement funds are invested (if she's on a pension); that decision is down to a relative handful of people she will likely never meet.

A quick way to tell the difference: someone in the top 5% by wealth can buy the house next door and make it ugly, a small nuisance. Capital can crash the entire economy with bad bets, or get laws changed or ignored at its behest (we are watching this right now with X's CSAM generation on demand).


Unix and Linux would be your quintessential examples.

Unix was an effort to take Multics, an operating system that had gotten too modular, and integrate the good parts into a more unified whole (book recommendation: https://www.amazon.com/UNIX-History-Memoir-Brian-Kernighan/d...).

Even though there were some benefits to the modularity of Multics (apparently you could unload and replace hardware in Multics servers without reboot, which was unheard of at the time), it was also its downfall. Multics was eventually deemed over-engineered and too difficult to work with. It couldn't evolve fast enough with the changing technological landscape. Bell Labs' conclusion after the project was shelved was that OSs were too costly and too difficult to design. They told engineers that no one should work on OSs.

Ken Thompson wanted a modern OS so he disregarded these instructions. He used some of the expertise he gained while working on Multics and wrote Unix for himself (in three weeks, in assembly). People started looking over Thompson's shoulder being like "Hey what OS are you using there, can I get a copy?" and the rest is history.

Brian Kernighan described Unix as "one of" whatever Multics was "multiple of". Linux eventually adopted a similar architecture.

More here: https://benoitessiambre.com/integration.html


Are you equating success with adoption or use? I would say there is a lot of software that is widely used but is a mess.

What would be a competitor to Linux that is also FOSS? If there's none, how do you assess the success or otherwise of Linux?

Assume Linux was adopted but did not succeed; what would that scenario look like? Is the current situation with it different from that?


> What would be a competitor to Linux that is also FOSS? If there's none, how do you assess the success or otherwise of Linux?

*BSD?

As for large, successful open source software: GCC? LLVM?


If you click on the link, I mention other competing attempts and architectures, like Multics, Hurd, MacOS and even early Windows that either failed or started adopting Unix patterns.


So which base style and tone simply gives you less sycophancy? It's not clear from their names and description. I'm looking for the "Truthful" personality.


At least nickels should go, so we can always round to one digit (the nearest dime).


Yeah, the example with the eggs isn't great because an LLM would indeed get the correct interpretation, but the thing is, this is because LLMs have been trained on the relevant context. When an LLM has the context, it is usually able to correctly fill the gaps in vague English specifications. But if you are operating at the bleeding edge of innovation, or in depths of industry expertise that LLMs didn't train on, it won't be in a position to fill those blanks correctly.

And domains with less training data openly available are areas where innovation and differentiation and business moats live.

Oftentimes, only programming languages are precise enough to specify this type of knowledge.

English is often hopelessly vague. See how many definitions the word break has: https://www.merriam-webster.com/dictionary/break

And Solomonoff/Kolmogorov theories of knowledge say that programming languages are the ultimate way to specify knowledge.
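
A toy sketch of that precision point in Python (all names here are invented for illustration): the English instruction "separate the eggs" has at least two readings, while a short function has to commit to exactly one.

    # Invented example: English "separate the eggs" is ambiguous (separate
    # them from each other? separate yolks from whites?). A program has to
    # pick exactly one interpretation, and that choice is then inspectable.
    def separate_eggs(eggs):
        """Split each egg into yolk and white; shells are discarded."""
        yolks = [egg["yolk"] for egg in eggs]
        whites = [egg["white"] for egg in eggs]
        return yolks, whites

    print(separate_eggs([{"yolk": "y1", "white": "w1"},
                         {"yolk": "y2", "white": "w2"}]))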


A CLI might be the most information-theoretically efficient form of API, significantly more succinct than e.g. JSON-based APIs. It's fitting that it would be optimal for Claude Code, given the origin of the name "Claude".

Information theoretic efficiency seems to be a theme of UNIX architecture: https://benoitessiambre.com/integration.html.
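
As a rough, invented illustration of the succinctness claim (the command and fields below are made up), here is the same request expressed both ways:

    import json

    # Hypothetical request, once as a CLI invocation and once as a JSON body.
    cli_form = "resize photo.png --width 800 --height 600"
    json_form = json.dumps({
        "command": "resize",
        "input": "photo.png",
        "options": {"width": 800, "height": 600},
    })

    # The CLI string carries the same information in fewer bytes.
    print(len(cli_form), len(json_form))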


To add to this, there are fundamental information-theoretic principles that support inlining code and components. It's about reducing code entropy, reducing length and referential distances. https://benoitessiambre.com/entropy.html

The good thing is that LLMs try to optimize for information-theoretic measures of language, so they naturally generate better-scoped, more inline code. LLMs might help us win this battle :-)
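
A minimal sketch of the "referential distance" point (function names invented): a one-use helper defined away from its only call site, versus the same logic inlined where it is read.

    # Before: understanding report() requires a jump to a helper defined
    # elsewhere, purely to wrap a few tokens of logic.
    def _normalized(total, count):
        return total / count if count else 0.0

    def report(values):
        return _normalized(sum(values), len(values))

    # After: the same logic inlined, so the whole meaning sits in one place.
    def report_inlined(values):
        return sum(values) / len(values) if values else 0.0

    print(report([1, 2, 3]), report_inlined([1, 2, 3]))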


To add to this: there are fundamental theoretical reasons why microservices are bad. They increase the entropy of code (https://benoitessiambre.com/entropy.html) by increasing globally scoped dependencies. They are the global variables of architecture. Having lots of interconnected global variables makes for an unpredictable, chaotic system.
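
A loose, purely illustrative analogy for the "global variables" point (not microservice code): behaviour that depends on globally reachable, mutable state can't be understood locally, while the same logic with explicit inputs can.

    # Global, mutable state that any distant component may change.
    TAX_RATE = {"value": 0.25}

    def total_with_global(price):
        # The result depends on whoever touched TAX_RATE last.
        return price * (1 + TAX_RATE["value"])

    def total_with_param(price, tax_rate):
        # Fully determined by its inputs; readable in isolation.
        return price * (1 + tax_rate)

    TAX_RATE["value"] = 0.5  # some far-away component changes it
    print(total_with_global(100))        # 150.0, for reasons not visible here
    print(total_with_param(100, 0.25))   # 125.0, no matter what else ran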


Asynchronous queues make your data out of sync (hence the name) and inconsistent, one of the main downsides of microservices. Their use should be minimized to cases where they are really necessary. A functional, transactional layer like Postgres is the solution: it lets your source of truth be accessed in a synchronized, atomic, consistent way.
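
A minimal sketch of the transactional point, using the standard-library sqlite3 module rather than Postgres (the idea is the same): related updates become visible together or not at all, so readers never see a half-applied state the way they can when updates trickle through a queue.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
    con.executemany("INSERT INTO accounts VALUES (?, ?)",
                    [("alice", 100), ("bob", 0)])

    # One transaction: both updates commit together, or neither does.
    with con:
        con.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        con.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")

    print(con.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
    # [('alice', 70), ('bob', 30)]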


No, I disagree with that completely actually.

Functions and handlers should not care where data comes from, just that they have data, and a queue is the abstraction of that very idea. Yes, you lose atomicity, but atomicity is generally slow, more problematic, and carries a high amount of coupling.

I don't agree that being out of sync is the main downside of microservices; the main downside is that anything hitting the network is terrible. Latency is high, computers crash, you pay a cost for serialization and deserialization, libraries can be inconsistent, and zombie processes can screw up queues. In-process stuff being non-synchronized wouldn't even hit my top five.

ETA:

I should be clear: obviously there are times when you want or need synchronization, and in those cases you should use some kind of synchronization mechanism, like a mutex (or a mutex-backed store, e.g. ConcurrentHashMap) for in-process stuff, or a SQL DB for distributed stuff. But I fundamentally disagree with the idea that this should be the default; if you design your application around the idea of data flow, then explicit synchronization is the exception.
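
A small sketch of that "synchronization as the exception" idea in Python (names invented): the one shared counter that genuinely needs it gets an explicit lock, and nothing else does.

    import threading

    counts = {}
    counts_lock = threading.Lock()

    def record(event):
        # The one place where shared state is mutated, so the one place
        # that takes a lock.
        with counts_lock:
            counts[event] = counts.get(event, 0) + 1

    threads = [threading.Thread(target=record, args=("login",)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counts)  # {'login': 10}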


I'll agree that the network layer adds more problems to microservices, but even with a perfect network they are problematic. Everything being out of sync (if they are stateful microservices, which queues imply) is one big issue. Things being interconnected in broad global scopes instead of more locally scoped ones is the other big issue.

The more you have globally interconnected and out of sync states, the less predictable your system is.

The solution is to be as hierarchical, as tightly scoped, as functional and as transactional as you can.

That's how you tackle complexity and create intelligent systems: https://benoitessiambre.com/entropy.html


I think we are at a fundamental disagreement on this.

You can make asynchronous code predictable if you utilize something like TLA+, or treat the code as a protocol system.

