
>> You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at.

> I don't understand this argument. I mean the same applies for books. All books teach you what has come before. Nobody says "You can't make anything truly radical with books". Radical things are built by people after reading those books.

Books share concepts expressed by people who understand those concepts (or purport to), in a manner relatable to the reader. This is achievable because both parties are human and share a largely common lived experience.

In short, people reason, learn, remember, and can relate with each other.

> Why can't people build radical things after learning ...

They absolutely can and often do.

> ... or after being assisted by LLMs?

Therein lies the problem. LLMs are not assistants.

They are statistical token (text) document generators. That's it.


> Therein lies the problem. LLMs are not assistants.

Assisting a person and being an assistant are not synonymous. A cane assists a man while he walks. It is a stick. That's it.


> Assisting a person and being an assistant are not synonymous. A cane assists a man while he walks. It is a stick. That's it.

The difference here is no reasonable person claims a cane can teach someone to walk.


> They are statistical token (text) document generators. That's it.

I don’t know why people post this as some kind of slam dunk.


Not a slam dunk, just a factual statement.

At the end of the day, it's true? There are times where that suffices, and times where it doesn't.

It might be true, but it is extremely reductive and used pejoratively.

> That’s not a book, that’s just a soup of letters! That’s not art, that’s just paint on a sheet! That’s not an instrument, that’s just a bunch of crap glued together!

If you’re an edgy cynic and treat anything this way, that’s fine. But if you’re singling out LLMs just because you don’t like them, then you’re a hypocrite.


If you think calling a calculator a calculator is offensive to the point of calling someone a cynic and a hypocrite, you might be a bit too invested.

> There's an important behavioral difference between Scala 2 and 3: in 2, @inline was merely a suggestion to the compiler, whereas in 3, the compiler unconditionally applies the inline keyword. Consequently, directly replacing @inline with inline when migrating from 2 to 3 is a mistake.

This reminds me of a similar lesson C/C++ compilers had to learn with the "register" keyword. Early versions treated the keyword as a mandate. As compiler optimizers became more refined, "register" was demoted first to a recommendation and then ultimately ignored.

The C++ inline keyword is treated similarly, with different metrics used, of course.
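
For illustration, a minimal Scala 3 sketch of the unconditional behavior (the names "logIf" and "run" are hypothetical):

  // Scala 3: `inline` is a guarantee, not a hint. Each call below is
  // unconditionally expanded at the call site; if the compiler cannot
  // inline it, compilation fails rather than silently falling back.
  inline def logIf(inline cond: Boolean, msg: String): Unit =
    if cond then println(msg)

  @main def run(): Unit =
    logIf(true, "expanded at the call site")
Under Scala 2's @inline, the same definition could be silently left as an ordinary method call, which is why a mechanical replacement changes behavior.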

EDIT:

Corrected reference to early C/C++ keyword from "auto" to "register".


> The C++ inline keyword is treated similarly as well, with different metrics used of course.

You are thinking of C's inline/static inline.

C++'s "inline" semantics (which are implied for constexpr functions, in-class-defined methods, and static constexpr class attributes) allow for multiple "weak" copies of a function or variable to exist with external linkage. Rather than just an optimization hint it's much more of a "I don't want to put this in any specific TU" these days.


Do you mean the ‘register’ keyword?

My root-cause analysis:

I was visualizing Scala method definitions and associated the language's type inference with keyword use, thus bringing C++'s "auto" keyword to mind when the long-since deprecated "register" keyword was the correct subject.

It would appear LLMs are not the only entities which can "hallucinate" a response. :-D


> Do you mean the ‘register’ keyword?

Yes I did, my bad.


And now we have things like `__attribute__((always_inline))` for GCC where you are completely, 100% sure that you want to inline :).

From the article:

  In distributed systems, there’s a common understanding that 
  it is not possible to guarantee exactly-once delivery of 
  messages.
This is not only a common understanding, it is a provably correct axiom. For a detailed discussion regarding the concepts involved, see the "Two Generals' Problem"[0].

Guaranteeing exactly-once processing requires a Single Point of Truth (SPoT) enforcing uniqueness shared by all consumers, such as a transactional persistent store. Independently derived or generated "idempotency keys" cannot provide the same guarantee.

The author goes on to discuss using the PostgreSQL transaction log to create "idempotency keys", which is a specialization of the aforementioned SPoT approach. A more performant variation is the "hi/low" algorithm[1] (sketched below), which can reduce SPoT round trips to one per 2,147,483,648 keys when both the hi and low values are 32-bit signed integers restricted to non-negative values.

Still and all, none of the above establishes logical message uniqueness. This is a trait of the problem domain, which determines whether two or more messages having the same content are considered distinct (thus mandating different "idempotency keys") or duplicates (thus mandating identical "idempotency keys").

0 - https://en.wikipedia.org/wiki/Two_Generals'_Problem

1 - https://en.wikipedia.org/wiki/Hi/Lo_algorithm
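
For illustration, a minimal thread-safe hi/lo sketch in Scala ("HiLoGenerator" and "fetchNextHi", the transactional SPoT allocation, are hypothetical names):

  // Hi/Lo key generator: one SPoT round trip per 2^31 keys.
  final class HiLoGenerator(fetchNextHi: () => Int):
    private val LowRange = 1L << 31          // 2,147,483,648 low values per hi
    private var hi: Long = fetchNextHi().toLong
    private var low: Long = 0L

    def nextKey(): Long = synchronized {
      if low == LowRange then                // low values exhausted:
        hi = fetchNextHi().toLong            // one more SPoT round trip
        low = 0L
      val key = hi * LowRange + low          // fits in a Long while hi is a 31-bit value
      low += 1
      key
    }
This guarantees key uniqueness and per-generator monotonicity, not the logical message uniqueness discussed above.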


> it is a provably correct axiom.

Pedantically, axioms by definition are assumed/defined without proof and not provable; if it is provable from axioms/definitions, it is a theorem, not an axiom.




It's just not what the word axiom means, nor how anyone uses it. An axiom is unprovable by definition: it is a thing we accept to be true because it is useful to do so (e.g. that there exists an empty set).

"Provably Correct Axiom" is nonsense. An axiom is unprovable.

Just "provably correct" would've been fine. This chess stuff is hilariously pretentious.


> Just "provably correct" would've been fine. This chess stuff is hilariously pretentious.

Apparently a bit of humour when responding to a self-identified pedantic response didn't come across as I thought it would.

Lesson learned.


It's grok-level cringe is what it is.

Sorry, but the entire line of argumentation and all its chess flavor is miles off the mark. This is not the sound of arguing with someone who has studied what they talk about.

> Sorry, but the entire line of argumentation and all its chess flavor is miles off the mark.

Apparently a bit of humour when responding to a self-identified pedantic response didn't come across as I thought it would.

Lesson learned.


I just nod and keep playing checkers.

> And, of course, a hypothesis is capable of being proven.

No, that's just you not understanding the definition of 'postulate'.


Username checks out

> Username checks out

Of all possible replies in the set of "meaningful and/or contributory", this does not have membership thereof.

At least the other responders had the common courtesy of putting forth some intellectual effort rather than resorting to a trite Reddit meme.


> A more performant variation of this approach is the "hi/low" algorithm

I am discussing this approach, just not under that name:

> Gaps in the sequence are fine, hence it is possible to increment the persistent state of the sequence or counter in larger steps, and dispense the actual values from an in-memory copy.

In that model, a database sequence (e.g. fetched in 100 increments) represents the hi value, and local increments to the fetched sequence value are the low value.

However, unlike the log-based approach, this does not ensure monotonicity across multiple concurrent requests.


> However, unlike the log-based approach, this does not ensure monotonicity across multiple concurrent requests.

Is this a functional requirement when a system is multi-process? Specifically, a single multi-threaded process could provide monotonic guarantees across all messages processed.

Once 2+ processes are introduced, it is impossible to guarantee monotonicity globally unless a SPoT (a.k.a. "queue") is used for delivery, due to variations in message delivery times, performance of machines running each process, etc. An additional implication is that there is no provable way to determine whether a message received by process "A" logically precedes a message received by process "B" without authoritative information in the messages.

In short, an answer which generalizes the question:

  Can we get the space efficiency for consumers when using 
  monotonically increasing idempotency keys, without 
  hampering performance of multi-threaded producers?
Is to use the "hi/low" algorithm in both cases. For a single multi-threaded process, only the periodic allocation of the "hi" value needs coordination, and the process can provide verifiably monotonic values. For multi-process (and potentially multi-threaded) solutions, it ensures each process does not produce duplicate keys while providing a per-process partial ordering guarantee.
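
As a sketch of the multi-process case, the composite key can carry the per-process ordering explicitly (the field layout is an illustrative assumption):

  // Keys are unique across processes; ordering is only meaningful
  // between keys sharing a processId (a per-process partial order).
  final case class IdempotencyKey(processId: Int, hi: Int, low: Int)

  object IdempotencyKey:
    given Ordering[IdempotencyKey] =
      Ordering.by((k: IdempotencyKey) => (k.processId, k.hi, k.low))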

It should be noted that when you have bad actors in your system, almost all guarantees of any kind go out the window.

> What specifically happened in June to set this off?

Tariffs implemented by this administration:

  "Inflation has begun to show the first signs of tariff 
  pass-through," said Ellen Zentner, chief economic 
  strategist at Morgan Stanley Wealth Management. "While 
  services inflation continues to moderate, the acceleration 
  in tariff-exposed goods in June is likely the first of 
  greater price pressures to come. The Fed will want to hold 
  steady as it awaits more data."[0]

0 - https://www.reuters.com/business/us-inflation-expected-rise-...

Why did European prices have the same increase then?

> Why did European prices have the same increase then?

Where do Europeans get their DRAM from?

If it is the same handful of companies the US gets their DRAM from, then why would Europeans pay any less? Because the EU is not engaging in the same asinine trade war?

Sounds good in theory, but in practice those same few companies can set prices for markets outside the US to be at/near US prices. It doesn't take much effort for manufacturers to set their prices at or near those of their competitors and rely on an implicit mutually assured destruction[0] understanding.

0 - https://en.wikipedia.org/wiki/Mutual_assured_destruction


> Sounds good in theory, but in practice those same few companies can set prices for markets outside the US to be at/near US prices. It doesn't take much effort for manufacturers to set their prices at or near those of their competitors and rely on an implicit mutually assured destruction[0] understanding.

If a company in country A sells a widget W to both the EU and the USA, such that a consumer in the EU and the USA pay the same prices even though the USA has a tariff and Europe doesn't, then the company will make a lot more profit per unit selling all of its W in the EU and none in the USA.

I'm not at all sure what's happening at any given moment with the USA's tariffs on anything, given the chaos over there. But let's say W is the set of all things relevant to AI data centres. What this means is that all the data centres are now much cheaper to build in Europe than in the USA. Data centres can be put just about anywhere, given they're used over the internet anyway. This means that the companies selling W would have all the demand they want for W in the EU, so they could sell all of their supply of W in the EU, so they could get a higher profit margin on all of it.

I'm not sure how much DC investment money is going to which parts of the world, but I am sure that if all the suppliers stopped shipping to the USA because they could sell as much as they could make everywhere else in the world for more profit (and the same purchaser price after tariffs), I would have heard about it.


> If it is the same handful of companies the US gets their DRAM from, then why would Europeans pay any less?

... because tariffs are paid for by the buyer?

Importing memory from Korea to the US means the importer had to pay a tariff. Importing memory from Korea to Europe means the importer does not have to pay a tariff. The company selling the memory gets exactly the same amount of money in either case.


I'm sure if they didn't keep the prices somewhat similar, you would have a bunch of people in Europe selling RAM to Americans.

> I'm sure if they didn't keep the prices somewhat similar, you would have a bunch of people in Europe selling RAM to Americans.

I was just about to edit my response to the GP to say the same thing. Let's explore this hypothetical situation a bit further.

Suppose there was a DRAM manufacturer named "Acme DRAM" which decided to have a separate pricing schedule for the EU reflecting the lack of insane US tariffs.

Some enterprising entrepreneur in the EU would establish a company in the country having the least US tariffs and resell Acme DRAM to US companies. Surely this would make money hand-over-fist.

Problem is, the US DoJ does not look kindly on this kind of enterprise:

  DOJ also has demonstrated a growing willingness to pursue 
  criminal charges against companies and individuals involved 
  in customs fraud schemes such as the purposeful 
  misclassification of goods, falsifying country-of-origin 
  declarations, and intentionally shipping goods through 
  low-tariff countries. Importers of goods into the U.S. 
  should expect criminal enforcement to accelerate in the 
  coming months and years.[0]
This would then put Acme DRAM in the crosshairs of an already vindictive and erratic US administration, likely not only hammering the entrepreneur (see above) but also bringing tariff ramifications for Acme DRAM.

All of this risk in the pursuit of lower profit margins by definition.

0 - https://natlawreview.com/article/what-every-multinational-co...


> Some enterprising entrepreneur in the EU would establish a company in the country having the least US tariffs and resell Acme DRAM to US companies. Surely this would make money hand-over-fist.

Re-badging is a thing that some companies do actually do, despite what the DoJ says.

> Importers of goods into the U.S. should expect criminal enforcement to accelerate in the coming months and years.

I am absolutely not a lawyer, but wouldn't "importers" be the USA residents, not the EU businesses doing the exporting?


I was thinking this would look more like a bunch of smaller operations selling on eBay or platforms like that.

Huh? It's not the manufacturers paying the tariffs, it's the importer. US tariffs do not affect the margins of the manufacturer.

> Huh? It's not the manufacturers paying the tariffs, it's the importer. US tariffs do not affect the margins of the manufacturer.

This is a case of second-order effects[0].

See this post[1] for details.

0 - https://research.gatech.edu/blind-spot-big-decisions-why-sec...

1 - https://news.ycombinator.com/item?id=46144761


The tariffs are bad, but you're simply wrong here - US tariffs do not affect global prices, it's OpenAI buying up production in advance and cornering the market.

There's a 4-month gap between June and October.

Afraid not. This has not happened across the board to all components.

> Afraid not. This has not happened across the board to all components.

Afraid so. Industry impact from tariffs does not require them to be applied "across the board to all components." See here[0] for more information.

With erratic massive tariff proclamations, counter-tariffs are to be expected. All it takes for companies to inflate prices is to either:

  A) be provably impacted by tariffs
  B) be opportunistic by being "tariff adjacent"
The net result is that, directly or indirectly, the tariffs implemented and/or threatened are a significant contributor to electronic component costs.

0 - https://www.tradecomplianceresourcehub.com/2025/12/03/trump-...


>> Support engineer ran customer query through Claude (trained on our public and internal docs) and it very, very confidently made a bunch of stuff up in the response.

> Yeah, LLMs are not really good about things that can't be done.

From the GP's description, this situation was not a case of "things that can't be done", but instead a statistically generated document exhibiting what should be the expected result:

  It was quite plausible sounding and it would have been 
  great if it worked that way, but it didn't.

The core issue is likely not with the LLM itself. Given sufficient instructions, purposeful agents, and good grounding context, a DAG of these will not produce such consistently wrong results.

There are a lot of devils in the details, and there are few details in the story.


>> We've had automated theorem proving since the 60s.

> By that logic, we've had LLMs since the 60s!

From a bit earlier[0], actually:

  Progressing to the 1950s and 60s

  We saw the development of the first language models.
Were those "large"? I'm sure at the time they were thought to be so.

0 - https://ai-researchstudies.com/history-of-large-language-mod...


> I once upgraded a FreeBSD system from 8 to 12 with a single command.

The command you most likely used is freebsd-update[0]. There are other ways to update FreeBSD versions, but this is a well documented and commonly used one.

> I don’t recall having to reboot — might have needed to.

Updating across major versions requires a reboot. Nothing wrong with that, just clarifying is all.
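
For reference, a typical major-version upgrade sequence looks like this (the release number is illustrative):

  # freebsd-update -r 12.4-RELEASE upgrade
  # freebsd-update install
  # shutdown -r now
  # freebsd-update install
The second freebsd-update install, run after the reboot, completes the userland update.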

> Most people who use containers a lot won’t find a home in FreeBSD, and that’s fine. I hope containers never come to the BSD family.

Strictly speaking, Linux containers are not needed in FreeBSD as jails provide similar functionality (better IMHO, but I am very biased towards FreeBSD). My preferred way to manage jails is with ezjail[1] FWIW.
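
For illustration, a typical ezjail session (the jail name, interface, and address are hypothetical):

  # ezjail-admin install
  # ezjail-admin create web 'em0|192.168.1.50'
  # ezjail-admin start web
  # ezjail-admin console web
Here ezjail-admin install populates the basejail, and the remaining commands create, boot, and enter the "web" jail.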

> But then, most people who use FreeBSD know you don’t need containers to run multiple software stacks on the same OS, regardless of needing multiple runtimes or library versions.

I completely agree!

0 - https://docs.freebsd.org/en/books/handbook/cutting-edge/

1 - https://erdgeist.org/arts/software/ezjail/


Thanks for sharing and clarifying those details :)

And yes, jails are way better, but here we are.


> The comment about using a good SERVER mobo like supermicro is on point --- I managed many supermicro fbsd colo ack servers for almost 15 years and those boards worked well with it.

I completely agree.

Supermicro mobos with server-grade components, combined with aggressive cooling fans/heat sinks, running FreeBSD in an AAA data center resulted in two prod servers having uptimes of over 3,000 days. This included dozens of app/jail/ports updates (pretty much everything other than the kernel).


Back when I was a sysadmin (roughly 2007-2010), the preference of a colleague (RIP AJG...) who ran a lot of things before my time at the org was FreeBSD, and I quickly understood why. We ran Postgres on 6.x as a db for a large Jira instance, while Jira itself ran on Linux, IIRC, because I went with JRockit, which ran circles around any other JVM at the time. Those Postgres boxes had many years of uptime, locked away in a small colo facility, never failed, and outlived the org that got merged and chopped up. FreeBSD was just so snappy, and it just kept going. At the same time I ran ZFS on FreeBSD as our main file store for NFS and whatnot, with snapshots, send/recv replication and all.

And it was all indeed on Supermicro server hardware.

And in parallel, while our routing kit was mostly Cisco, I put a transparent bridging firewall in front of the network running pfSense 1.2 or 1.3. It was one of those embedded boxes running a Via C3/Nehemiah, which had the Via Padlock crypto engine that pfSense supported. Its AES256 performance blew away our Xeons and the crypto accelerator cards in our midrange Cisco ISRs, cards costing more than that C3 box. It had a failsafe Ethernet passthrough for when power went down, and it ran FreeBSD. I've been using pfSense ever since, commercialisation / Netgate aside, force of habit.

And although for some things I lean towards OpenBSD today, FreeBSD delivers, and it has for nearly 20 years for me. And, as they say, it should for you, too.


> uptimes of over 3000+ days

Oof, that sounds scary. I’ve come to view high uptime as dangerous… it’s a sign you haven’t rebooted the thing enough to know what even happens on reboot (will everything come back up? Is the system currently relying on a process that only happens to be running because someone started it manually? Etc)

Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.


>> uptimes of over 3000+ days

> Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.

  the only thing we have to fear is fear itself[0]
Worrying about critical process(es) that were started manually and will not be restarted if a server is rebooted carries the same risk as those same process(es) crashing while the server is operational. Best practice is to leverage the builtin support for "Managing Services in FreeBSD"[1] for deployment-specific critical process(es); a minimal example follows the links below.

Now if there is a rogue person who fires up a daemon[2] manually instead of following the above, then there are bigger problems in the organization than what happens if a server is rebooted.

0 - https://www.gilderlehrman.org/history-resources/spotlight-pr...

1 - https://docs.freebsd.org/en/books/handbook/config/#configtun...

2 - https://docs.freebsd.org/en/books/handbook/basics/#basics-pr...
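
As a minimal example of that builtin support ("mydaemon" is a hypothetical service with an rc(8) script installed):

  # in /etc/rc.conf, so the service starts on every boot:
  mydaemon_enable="YES"
Followed by a one-time "service mydaemon start", rc(8) brings the daemon back up after any reboot instead of relying on someone remembering to start it by hand.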


Depends how they are built. There are many embedded/real-time systems that expect this sort of reliability too, of course.

I worked on systems that were allowed 8 hours of downtime per year, but otherwise would have run forever unless a nuclear bomb went off or power was lost... Tandem. You could pull out CPUs while it was running.

So if we are talking about garbage Windows servers, sure. It's just a question of what is accepted by the customers/users.


> I worked on systems that were allowed 8 hours of downtime per year, but otherwise would have run forever unless a nuclear bomb went off or power was lost... Tandem. You could pull out CPUs while it was running.

Tandem servers were legendary for their reliability. Years ago I knew h/w support engineers who told me stories like yours, recounting being able to pull components (such as CPUs) without affecting system availability.


Yep. I once did some contracting work for a place that had servers with 1200+ day uptimes. People were afraid to reboot anything. There was also tons of turnover.

I still remember AJG vividly to this day. He also once told me he was a FreeBSD contributor.

My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed. I tried Red Hat 9, and it was slow as a snail by comparison. For me, the choice was obvious. Later on I was running FreeBSD on my ThinkPad, and I still remember the days of coding on it using my professor's linear/non-linear optimisation library, sorting out the wlan driver and firmware to use the library wifi, and compiling Mozilla on my way home while the laptop was in my backpack. My personal record: I never messed up a single FreeBSD install, even when I was completely drunk.

Even later, I needed to monitor the CPU and memory usage of our performance/latency critical code. The POSIX API worked out of the box on FreeBSD and Solaris exactly as documented. Linux? Nope. I had to resort to parsing /proc myself, and what a mess it was. The structure was inconsistent, and even within the same kernel minor version the behaviour could change. Sometimes a process's CPU time included all its threads, and sometimes it didn't.

To this day, I still tell people that FreeBSD (and the other BSDs) feels like a proper operating system, and GNU/Linux feels like a toy.


> My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed.

Wow, this brings back some memories. I remember being on a gig which mandated locked-down Windows laptops, but VMWare was authorized.

So I fired up FreeBSD inside VMWare running X with fluxbox[0] as the window manager. Even with multiple rxvt terminals and Firefox running, the memory used by VMWare was less than that of MS Word with a single empty document!

0 - https://fluxbox.org/


All hail the mighty Wombats!

The "completely drunk" comment made me chuckle, too familiar... poor choices, but good times!

This is more about OpenBSD, but worth mentioning that nicm of tmux fame also worked with us in the same little office, in a strange little town.

AJG also made some contributions to Postgres, and wrote a beautiful, full-featured web editor for BIND DNS records, which, sadly, faded along with him and was eventually lost to time along with his domain, tcpd.net, which has since expired and been taken over.


> This predates that. It's written in Ada.

Does it count if, combining the two, one could infer a musical scale[0] such as:

  Ada (ay-dah), Scala (ska-lah)
Bonus points for Latin aficionados:

  The word "scale" originates from the Latin scala,
  which literally means "ladder".
:-)

0 - https://en.wikipedia.org/wiki/Scale_(music)


Every Good Boy…

Crazy how these two disciplines are so intertwined.


> Every Good Boy…

  Do Re Mi Fa So La Ti Do, So Do ... compile please!
:-D

EDIT:

Seriously, though, what both music and fundamentally sound programming languages have in common is math. Elegantly defined versions of both are beautiful expressions of thought.


>> Every significant language became multi-paradigm these days, but you can do it intentionally, like Scala, or you can do it badly.

> Python is multi paradigm, but does several things really well that other ecosystems do not.

Both Perl and Ruby can be, and often are, used instead of Python to great success for similar concerns. IOW, the three are often Liskov substitutable[0].

> Javascript as well.

You're kidding, right?

> What claim to fame does Scala have in this regard ...

Scala supports declarative, generative, imperative, meta, and object-oriented paradigms, all of which are supported on at least (but not limited to) the JVM and JavaScript runtimes. A small sketch follows below.

These capabilities transcend libraries (such as Akka) and/or frameworks (such as Spark).

0 - https://en.wikipedia.org/wiki/Liskov_substitution_principle
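
A tiny Scala 3 sketch of that paradigm mix (illustrative only; all names are hypothetical):

  trait Greeter:                                // object-oriented: subtyping + dispatch
    def greet(name: String): String

  class Polite extends Greeter:
    def greet(name: String) = s"Hello, $name"

  inline def twice(inline s: String): String =  // meta: guaranteed compile-time inlining
    s + s

  @main def demo(): Unit =
    val names = List("Ada", "Scala")            // declarative/functional collection pipeline
    names.map(Polite().greet).foreach(println)
    println(twice("la "))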

