Hacker News | n00b101's comments

The trusty laws of thermodynamics strike again


I think you nerds need to stop reading obsolete academic fad papers from 1985. Imagine if your girlfriend were unironically reading 1985 issues of Cosmo to figure out what to wear.

A computer program is a "model" of some thing. For example:

    float m = 1e10f;   // mass (kg)
    float a = 9.8f;    // acceleration due to gravity (m/s^2)
    float F = m * a;   // Newton's second law: F = ma
Another example:

    float paycheque;
    if (isStillEmployed(employee)) {        // employee is still employed
        paycheque = getSalary(employee);
    } else {
        paycheque = 0.00f;
    }


Fashion changes quickly over time, while good models of real-life processes are infrequently supplanted.

For your argument to work, you need to prove that the original article is closer to a 1985 Cosmo article than it is to something like Clayton Christensen's 1995 article on Disruptive Innovation, which remains relevant today (or disprove one of the premises in my comment).


Sometimes there's a glitch and the employee continues to get paid after being laid off.


It's not unreasonable to explore historical eras for fashion inspiration.


Consider yourself lucky.

My public high school in Ontario was supposed to be a "magnet school for the gifted" and instead turned out to be a scam.

The computer class teacher was absent for a year, and the substitute teacher insisted that the keyboard and mouse cords be neatly arranged at the end of each class, as if it were a knitting class. The "coursework" consisted of learning how to type out "business memos" using a word processor.

The school believed that this was an important skill and imagined that we would be writing "memos" on computers and printing them out in the "business world."

I skipped every class I could to hang out with my girlfriend and got out with a 2.0 GPA.

The school in question has since been demolished. The whole scam was an attempt to keep the school from being shut down and demolished for low performance, so they rebranded it as a "magnet school for the gifted."


My secondary school changed headmaster in 1990. The new headmaster declared that "computers were a passing fad" and ended all IT lessons.


Is anyone surprised by this? Do any guys remember women getting higher grades in college?


They get this from the media they watch. What child has actually seen scientists in lab coats? I might have seen that a few times in the clean lab at university, and I'm not even sure about that.


FYI, it would take approximately 99.3 billion years to complete the Hamiltonian circuit of the Rubik's cube’s quarter-turn metric Cayley graph using the GAN 12 Maglev UV Coated 3x3 Rubik's cube.
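
If anyone wants to sanity-check that number, here's a rough back-of-the-envelope version. The only real assumption (mine, not anything measured) is the sustained turning rate; everything else is just the size of the cube group:

    # Back-of-the-envelope check; the turning rate is an assumed figure.
    # A Hamiltonian circuit of the quarter-turn-metric Cayley graph visits
    # every reachable cube state exactly once, i.e. one quarter turn per state.
    states = 43_252_003_274_489_856_000   # number of 3x3x3 positions (~4.33e19)
    turns_per_second = 14                 # assumed sustained speedcubing pace
    seconds_per_year = 365.25 * 24 * 3600
    years = states / turns_per_second / seconds_per_year
    print(f"{years:.3e} years")           # ~9.8e10 years

At roughly 13.8 sustained quarter turns per second you land almost exactly on the 99.3 billion year figure.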


Location: Toronto, ON

Remote: Yes

Willing to relocate: Yes

Technologies: Python, Machine Learning, C/C++, SQL, HTML/CSS/JS, Node.js, Embedded Computing (Arduino, FPGA)

Résumé/CV: https://eclipse-consulting.github.io/cv.pdf

Email: short.gamma@icloud.com

Portfolio: https://eclipse-consulting.github.io/

I am a full-stack software engineer with a background in applied mathematics, high performance computing and real-time systems. Currently working on a side project involving numerical computing and AI models.

I am interested in full-time and/or contract opportunities.


Can anyone explain the layout and formatting of the slides?


Ah, Professor Vardi, a fascinating case study in our department. His devotion to the 'science' in computer science is truly something to behold. It's not every day you see someone try to reconcile Turing machines with the second law of thermodynamics ...

Dr. Vardi's Second Law of Thermodynamics for boolean SAT and SMT (Satisfiability Modulo Theory) solvers is truly a marvel of interdisciplinary ambition. In his framework, computational entropy is said to increase with each transition of the Turing machine, as if bits themselves somehow carry thermodynamic weight. He posits that any algorithm—no matter how deterministic—gradually loses "information purity" as it executes, much like how heat dissipates in a closed system. His real stroke of genius lies in the idea that halting problems are not just undecidable, but thermodynamically unstable. According to Dr. Vardi, attempts to force a Turing machine into solving such problems inevitably lead to an "entropy singularity," where the machine's configuration becomes so probabilistically diffuse that it approaches the heat death of computation. This, he claims, is why brute-force methods become inefficient: they aren’t just computationally expensive, they are thermodynamically costly as well. Of course, there are skeptics who suggest that his theory might just be an elaborate metaphor stretched to breaking point—after all, it’s unclear if bits decay in quite the same way as particles in a particle accelerator.


I have to say that, reading this from a non-expert point of view, I'm left wondering whether this comment is true or just the result of some elaborate ChatGPT prompt.


Did Vardi write about this? I could only find other authors; is it possible you are referring to Yuri Manin instead?

From https://arxiv.org/pdf/1010.2067 "Manin and Marcolli [20] derived similar results in a broader context and studied phase transitions in those systems. Manin [18, 19] also outlined an ambitious program to treat the infinite runtimes one finds in undecidable problems as singularities to be removed through the process of renormalization. In a manner reminiscent of hunting for the proper definition of the “one-element field” F_un, he collected ideas from many different places and considered how they all touch on this central theme. While he mentioned a runtime cutoff as being analogous to an energy cutoff, the renormalizations he presented are uncomputable. In this paper, we take the log of the runtime as being analogous to the energy; the randomness described by Chaitin and Tadaki then arises as the infinite-temperature limit."


No. It's humour.

There is no such thing as the Second Law of Thermodynamics of a Turing Machine.

Unless! You turn the machine off. Then energy input equals zero, it becomes a closed system, and entropy kicks in.


Why do you write like this?


### *Formalization and Implementation*: While the paper lays out a theoretical framework, its practical implementation may face significant challenges. For instance, generating meaningful mathematical conjectures is far more abstract and constrained than tasks like generating text or images. The space of potential theorems is vast, and training an AI system to navigate this space intelligently would require further breakthroughs in both theory and computational techniques.

### *Compression as a Measure of Theorem Usefulness*: The notion that a good theorem compresses provable statements is intriguing but may need more exploration in terms of practical utility. While compression aligns with Occam's Razor and Bayesian learning principles, it's not always clear whether the most "compressed" theorems are the most valuable, especially when considering the depth and complexity of many foundational theorems in mathematics.
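
To make the compression point concrete, here is a minimal toy sketch (my own illustration, not anything from the paper): it scores a hypothetical candidate lemma by how many bytes it saves when used as a shared zlib dictionary over a small corpus of statements, with the general-purpose compressor standing in for a proper description-length measure.

    # Toy proxy for "a good theorem compresses provable statements":
    # rank a candidate lemma by the bytes it saves as a zlib preset
    # dictionary over a corpus of statements, minus its own length.
    # Illustration only; a real system would use proof length or a
    # proper MDL code, not a general-purpose compressor.
    import zlib

    def compressed_size(statements, dictionary=None):
        total = 0
        for s in statements:
            c = zlib.compressobj(zdict=dictionary) if dictionary else zlib.compressobj()
            total += len(c.compress(s.encode()) + c.flush())
        return total

    def compression_gain(lemma, statements):
        baseline = compressed_size(statements)
        with_lemma = compressed_size(statements, dictionary=lemma.encode())
        return baseline - with_lemma - len(lemma.encode())

    corpus = [
        "forall n, n + 0 = n",
        "forall n m, n + m = m + n",
        "forall n m k, (n + m) + k = n + (m + k)",
    ]
    print(compression_gain("forall n m, n + m = m + n", corpus))

The exact score is beside the point; what matters is the interface: candidate theorems get ranked by how much shared structure in the corpus they account for, which is exactly where the "most compressed is not always most valuable" worry shows up.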

### *Human-AI Collaboration*: The paper lightly touches on how this AI mathematician might work alongside humans, but the real power of such a system might lie in human-AI collaboration. A mathematician AI capable of generating insightful conjectures and proofs could dramatically accelerate research, but the interaction between AI and human intuition would be key.

### *Computational and Theoretical Limits*: There are also potential computational limits to the approach. The "compression" and "conjecture-making" frameworks proposed may be too complex to compute at scale, especially when considering the vast space of possible theorems and proofs. Developing approximation methods or heuristics that are effective in real-world applications will likely be necessary.

Here's how we can unpack this paper:

### *System 1 vs. System 2 Thinking*:

   - *System 1* refers to intuitive, fast, and automatic thinking, such as recognizing patterns or generating fluent responses based on past experience. AI systems like GPT-4 excel in this area, as they are trained to predict and generate plausible content based on large datasets (e.g., text completion, language generation).

   - *System 2* refers to deliberate, logical, and slow thinking, often involving reasoning, planning, and making sense of abstract ideas, such as solving a mathematical proof, engaging in formal logic, or synthesizing novel insights.

The claim that AI lacks System 2 abilities suggests that while AI can mimic certain behaviors associated with intelligence, it struggles with tasks that require structured, step-by-step reasoning and deep conceptual understanding.

### "Not so much in terms of mathematical reasoning"

The claim is *partially true*, but it must be put into context:

   - **Progress in AI**: AI has made **tremendous advances** in recent years, and while it may still lack sophisticated mathematical reasoning, there is significant progress in related areas like automated theorem proving (e.g., AI systems built around proof assistants like Lean or Coq). Specialized systems can solve well-defined, formal mathematical problems—though these systems are not general-purpose AI and operate under specific constraints.

   - **Scope of Current Models**: General-purpose models like GPT-4 weren't specifically designed for deep mathematical reasoning. Their training focuses on predicting likely sequences of tokens, not on formal logic or theorem proving. However, with enough specialized training or modularity, they could improve in these domains. We’ve already seen AI systems make progress in proving mathematical theorems with reinforcement learning and imitation learning techniques.

   - **Frontiers of AI**: As AI continues to develop, future systems might incorporate elements of both System 1 and System 2 thinking by combining pattern recognition with symbolic reasoning and logical processing (e.g., systems that integrate neural networks with formal logic solvers or reasoning engines; a toy sketch of this pattern follows below).
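
To show the shape of that hybrid, here is a toy sketch (mine, not from the paper; Z3 is just a stand-in for the "System 2" component): a trivial proposer emits candidate arithmetic identities, and an SMT solver proves or refutes each one.

    # Toy "System 1 proposes / System 2 verifies" loop. The proposer is a
    # hard-coded list standing in for a learned model; Z3 plays the
    # deliberate checker. Requires the z3-solver package.
    from z3 import Ints, ForAll, Not, Solver, unsat

    x, y = Ints("x y")
    candidates = {
        "commutativity of +": ForAll([x, y], x + y == y + x),   # holds
        "commutativity of -": ForAll([x, y], x - y == y - x),   # does not hold
    }

    for name, conjecture in candidates.items():
        s = Solver()
        s.add(Not(conjecture))  # the conjecture is valid iff its negation is unsatisfiable
        verdict = "proved" if s.check() == unsat else "refuted"
        print(f"{name}: {verdict}")

Replacing the hard-coded list with a learned generator is the minimal version of the neural-plus-symbolic combination this bullet describes.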

### Conclusion: AI excels in tasks involving intuitive, pattern-based thinking but struggles with deliberate, goal-oriented reasoning required for deep mathematical work. However, as research evolves—especially in hybrid models that combine deep learning with symbolic reasoning and formal logic—these limitations may become less pronounced.

The future of AI may very well involve systems that are capable of the same level of mathematical reasoning (or better) as "human experts."


Location: Toronto, ON

Remote: Yes

Willing to relocate: Yes

Technologies: Python, Machine Learning, C/C++, SQL, HTML/CSS/JS, Node.js, Embedded Computing (Arduino, FPGA)

Résumé/CV: https://eclipse-consulting.github.io/cv.pdf

Email: shortgamma@icloud.com

Portfolio: https://eclipse-consulting.github.io/

I am a full-stack software engineer with a background in applied mathematics, high performance computing and real-time systems. Currently working on a side project involving numerical computing and AI models.

I am interested in full-time and/or contract opportunities.


FYI the email address you've posted here bounces, at least for me.


Sorry! It's short.gamma@icloud.com

