Hacker News | naasking's comments

Reich's lab actually found evidence of meaningful genetic changes that improved intelligence over the past 10,000 years, but not so much prior to that:

https://www.biorxiv.org/content/10.1101/2024.09.14.613021v1

The advent of agriculture and civilization had many powerful selection effects.


> You don't get that many iterations in the real world though

True, for iterations between the same two players, but humans evolved the ability to communicate and so can share the results of past interactions through a network with other agents, aka a reputation. Thus any interaction with a new person doesn't start from a neutral prior.
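A minimal sketch of that idea, with all names invented for illustration: if agents gossip about the outcomes of their past interactions, a first meeting with a stranger starts from a learned prior rather than a neutral one.

```python
# Sketch: reputation as a non-neutral prior in repeated games.
# The class and method names are illustrative, not from any real library.
from collections import defaultdict

class ReputationNetwork:
    """Agents share outcomes of past interactions (gossip), so a first
    meeting starts from a learned prior rather than 0.5."""
    def __init__(self):
        self.history = defaultdict(list)  # agent -> list of cooperation outcomes

    def report(self, agent, cooperated):
        self.history[agent].append(cooperated)

    def prior(self, agent):
        # Laplace-smoothed estimate of the agent's cooperation probability
        outcomes = self.history[agent]
        return (sum(outcomes) + 1) / (len(outcomes) + 2)

net = ReputationNetwork()
for _ in range(8):
    net.report("alice", True)   # alice cooperated in others' games
net.report("bob", False)        # bob defected once

print(net.prior("alice"))  # 0.9
print(net.prior("bob"))    # ~0.33
print(net.prior("carol"))  # 0.5: a truly unknown agent gets the neutral prior
```

The point of the smoothing is exactly the comment's claim: only an agent nobody has reported on starts at the neutral 0.5.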


> Algorithms do not possess ethics nor morality[0] and therefore cannot engage in Machiavellianism[1].

Conjecture. There are plenty of ethical frameworks grounded in pure logic (Kant), or game theory (morality as evolved co-operation). These are both amenable to algorithmic implementations.
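To make "morality as evolved co-operation" concrete: the classic Axelrod result is that a simple reciprocal strategy is directly implementable as an algorithm. A minimal sketch with standard prisoner's-dilemma payoffs (T=5, R=3, P=1, S=0):

```python
# Sketch: "morality as evolved cooperation" as an algorithm, via
# tit-for-tat in the iterated prisoner's dilemma (Axelrod).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return 'D'

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))   # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect)) # (9, 14): exploited only in round one
```

Nothing here requires the program to "possess" anything; the cooperative norm just is the decision procedure.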


> There are plenty of ethical frameworks grounded in pure logic (Kant), or game theory (morality as evolved co-operation). These are both amenable to algorithmic implementations.

Algorithm implementations are programmatic manifestations of mathematical models and, as such, are not what they model by definition.

To wit, NOAA hurricane models[0] are obviously not the hurricanes which they model.

0 - https://www.aoml.noaa.gov/hurricane-modeling-prediction/


> Algorithm implementations are programmatic manifestations of mathematical models and, as such, are not what they model by definition.

This is false for constructs of information, i.e., a "manifested model" of a sorted list is a sorted list, and a "manifested model" of a sorting algorithm is a sorting algorithm.

To wit, an accurate algorithmic model of moral reasoning is moral reasoning, since moral reasoning, being a decision procedure, is an information process.
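A trivial illustration of the same point, using nothing beyond standard Python: an executable "model" of the sorting specification (output ordered, same elements) is itself a sorting algorithm, not a mere representation of one.

```python
# For informational constructs, implementing the model yields the thing
# itself: an executable model of "sorting" is a sorting algorithm.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        # walk back to the insertion point that preserves order
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

print(insertion_sort([3, 1, 2, 1]))  # [1, 1, 2, 3]
```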


> Algorithm implementations are programmatic manifestations of mathematical models and, as such, are not what they model by definition.

Rofl. Someone hasn't discovered Functionalism or the identity of indiscernibles. Must be hard laboring under such a poverty of reasoning.


I don't think old prompts would become useless. A few studies have shown that prompt crafting is important because LLMs often misidentify the user's intent. Presumably an AI that is learning continuously will simply get better at inferring intent, so any prompts that were effective before will continue to be effective; it will simply grow its ability to infer intent from a larger class of prompts.

> Atmospheric entry (if that's what you mean) is irrelevant.

I think the OP meant that Earth's magnetic field and atmosphere shield any terrestrial matter far more than a bare asteroid that has no such protections, so it seems implausible at first glance that these things would develop or survive in open space rather than here.


> it seems implausible at first glance that these things would develop or survive in open space rather than here.

I don't think "organics developed in the vacuum of space" is implied. Survived? Well, we now have samples confirming that, if I'm understanding the basis for the discussion (the article).


We have some organic ‘building block’ compounds confirmed frozen on some asteroids.

But what we don’t have is any examples of them surviving re-entry.

We also have a massive amount of those same compounds already here on the planet.

Causality is… tenuous. But not impossible.


Causality was not the point. The point was to refute the seeding hypothesis, and because they found those molecules, the effort to falsify the hypothesis failed. Now we can move on to the next attempt to refute, which, as you say, might be to study whether molecules can survive conditions of reentry.

Experiments do not tell us that something IS a certain way; only the ways it is not.


The ideal situation for an expert is to prove causality!

It’s nearly impossible, but it is the holy grail!

This experiment was to try to falsify one theory, yes, but as you note that is a very long way from the actual goal - or the level of certainty that the article is trying to imply.

These articles are written due to funding needs, which is why the articles are the way they are - and why the scientists themselves are likely cringing too when they read these articles. At least until the checks (hopefully) arrive.


I was under the impression that the ejection of these compounds demonstrates that organics (blocks) can escape a gravity well, which implies they can likely re-enter another.

> And we're sort of back to square 1.

Specifications are smaller than the full code, just as high level code is smaller than the functionally equivalent assembly. As we ascend the abstraction ladder the amount of reading a human needs to do decreases. I don't think this should really count as "back to square 1".
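A toy sketch of that size asymmetry, with all names invented for illustration: the full specification of "sort" fits in a couple of lines, while a conforming implementation is several times longer. A reader ascending the abstraction ladder only needs to read the spec.

```python
# Sketch: a specification can be far shorter than the code it
# constrains, just as C source is shorter than its assembly.
from collections import Counter

def satisfies_sort_spec(inp, out):
    """The entire spec of 'sort': same multiset of elements, in order."""
    return (Counter(inp) == Counter(out)
            and all(a <= b for a, b in zip(out, out[1:])))

def merge_sort(xs):
    """One of many possible implementations; the reviewer of the spec
    above never needs to read this."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [5, 2, 4, 2]
print(satisfies_sort_spec(data, merge_sort(data)))  # True
```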


That has always been the perceived promise of higher-abstraction software specs: automated code generation from something higher-level, thus making programmers increasingly obsolete.

  binary => hexadecimal instructions
  hexadecimal => assembly language
  assembly => portable, "high-level" languages (C, FORTRAN, COBOL, etc.)
  HLLs => 3GLs (BASIC, C++, Pascal, Java, C#, JavaScript, etc.)
  3GLs => 4GLs/DSLs/RADs and "low-code/no-code"[0]
Among the RADs is Microsoft Visual Basic, which along with WinForms and SQL was supposed to make business programmers nearly obsolete, but instead became a new onramp into programming.

In particular, I'd like to highlight UML, which was supposed to mostly obsolete programming through auto-generated code from object-oriented class diagrams.[1] The promise was that "business domain experts" could model their domain via visual UML tooling, and the codegen would handle it from there. In practice, UML-built applications became maintenance nightmares.

In every one of these examples, the artifact that people made "instead of programming" became the de-facto programming language, needing to be maintained over time, abstracted, updated, consumed behind APIs, etc. -- and programmers had to be called in to manage the mess.

It's interesting that Spec4 can be auto-generated, then used to generate code. My question is - what do you do when you have (a) consumers depending on a stable API, and (b) requests for new features? Maybe hand the job to Claude Code or a human developer with a suite of unit tests to guarantee API compatibility, but at that point we're back to an agent (LLM or human) doing the work of programming, with the Spec4 code as the programming language being updated and maintained.

[0] https://en.wikipedia.org/wiki/Fourth-generation_programming_...

[1] https://news.ycombinator.com/item?id=26934795
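For (a), the usual move is a contract-test suite that pins the public API surface, so whatever agent regenerates the code from the spec can't silently break consumers. A hedged sketch; `parse_order` is an invented example function, not anything from the article:

```python
# Sketch: contract tests that encode the stable API surface, not
# internals. Regenerated implementations must keep these passing.
def parse_order(raw: str) -> dict:
    """Current implementation; free to be regenerated from the spec."""
    sku, qty = raw.split(":")
    return {"sku": sku, "qty": int(qty)}

def contract_shape_is_stable():
    assert parse_order("A42:3") == {"sku": "A42", "qty": 3}

def contract_rejects_malformed_input():
    try:
        parse_order("no-separator")
    except ValueError:
        return
    raise AssertionError("malformed input must raise ValueError")

contract_shape_is_stable()
contract_rejects_malformed_input()
print("API contract holds")
```

Of course, as the comment notes, once those contracts plus the spec are what gets maintained over time, they have become the de-facto programming language.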


You can easily run a quant of this on a DGX Spark though. Seems like a small investment if it meaningfully improves Lean productivity.

Is it though?

Most people I know who use agents for building software and have tried switching to local models end up switching back to Claude/Codex every single time.

It's just not worth it. The models are that much better and continue to get released / improve.

And it's much cheaper unless you're doing like 24/7 stuff.

Even on the $200/mo plan, that's cheaper than buying a $3k DGX or $5k M4 Max with enough RAM.

Not to mention you can no longer use your laptop as a laptop, since the power draw drains it - you'd need to host separately and connect.


A single DGX Spark can service a whole department of mathematicians (or programmers), and you can cluster up to 4 of them to fit very large models like GLM-5 and quants of Kimi K2.5. This is nearing frontier-level model size.

I understand the value proposition of the frontier cloud models, but we're not as far off from self-hosting as you think, and it's becoming more viable for domain-specific models.


That's great news; I wonder if that will help drive cloud costs down too.

Does a warrant ever expire? How long can they monitor you once the warrant is issued? Do they ever have to notify you or anyone else that you were being monitored and they found no criminal conduct? Don't you see the potential for abuse here?

All of these questions, and more, are answered by examining what happens with phone taps, which historically were treated precisely the same way; further, there was only ever one phone company in a region back then.

All legislative change is interpreted by courts. So to answer your questions:

1. Look to see how the legislation is written for phone taps.

2. Know that this new legislation is changing things; the code is being modified.

3. Now look at judicial decisions, and you will have your answer.

Seeing as you have no idea how other warrants work, or when they expire, you're really just looking for the worst-case scenario, without even attempting to see what would happen, and what has happened, for 100+ years.

Yes?



Of course vibe coding is going to be a headache if you have very particular aesthetic constraints around both the code and UX, and you aren't capable of clearly and explicitly explaining those constraints (which is often hard to do for aesthetics).

There are some good points here to improve harnesses around development and deployment though, like a deployment agent should ask if there is an existing S3 bucket instead of assuming it has to set everything up. Deployment these days is unnecessarily complicated in general, IMO.

