Are you also a professional astronomer who can accurately vet his claims and research?


One does not require the other.


My own observation while using the OpenAI API is that it is massively rate limited due to the enormous demand. So while the public hype may have lessened, the actual demand for their core product is so large that it's problematic for users.
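
For what it's worth, the usual client-side workaround is retrying with exponential backoff. A minimal sketch in Python, with RateLimitError standing in for whatever 429 error your SDK actually raises:

    import random
    import time

    class RateLimitError(Exception):
        """Stand-in for your SDK's rate-limit (HTTP 429) exception."""

    def with_backoff(call, max_retries=5):
        # Retry a rate-limited call, sleeping 1s, 2s, 4s, ... plus jitter
        # so many clients don't all retry at the same instant.
        for attempt in range(max_retries):
            try:
                return call()
            except RateLimitError:
                time.sleep(2 ** attempt + random.random())
        raise RuntimeError(f"still rate limited after {max_retries} retries")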


The other take here would be that apparently it’s too hard (or too expensive) to scale up to demand.


Generally I have been satisfied with Anker; though that stuff is made in China, the chargers and batteries have been good. Monoprice used to be good, but I haven't used anything from them in a while.


Yeah man, they have teams on standby to adjust the model whenever a random unknown author posts something on an obscure pre-print server. Then they spend hundreds of thousands of dollars of compute to improve the model on the one metric the paper attacks.


Have you tried a similar question with different parameters?

It's pretty easy if you assume people are checking the exact same quote.


>You had time to respond though, what a silly (and elitist rebuttal).

What a dumb take. It probably takes seconds to write a simple comment and far longer to read a paper.

>You argue by authority that yes, it actually does reason, which is a far (far) bolder claim than the one OP is making.

He linked a paper you can read yourself. How is this arguing from authority?


The paper makes detailed and reasoned arguments, and the GP quickly dismisses it without offering any.


> It probably takes seconds to write a simple comment and far longer to read a paper.

Unfortunately that is true, but it doesn't mean you have to.

The paper he linked to doesn't rebut OP; it shows that prompting GPT-4 to provide reasoning makes it produce better answers. That is a different statement than "GPT-4 is actually reasoning, and can do so consistently on novel problems."


You should probably think about why you believe that making the model output reasoning steps, which lead it to correctly answer questions it couldn't answer before, is not somehow equivalent to reasoning.
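
To make the distinction concrete, the intervention being debated is roughly this. A minimal sketch, where `complete` is a hypothetical stand-in for any LLM call, not a specific SDK function:

    # `complete(prompt) -> str` is a hypothetical stand-in for any LLM call.
    def direct_answer(question, complete):
        return complete(f"Q: {question}\nA:")

    def answer_with_reasoning(question, complete):
        # The only change is asking for intermediate steps before the answer;
        # the question is whether producing those steps counts as reasoning.
        return complete(f"Q: {question}\nLet's think step by step, then answer.\nA:")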


The probability that a Stochastic Parrot returns coherent reasoning seems vanishingly small.


Are you saying that GPT is not a stochastic parrot, or that GPT is not returning coherent reasoning?

Because if it's the latter, the evidence is rather against you. People seem to like to cherry-pick examples of where GPT gets reasoning wrong, but it's getting it right enough millions of times a day that people keep using it.

And it's not as if humans don't get reasoning wrong. In fact the humans who say GPT can't reason are demonstrating that.


Why do you say that? You don't think stochastic pattern matching can feature reasoning as an emergent property? I do.

A stochastic parrot doesn't just mimic things totally randomly. It reinforces what it's seen.


I keep getting surprised at how a large chunk of HN's demographic seemingly struggles with the simple notion that a black box's interface tells you surprisingly little about its contents.

I'm not saying that GPT-4 is reasoning or not, just that discounting the possibility solely based on it interfacing to the world via a stochastic parrot makes no sense to me.


Isn't "reasoning" a functional property though? If from the outside it performs all the functions of reasoning, it doesn't matter what is happening inside of the black box.

Here's a silly example I thought of. We can ask whether a certain bird is capable of "sorting". We can place objects of different sizes in front of the bird, and we observe that the bird can rearrange them in order of increasing size. Does it matter what internal heuristics or processes the bird is using? If it sorts the objects, it is "sorting".
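
In code terms that's just a black-box property test: judge the behavior, never inspect the internals. A minimal sketch in Python:

    import random

    def behaves_like_sorting(black_box, trials=100):
        """Return True if the black box sorts, judged purely by input/output."""
        for _ in range(trials):
            xs = [random.randint(0, 1000) for _ in range(random.randint(0, 20))]
            if black_box(list(xs)) != sorted(xs):
                return False
        return True

    # Whatever heuristic the "bird" uses internally is irrelevant to the verdict.
    print(behaves_like_sorting(sorted))  # True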

To me, it seems perfectly obvious that GPT-4 is reasoning. It's not very good at it and it frequently makes mistakes. But it's also frequently able to make correct logical deductions. To me this is all stupid semantic games and goalpost-moving.


> Isn't "reasoning" a functional property though? If from the outside it performs all the functions of reasoning, it doesn't matter what is happening inside of the black box.

Yes, that's my point exactly.


Replace "forb" and "tworby". How common is the underlying pattern? I would expect quite common. So if the model can do some replacement, it could solve the problem just by swapping in the right words.
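
A minimal sketch of that replacement idea (the word pairs here are made up for illustration):

    # Map nonsense tokens onto familiar words, so a "novel" problem collapses
    # into a pattern the model has seen many times. Pairs are hypothetical.
    SUBSTITUTIONS = {"forb": "dog", "tworby": "cat"}

    def normalize(problem: str) -> str:
        for novel, familiar in SUBSTITUTIONS.items():
            problem = problem.replace(novel, familiar)
        return problem

    print(normalize("Every forb chases a tworby."))  # Every dog chases a cat.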


"Stochastic Parrot" is a really tired take and none of the major players, Ilya Sutskever, Andrej Karpathy, etc. believe that's all these models are doing.


Quartz = SiO2 --> silicon and oxygen. QED.


With you there. A lot of Chinese labs are listed, for example, but my own experience trying to replicate materials science results from "credible" Chinese labs has led to a lot of disappointment.


The proposed mechanism for superconductivity in LK-99 is pretty different, so that may be confounding your interpretation of those graphs.

