My own observation from using the OpenAI API is that it is massively rate limited due to enormous demand. So while the public hype may have lessened, the actual demand for their core product is so large that it's problematic for users.
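For what it's worth, the standard client-side workaround is retrying on HTTP 429 with backoff. A minimal sketch, assuming the plain REST endpoint and the usual Retry-After header; the function name and retry policy are my own, not anything official:

```python
import time
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def chat_with_backoff(api_key, payload, max_retries=5):
    """POST to the chat endpoint, backing off whenever we get rate limited."""
    headers = {"Authorization": f"Bearer {api_key}"}
    for attempt in range(max_retries):
        resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
        if resp.status_code != 429:  # 429 = rate limited
            resp.raise_for_status()
            return resp.json()
        # Honor Retry-After if the server sends it, else back off exponentially.
        time.sleep(float(resp.headers.get("retry-after", 2 ** attempt)))
    raise RuntimeError("still rate limited after all retries")
```

It doesn't fix the underlying capacity problem, but it turns hard failures into latency.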
I've generally been satisfied with Anker; though the stuff is made in China, the chargers and batteries have been good. Monoprice used to be good, but I haven't used anything from them in a while.
Yeah man, they have teams on standby to adjust the model whenever a random unknown author posts something on obscure pre-print servers. Then they spend hundreds of thousands of compute $ to improve the model on the one metric the paper attacks.
> It probably takes seconds to write a simple comment and far longer to read a paper.
Unfortunately that is true, but it doesn't mean you have to.
The paper he linked to doesn't rebut OP; it shows that prompting GPT-4 to provide reasoning makes it give better answers. That is a different statement than "GPT-4 is actually reasoning, and can do so consistently on novel problems."
You should probably think about why you believe that making the model output reasoning steps, which lead it to correctly answer questions it couldn't answer before, is not somehow equivalent to reasoning.
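Concretely, the manipulation in this kind of paper is just a prompt change: same question, with or without an instruction to produce intermediate steps. A rough sketch of the two conditions (the question and prompt wording here are my own illustration, not the paper's exact text):

```python
def make_payload(prompt, model="gpt-4"):
    # Builds the request body for a chat completions call.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

QUESTION = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Direct condition: ask for the answer alone.
direct = make_payload(QUESTION + "\nGive only the final answer.")

# Reasoning condition: elicit intermediate steps before the answer.
step_by_step = make_payload(QUESTION + "\nThink step by step, then give the final answer.")
```

The debate is then over what it means if the second condition produces the correct answer ($0.05) more often than the first.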
Are you saying that GPT is not a stochastic parrot, or that GPT is not returning coherent reasoning?
Because if it's the latter, the evidence is rather against you. People like to cherry-pick examples where GPT gets reasoning wrong, but it's getting it right millions of times a day, often enough that people keep using it.
And it's not as if humans don't get reasoning wrong. In fact, the humans who say GPT can't reason are demonstrating that.
I keep being surprised at how a large chunk of HN's demographic seemingly struggles with the simple notion that a black box's interface tells you surprisingly little about its contents.
I'm not saying that GPT-4 is reasoning or not, just that discounting the possibility solely because its interface to the world looks like a stochastic parrot makes no sense to me.
Isn't "reasoning" a functional property though? If from the outside it performs all the functions of reasoning, it doesn't matter what is happening inside of the black box.
Here's a silly example I thought of. We can ask whether a certain bird is capable of "sorting". We can place objects of different sizes in front of the bird, and we observe that the bird can rearrange them in order of increasing size. Does it matter what internal heuristics or processes the bird is using? If it sorts the objects, it is "sorting".
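To push the analogy into code: this is exactly how you'd verify a black-box sorter. You never inspect the internals; you only check the functional property that the output is ordered and contains the same items. A toy sketch, with the built-in sorted standing in for the bird:

```python
from collections import Counter

def behaves_like_sorting(black_box, items):
    """Check only the functional property: ordered output, same multiset of items."""
    out = black_box(items)
    is_ordered = all(a <= b for a, b in zip(out, out[1:]))
    same_items = Counter(out) == Counter(items)
    return is_ordered and same_items

# Any black box that passes this is "sorting", whatever its internals.
print(behaves_like_sorting(sorted, [3, 1, 2]))  # True
```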
To me, it seems perfectly obvious that GPT-4 is reasoning. It's not very good at it and it frequently makes mistakes. But it's also frequently able to make correct logical deductions. To me this is all stupid semantic games and goalpost-moving.
> Isn't "reasoning" a functional property though? If from the outside it performs all the functions of reasoning, it doesn't matter what is happening inside of the black box.
"Stochastic Parrot" is a really tired take and none of the major players, Ilya Sutskever, Andrej Karpathy, etc. believe that's all these models are doing.
With you there. For example, a lot of Chinese labs are listed, but my own experience trying to replicate materials science results from "credible" Chinese labs has led to a lot of disappointment.