Hacker News | new | past | comments | ask | show | jobs | submit | cjrd's comments | login

now do one for human-coded incidents.

Eh


It looks like a post about the presentation at the conference. No discussion. Sometimes the first post about a topic doesn't get traction but a later post becomes more popular.



Really enjoyable overall.

As a noob, I did feel like I spent the majority of my time just running around trying to figure out what to do.


Thank you for the feedback... ah yes, we are totally missing a tutorial with good guidance for noobs! We plan to work on that too.


It's great to see a _real_ AI application among all this media noise ;-).

Seriously though, this is wonderful satire. I asked 88x10 and it returned an HTML meta tag.


The two sliders at the top are the best. The most customizable calculator to my knowledge.


That's the big problem? A bug made it into prod at ChatGPT and Google didn't have a good set of safety rails (does anyone, yet?)?


1) The prescription is to rely heavily on ChatGPT. That bug had consequences, and it's not clear why this isn't destined to happen again. We need to be able to opt out of behind-the-scenes updates; as it stands, they effectively require consumers to do full regression testing.

2) Have we ever encountered a Google product with such glaring issues before? There's not necessarily a solution within the current approach.

LLMs, and all the add-ons that account for their inadequacies, are totally fascinating and novel. But I think certain expectations here lead to a dead end, and the fascination will turn into frustration.


Let's check out the paper for actual tech details!

> Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

- OpenAI


I've chosen to re-interpret "Open" as in "open the box to release the AI"/"open Pandora's box"/"unleash".


I've chosen to reinterpret it exactly as the kind of Orwellian 1984'ish double-speak that it is.


Someone needs to hack into them and release the parameters and code. This knowledge is too precious to be kept secret.


Don't worry. The CCP and all kinds of malicious state actors already have a copy.


Very open! :)


At least they opened up the product. It's available to anyone paying $20 per month, and soon via API. Historically, most products of this kind were aimed squarely at large B2B customers. They announced partnerships with Duolingo, JPMorgan and a few others, but still keep their B2C product.

Not defending their actions, but it's not that common for valuable new products to be directly available to retail users.


This might be a wild conspiracy theory, but what if OpenAI has discovered a way to make these LLMs a lot cheaper than they were? The Transformer hype started with the invention of self-attention; perhaps they have discovered something that beats it as hard as GPTs beat Markov chains?

They cannot disclose anything, since it would make it apparent that GPT-4 cannot have a parameter count that low, or that the gradients would have vanished in a network that deep, and so on.
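The "gradients would have vanished" point is the standard vanishing-gradient argument, and it can be sketched with a toy backprop loop (all sizes and gains below are made up for illustration, nothing here reflects GPT-4's actual configuration): backpropagation multiplies the gradient by one Jacobian per layer, so per-layer gains below 1 shrink the gradient exponentially with depth.

```python
import numpy as np

# Toy sketch of vanishing gradients (hypothetical sizes/gains, not GPT-4's).
# Backprop multiplies the gradient by one Jacobian per layer; if the typical
# gain of each Jacobian is below 1, the gradient norm decays exponentially
# with depth and the early layers effectively stop learning.
rng = np.random.default_rng(0)
width, depth, gain = 8, 100, 0.2  # gain < 1 -> vanishing

grad = np.ones(width)
for _ in range(depth):
    jacobian = gain * rng.standard_normal((width, width)) / np.sqrt(width)
    grad = jacobian.T @ grad  # one backprop step through a linear layer

print(f"gradient norm after {depth} layers: {np.linalg.norm(grad):.3e}")
```

With a per-layer gain around 0.2, the norm collapses to something astronomically small after 100 layers, which is why very deep networks rely on residual connections and careful initialization; conversely, a model that trains at all puts constraints on how deep and narrow it could plausibly be.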

They obviously don't want any competition, but in their recent write-up on "mitigating disinformation risks" they propose banning non-governmental consumers from having GPUs at all (as if a regular Joe could just run 100,000 A100s in his garage). So perhaps the bar for inference and training is a lot lower than we have thought and assumed?

Just a wild guess...


> Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

Thanks OpenAI


This is all predicated on existing conditions, where AI-written text hasn't influenced the way that humans write. As the years pass and these tools become a common way to at least "spot check" your own writing, I imagine that we will all begin to write in styles that are increasingly similar to AI-written text.


First, is productivity really the issue here? It makes for a great sound bite, but I imagine we've all spent a lot of time and effort working really, really hard...on the wrong thing.

Second, for large companies that want to weather the "impending recession," how will working harder accomplish this? What specific results will it yield? More product launches/improvements? Happier customers because of those launches (heh - when was the last time that happened for these companies), translating into more revenue?

What I would love to see are execs that say something like "We really want to focus on listening more to our customers and improving our relationship with them. While others are shouting 'build! build! build!', we're saying 'listen, build, repeat.' Here's some specific ways we are going to do this: ..."

Then, sure, turn up the heat internally around this mission. Great - a rally cry around an objective. But right now, the rally cry is the rally cry is the rally cry. Work hard to work harder so that we work harder, and oh yeah, we'll fire people who don't because they're lazy and not 1337 enough to be here. You know, because recession.

