Hacker News | tyre's comments

> Discord seems to have intentionally softened its age-verification steps so it can tell regulators, “we’re doing something to protect children,” while still leaving enough wiggle room that technically savvy users can work around it.

...source?

I sincerely doubt that Discord's lawyers advocated for age verification that was hackable by tech-savvy users.

It seems more likely that they are trying to balance two things:

1. Age verification requirements

2. Not storing or sending photos of people's (children's) faces

Both of these are very important, legally, to protect the company. It is highly unlikely that anyone in Discord's leadership, let alone compliance, is advocating for backdoors (at least for us.)


Usually in cases like this, there is no source; there can’t be. Long, long ago, long enough to be past the statute of limitations, I was involved in a similar regulatory compliance situation. We specifically communicated in such a way that “actual effectiveness” was never talked about, and we set that up in a single verbal-only, unrecorded meeting between the team and one of the lawyers.

Point is, these kinds of schemes exist: internal communication is deliberately hobbled so the company can maliciously comply with requirements while staying completely in the clear as far as any actual recorded evidence goes. And there’s always at least one person piping up with a naïve “source?” as if people would keep recorded evidence of their criminal conspiracies.


Do you have any reading on it being a stunt? That seems like a huge gamble. You’re basically inviting in competitors and pissing off your supply (content creators).

If they view you as unstable, unreliable, or adversely motivated, they will look for alternatives to at minimum diversify. It’s their livelihood.


I don’t know for sure but it’s been implied that it was an intentional action to garner public outrage at the banks who wanted to stop processing their transactions.

Or they view it as a differentiator and focus on a different segment. As I understand it, this is what Anthropic is doing:

1. Focus on businesses and developers

2. Make money on productivity and API platform

Enterprises are particularly sensitive about their data being farmed (e.g., paid Google accounts don’t have their emails used for ads).

Keeping that trust is not just a differentiator; it is existentially important to Anthropic.


I just don't believe that is likely.

I think that, despite Anthropic's present statements, they will move to ads if ads prove successful for ChatGPT or Gemini.

It would be viewed as "leaving money on the table" by their board and shareholders if they didn't.


This is false and not how things work at the enterprise level. Trust is important, and it is not lost by showing ads in the free tier.

Trust is lost in other ways.


It’s difficult to believe that they’ll keep privacy guarantees. Some of the most valuable types of targeting are lookalike audiences or following up from other ads elsewhere.

How would OAI allow them to target without access to de-anonymized data?

Buyers will want to exclude existing customers, which requires the same.

The product managers will have explicit KPIs tied to conversion. At some point, like at Google, this will break. It has to, or OAI can’t grow into its current valuation, let alone any future one.


As a neutral observation: it’s remarkable how quickly we as humans adjust expectations.

Imagine five years ago saying that you could have a general-purpose AI write a C compiler that can handle the Linux kernel, by itself, from scratch, for $20k, by writing a simple English prompt.

That would have been completely unbelievable! Absurd! No one would take it seriously.

And now look at where we are.


Now consider how much existing C compiler source code it was trained on, and it still managed to output a worse result.

Proof of just how lossy the compression is.

> a simple English prompt

And that’s where my suspicion stems from.

An expert-level programmer producing an equivalent original piece of work wouldn’t be able to do this without all the context. By that I mean all the shared insights, discussion, and design that happened when making the compiler.

So to do this without any of that context is likely just very elaborate copy pasta.


> Imagine five years ago saying that you could have a general-purpose AI write a C compiler that can handle the Linux kernel, by itself, from scratch, for $20k, by writing a simple English prompt.

You’re very conveniently ignoring the billions in training and that it has practically the whole internet as input.


Indeed, it's the Overton window that has moved. Which is why I secretly think the pro-AI side is more right than the anti-AI side. Makes me sad.

Wasn't there a fair amount of human intervention in the AI agents? My understanding is that the author didn't just write "make me a C compiler in Rust" but had to intervene at several points, even if he didn't touch the code directly.

You're right. It's been pretty incredible. It's also frustrating as hell though when people extrapolate from this progress

Just because we're here doesn't mean we're getting to AGI or software developers begging for jobs at Starbucks


Sure, then make your prediction. It’s always easy to hand-wave and dismiss other people’s predictions. But make yours: what do you think LLMs can do in 2 years?

You're asking me to do the thing I just said was frustrating, haha. I have no idea. It's a new technology and we have nothing to draw from to make predictions. But for the sake of fun...

New code generation / modification: I think we're hitting a point of diminishing returns, and they're not going to improve much here.

The limitation is fundamentally that they can only be as good as the detail in the specs given, or the test harnesses provided to them. Any detail left out, they're going to make up, and hopefully it's what you want (often it's not!). If you make the specs detailed enough that there's no misunderstanding possible, you've just written code, which is what we already do today.

Code optimization: I think they'll get quite a bit better. If you give them GCC, it's probable they'll be able to improve upon it.


> If you make the specs detailed enough that there's no misunderstanding possible, you've just written code, which is what we already do today

This was my opinion for a very long time. Having built a few applications from scratch using AI, though, nowadays I think: sometimes not everything needs to be spelled out. Like in math papers, some details can be left to the ~~reader~~ LLM and it'll be fine.

I mean, in many cases it doesn't really matter what exactly the code looks like, as long as it ends up doing the right thing. For a given Turing machine, the equivalence class of equivalent implementations is infinite. If a short spec written in English leads the LLM to identify the correct equivalence class, that's all we need and, in fact, a very impressive compression result.


Sometimes, yeah. I don't think we're disagreeing

What I'd also add:

Because of the unspecified behaviour, you're always going to need someone technical who understands the output to verify it. Tests aren't enough.

I'm not even sure if this is a net productivity benefit. I think it is? In some cases it's a clear win... but definitely not always. You're reducing time spent coding and now putting extra time into spec writing + review + verification.


> Sometimes, yeah. I don't think we're disagreeing

I would disagree. Formalism and precision have a critical role to play, one which is often underestimated, even more so with the advent of LLMs. The fuzziness of natural languages is both a strength and a weakness. We have adopted precise but unnatural languages (math/C/C++) for describing machine models of the physical world or of the computing world. Such precision was a real human breakthrough which is often overlooked in these debates.


Hmm. It’s not clear what specific task it can’t handle. Can you come up with a concrete example?

Are you saying you've never had them fail at a task?

I wanted to refactor a bunch of tests in a TypeScript project the other day into a format similar to the table-driven tests that are common in Golang, but seemingly not so much in TypeScript. Vitest has specific syntax affordances for it, though.

It utterly failed at the task. Tried many times with increasing specificity in my prompt, did one myself and used it as an example. I ended up giving up and just doing it manually
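(For context, the shape I was going for was Vitest's `it.each` table syntax, roughly like this sketch; the `add` function and cases here are just illustrative, not the actual project code:)

```typescript
import { describe, expect, it } from 'vitest'

// Illustrative function under test -- stands in for the real code
function add(a: number, b: number): number {
  return a + b
}

describe('add', () => {
  // Vitest's `it.each` takes a table of cases, similar to Go's table-driven tests
  it.each([
    { a: 1, b: 2, expected: 3 },
    { a: -1, b: 1, expected: 0 },
    { a: 0, b: 0, expected: 0 },
  ])('add($a, $b) returns $expected', ({ a, b, expected }) => {
    expect(add(a, b)).toBe(expected)
  })
})
```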


I see. Did you use Claude Code? With access to compiling and running?

Codex on high, yeah it had access to compiling/running

thanks for the data point

Something that looks and sounds impressive but in the end not of much substance.

This will be true for the next 2 years, 4 years, the next decade, a few decades, for as long as the state-of-the-art ML paradigm remains language models.


I totally agree, but I think a lot of the push-back is that this is presented as better than it actually is.

These are different technologies with different rates of demonstrated growth. They have very little to do with each other.

Well let's check again in two years then.

Having used Heroku at multiple startups during the 2012–2015 years, I can say this is not correct.

With Heroku you could `git push heroku master` and it would do everything else from there. The UX was nice, but that was not the reason people chose it. It was so easy compared to running on EC2 instances with Salt or whatever. For simple projects, it was incredible.


That's literally the UX I'm talking about, and that's what other companies copied too. To be clear, I'm not (just) talking about how heroku.com looks and works; I'm talking about the entire user experience, including git push to deploy, so I believe you are agreeing with me here. That is why I said a VPS with Dokploy or Coolify and so on has the same UX, both on the command line, with git-push deploys supported, and (now, at least) a vastly superior website user experience, akin to Vercel.

Sorry, are you saying that engineers at Anthropic who work on coding models every day hadn’t thought of multiple of them working together until someone else suggested it?

I remember having conversations about this when the first ChatGPT launched and I don’t work at an AI company.


Claude Code already has subagent support, mostly because you have to do very aggressive context-window management with Claude or it gets distracted.

Looking at the underlying study, this isn’t evidence of bias. It’s evidence of correlation between Republicans and negative sentiment.

If you look at the sentiment for public figures given, the bottom one is, for example, Brett Kavanaugh. Well, he was credibly accused of sexual assault during his confirmation hearings, which was a huge deal at the time. Someone with that on their record will probably be read as negative, but, I mean, not the editors’ fault!


The accusations weren’t particularly credible and similar slander campaigns against people like Joe Biden aren’t nearly as prominent.

Even notorious dictators like Mao Zedong get treated with kid gloves as long as they’re on the left: https://www.tracingwoodgrains.com/p/how-wikipedia-whitewashe...


Kid gloves? The cited text literally says:

> His policies resulted in the deaths of tens of millions of people in China during his reign, mainly due to starvation, but also through persecution, prison labour in laogai, and mass executions

What's "kid gloves" about that?

Let's contrast with the farthest thing from a left-wing dictator we can find, the quintessential right-wing one, Adolf Hitler. Here's the intro to his Wikipedia page:

> Adolf Hitler[a] (20 April 1889 – 30 April 1945) was an Austrian-born German politician who was the dictator of Germany during the Nazi era, which lasted from 1933 until his suicide in 1945. He rose to power as the leader of the Nazi Party,[b] becoming the chancellor of Germany in 1933 and then taking the title of Führer und Reichskanzler in 1934.[c] Germany's invasion of Poland on 1 September 1939 under his leadership marked the outbreak of the Second World War. Throughout the ensuing conflict, Hitler was closely involved in the direction of German military operations as well as the perpetration of the Holocaust, the genocide of about six million Jews and millions of other victims.

Note how the atrocities come last, same as with Mao.


Please actually read the link I shared before responding. For your convenience, I’ve shared it again.

https://www.tracingwoodgrains.com/p/how-wikipedia-whitewashe...


The merger most likely happened now because they have to do it before the IPO. After the IPO, there’s a whole process to force independent evaluation and negotiation between the two boards / executives, which would be an absolute dumpster fire where Musk controls both.

When they’re both private, fine, whatever.


The first thing a public SpaceX would want to do is sell off all the non-SpaceX crap.

A public SpaceX will still be run by Musk. A public SpaceX would have to sell assets like X at a huge loss given its debt load, which would also take a propaganda machine out of Musk’s hands.

They’re stuck with those assets.

