I don't know about Australia, but there's a page here detailing some of the sites that got shut down because of the OSA in the UK: https://onlinesafetyact.co.uk/in_memoriam/
A lot of the arguments I see in this thread are about whether modern mainstream social media are bad for young people. When the debate becomes about that, it's very easy to defend these types of Orwellian laws. It becomes "This is a problem, therefore the solution is good", without questioning the solution itself. I think this type of thinking is demonstrated, or perhaps exploited, very well by this article (I'm not implying the WEF is secretly behind everything, I'm just using this as an example):
The first part of that article is an absolutely scathing, on-point criticism of mainstream social media. I find myself agreeing with everything said, and then, suddenly, seemingly out of nowhere, the article pivots to "therefore we need completely 24/7 mass surveillance of everyone at all times and we need to eradicate freedom of speech". That article is like a perfect microcosm of this entire international shift in internet privacy.
People and their governments seem to agree that modern social media is a problem. The difference is why. The people think it's a problem because it harms people; governments think it's a problem because they don't control it.
I think that the root cause of this shift to mass surveillance is that people in democratic countries still have a 20th-century concept of what authoritarianism looks like. Mass surveillance is like a novel disease that democracies don't yet have any immunity to; that's why you see all these "it's just like buying alcohol" style false equivalences, because an alarming number of people genuinely don't understand the difference between normal surveillance and mass surveillance.
Australia is a Five Eyes country, with carte blanche access to data that the incumbent social media companies freely share with all the acronym deep-state authorities.
Could you elaborate further on how preventing a sizeable proportion of its citizens from communicating through these established spy-nets, causing them to disperse out to unpredictable alternatives they might not be able to control, increases mass surveillance?
That's definitely an interesting argument I haven't seen before.
I suppose it depends on how effective these types of measures actually are, and also on how many adults refuse to identify themselves. I would assume governments are more interested in spying on adults than under-16s, so the adults are probably more relevant here.
I hope you're right, though. Maybe there'll be a renaissance of smaller platforms. Probably not, but I can hope.
This legislation left it entirely up to the service providers to determine implementation, and so far they don't seem particularly motivated to disrupt my usage by asking me to prove my age.
My suspicion is that fairly simple heuristics of age estimation, combined with social graph inspection, are probably enough to completely disrupt the network effects of "social media" for kids, and achieve the stated objectives well enough that I never have to.
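To make that concrete, here's a toy sketch of the kind of heuristic I mean: propagate age estimates through the interaction graph. Everything here (the function, the numbers, the rounds) is invented for illustration; it's not any platform's actual method.

    def estimate_ages(graph, known_ages, rounds=3):
        # graph: {user: set of users they interact with}
        # known_ages: {user: age from strong signals, e.g. a stated DOB}
        est = dict(known_ages)
        for _ in range(rounds):
            nxt = dict(est)
            for user, contacts in graph.items():
                if user in known_ages:
                    continue  # keep strong signals fixed
                ages = [est[c] for c in contacts if c in est]
                if ages:
                    # Unknown users inherit the mean age of their contacts:
                    # teenagers mostly interact with other teenagers.
                    nxt[user] = sum(ages) / len(ages)
            est = nxt
        return est

    graph = {"alice": {"bob", "carol"}, "bob": {"alice"}, "carol": {"alice", "bob"}}
    print(estimate_ages(graph, {"bob": 15, "carol": 14}))  # alice: 14.5

Nothing about this needs an ID check; it only needs the data the platforms already have.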
Maybe it turns out that I'm wrong, but why even risk it? If the true policy goal is extending mass surveillance, why waste so much political capital on such a roundabout approach that might yield nothing, or even set back your existing capabilities?
MyID (myid.gov.au) already exists, and could easily have been mandated, or "recommended", or even offered as a means of age verification now. But it wasn't.
Well, no one is suggesting 24/7 surveillance; we're suggesting banning children from using social media, as it demonstrably has very harmful effects on their education and wellbeing.
It’s not Orwellian. If it were, then not allowing kids to vote or drink before they become adults would be Orwellian.
We are simply banning kids from a harmful activity until they are old enough to decide for themselves. The ban has to be at a social level decided by the democratic process, because there’s a coordination problem here: it’s not a harm that can be remedied at the level of the individual.
The real villains here are the social media companies that have profited from the misery and manipulation of children, to their ultimate harm.
I find it hard to believe anyone would argue in good faith against this ban. In tech circles there are a lot of vested interests that don’t want other governments to protect the children in their countries from harmful products. Shame on them.
You've basically just confirmed what I said at the end: that democracies have no immunity to mass surveillance. 24/7 surveillance may have been an exaggeration, but not by much, really. Age verification, as it exists now, inevitably means mass surveillance, in particular tying real-life identities to political beliefs and porn preferences on a mass, computerised scale. If you're too young to remember the Snowden leaks, I can maybe understand why you'd think mass surveillance is not an inevitable consequence of age verification, but I'm old enough to remember them, so I think it is. The existence and impact of mass surveillance seem to be invisible to you.
> It’s not Orwellian. If it were, then not allowing kids to vote or drink before they become adults would be Orwellian.
To be clear: What do you think you're refuting? I don't think children should be on modern social media. I don't think anyone should be, but especially not children. There are plenty of ways of going about this. This is why I said:
> A lot of the arguments I see in this thread are about whether modern mainstream social media are bad for young people. When the debate becomes about that, it's very easy to defend these types of Orwellian laws. It becomes "This is a problem, therefore the solution is good", without questioning the solution itself.
You then claim that the tech industry, and by extension "tech circles", don't like this because it means they make less money. I'm not sure how forcing companies whose business model is based on surveillance capitalism to do even more surveillance would hurt them, but if it does, it's still not my concern anyway. And conflating random hackers like me with "big tech" seems to have become increasingly common recently.
> It becomes "This is a problem, therefore the solution is good", without questioning the solution itself.
This is a very simplified view. The topic has been disputed for years, and societies have tried to find alternative solutions. But it turns out there is no other solution that works well enough at the moment, hence the nuclear option. And sometimes that is the only option that works anyway.
It should be noted that this is not a first. Social media has already been restricted to various degrees for kids below certain ages in several countries. Australia is just raising the age from the usual 12 or 13 up to 16.
> I find myself agreeing with everything said, and then, suddenly, seemingly out of nowhere, the article pivots to "therefore we need completely 24/7 mass surveillance of everyone at all times and we need to eradicate freedom of speech".
So it's a poor article, so what? These attempts are not new. There are regular political pushes towards stricter regulation and more surveillance. Some work, some don't.
> That article is like a perfect microcosm of this entire international shift in internet privacy.
There is no shift. Those views have always been there, even before the internet. This is a normal part of societies, including democratic ones. There is a constant power struggle between control and liberty in any society, and the balance is always shifting depending on how good or bad certain problems are at that moment.
But one thing that is notably missing here, by the way, is a complete ban on all open media, for everyone, of every age and group. For a government, kids on social media are not a big problem; that will only bite them decades from now. But people who are getting radicalized against the standing order now, today, those are a problem. And the fact that nobody is demanding such a ban is a good sign of a reasonably healthy democracy. Think about the countries where that is not the case...
I believe their point was to illustrate the disconnect between the problem and the solution.
They agree with the problem, and experienced "whiplash" when the solution was described.
> For Government, kids on social media are not a big problem, that will only bite them in the decades to come.
In Australia the kids on social media are a problem for the government, today.
A 16 year old is less than two years away from voting.
Successive governments have laughed at the idea of lowering the voting age to 16 or 17.
The government has very little influence on social media -- this is different to older forms of media / communication.
I think this reflects one of the biggest fallacies behind LLM adoption: the idea that reducing costs for producers improves the state of affairs for consumers too. I've seen someone compare it to the steam engine.
With the steam engine, though, consumers made a trade-off: you pay less, and get (in most cases, I presume) a worse product. With LLMs and other machine learning technologies, maybe there's a trade-off if you're paying for the software (assuming the software is actually cheaper), but otherwise it doesn't exist. It costs you the same amount of money to read an LLM-generated article as to read a real one; your internet bill doesn't go down. Likewise for gratis software. It's just worse, with no benefit.
Hacker News is full of producers, in this sense, who often benefit from cutting corners, and LLMs allow them to cut corners, so obviously there are plenty of evangelists here. I saw someone else in this comment section mention that gamers who are not in the tech industry don't like "AI". That's to be expected; they're not the producers, so they're not the ones who benefit.
"This year, the UK also passed a mandate for age verification—the Online Safety Act—"
No we didn't. That was 2023, and it went into effect in multiple phases, the last of which I believe was July 25th this year.
Also, I can't help but wonder what young people now will think of these laws years later, as adults. In the UK, the OSA tries to prevent 17 year olds from watching porn, even though the age of consent here is 16. How will they remember contradictions like that?
I was incredibly surprised to find that this actually is a computer. Normally when you hear about a "computer" constructed in an unusual medium, it turns out to just be a binary adder or an analogue computer. I've learned to expect disappointment.
I don't think the author is wrong for saying that certain kinds of code should be written carefully. I object to the implication that other code shouldn't.
From TFA: "Write your auto-updater code very carefully. Actually, write any code that has the potential to generate costs carefully." So the focus is on code that "generate[s] costs". I think this is a common delusion programmers have: that some code is inherently unrelated to security (or cost), so they can get lazy with it. I see it like gun safety. You have to always treat a gun like it's loaded, not because it always is (although sometimes it may be loaded when you don't expect it), but because it teaches you to always be careful, so you don't absent-mindedly fall back into bad habits when you handle a loaded one.
Telling people to write code carefully sounds simplistic, but I believe for some people it's genuinely the right advice.
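As an entirely hypothetical sketch of what "careful" looks like for cost-generating code in the auto-updater spirit of TFA: hard caps on retries and a backoff ceiling, so a bad deploy or a dead endpoint can't turn every client into a tight loop hammering the server. The fetch function and the numbers are made up for illustration.

    import time

    MAX_ATTEMPTS = 5         # hard cap: never retry forever
    MAX_BACKOFF_S = 3600.0   # ceiling: at most one request per hour

    def check_for_update(fetch):
        # fetch() returns update metadata, or raises on failure.
        delay = 1.0
        for _ in range(MAX_ATTEMPTS):
            try:
                return fetch()
            except Exception:
                # Exponential backoff with a ceiling, so a failing endpoint
                # doesn't generate unbounded traffic (and bandwidth costs).
                time.sleep(min(delay, MAX_BACKOFF_S))
                delay *= 2
        return None  # give up quietly; retry on the next scheduled check

The point isn't the specific numbers; it's that the worst case is bounded by design instead of by luck.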
Logic in Doom is particularly interesting to me. Apparently you can fit ~64k logic gates in a map (using the method described). From [1]:
"As the DOOM engine was not designed to be an interpreter, there are some constraints on our programs written against it. The biggest one is how large our programs can be. Since each gate uses at least one tag, we can use this as a metric to derive an upper-bound on the size of a program. As the DOOM engine uses 16-bit tags, this means we can have, at most, 65535 gates. This is not a particularly large number. We may be able to implement a very small CPU but this limit will be hit pretty quickly I believe."
The Z80 had ~8,500 transistors. The 8086 had ~29,000 (checking Wikipedia). You could get away with far fewer gates if you used a 1-bit microarchitecture, I'm sure. I think there was a DEC (PDP?) computer that used that trick to achieve a really low transistor count, but I don't remember what it was called.
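As a rough sanity check (assuming somewhere between 2 and 4 transistors per logic gate for those old designs, which is just a guess on my part), both chips would seem to squeeze under the 65,535-gate cap, at least on paper:

    TAG_BUDGET = 65535  # 16-bit tags, at least one tag per gate

    for name, transistors in [("Z80", 8500), ("8086", 29000)]:
        for t_per_gate in (2, 4):
            gates = transistors // t_per_gate
            verdict = "fits" if gates < TAG_BUDGET else "doesn't fit"
            print(f"{name}: ~{gates} gates at {t_per_gate} transistors/gate ({verdict})")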
The real problem is RAM; for this you may as well cheat and modify Doom's code to add a RAM chip, and I/O while you're at it.
You could create a CPU in Doom implementing an architecture for which a C compiler exists, one capable of compiling Doom, and then run Doom itself on the CPU in Doom. For "reasonable" speed you'd have to do more than one simulation step per frame render (in the host Doom). If you ran it for long enough, maybe you could get a full frame of Doom in Doom.
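A wildly hand-wavy estimate of how slow that would be; every number below is an assumption I'm making up, not a measurement:

    HOST_TICRATE = 35            # Doom's internal tic rate (tics/second)
    STEPS_PER_TIC = 10           # assumed gate-propagation steps per host tic
    STEPS_PER_INSTR = 1000       # assumed steps per instruction on a 1-bit CPU
    INSTR_PER_FRAME = 1_000_000  # assumed instructions to render one inner frame

    instr_per_second = HOST_TICRATE * STEPS_PER_TIC / STEPS_PER_INSTR
    seconds = INSTR_PER_FRAME / instr_per_second
    print(f"~{seconds / 86400:.0f} days per inner frame")  # ~33 days

So "long enough" is doing a lot of work in that sentence.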
"Stuff which is somehow limited (stack overflows, arbitrary configuration, etc) is still considered Turing complete, since all "physical" Turing machines are resource limited."
In my opinion, worrying about infinite memory with regard to Turing completeness makes the task of implementing computation much less interesting.
Also, I'm pretty sure CSS only does one generation (or a finite number of them) before stopping anyway.
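For reference, as I understand them, the CSS Turing-completeness demos implement Rule 110, advancing one generation per user interaction. Here's roughly the shape of computation that is, on a bounded row; the bounded edges are the point, since it makes this a finite-state system rather than an unbounded tape:

    RULE = 110

    def step(cells):
        out = []
        for i in range(len(cells)):
            # Out-of-range neighbours read as 0 (bounded, not an infinite tape).
            l = cells[i - 1] if i > 0 else 0
            c = cells[i]
            r = cells[i + 1] if i < len(cells) - 1 else 0
            out.append((RULE >> (l * 4 + c * 2 + r)) & 1)
        return out

    row = [0] * 15 + [1]
    for _ in range(5):  # a fixed, finite number of generations
        print("".join(".#"[c] for c in row))
        row = step(row)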