Solar doesn't work well with 24/7 demand requirements; provisioning enough storage to fully smooth out intermittency drastically raises costs (most battery storage systems cover only 2-4 hours).
Nuclear has extremely onerous regulatory requirements.
Since when are warrants required for footage of people in public? Does a red light camera need a judge's warrant before it snaps a photo of a car running the light?
That’s not what I’m disputing, of course. I’m disputing the grandparent’s assertion that if we (by your stats) simply lock up 1% of the population, violent crime would drop by 60%.
I mean, trivially, using our brains for a nanosecond: what if that 1% of the population is almost always 16-18 years old when they commit those violent crimes? The 16-18 demographic is roughly 4% of the US population (Google). That would mean locking up 1 in 4 high school students for 6-20 of their most formative years, then thrusting them back into society with a “Mission Accomplished” banner hanging behind you.
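For concreteness, here's that back-of-the-envelope as a tiny Python sketch. It uses only the figures already cited above (the hypothetical 1% jailed and the ~4% cohort share); nothing here is new data:

```python
# Back-of-the-envelope for the scenario above: if the jailed 1% of the
# total population were drawn entirely from 16-18-year-olds (~4% of the
# US population), what fraction of that cohort would be locked up?

population_share_jailed = 0.01  # hypothetical 1% of the total population
cohort_share = 0.04             # 16-18-year-olds as a share of the US population

fraction_of_cohort = population_share_jailed / cohort_share
print(f"{fraction_of_cohort:.0%} of the cohort")  # -> 25%, i.e. 1 in 4
```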
Play with the numbers a bit (maybe it’s 1 in 20), but the point stands. Using imprisonment to try to quarantine a demographic that is perceived as irreparably violent is a barbaric, sophomoric idea that has very little evidence of success in the modern era.
Don't jail criminals because maybe they're young; is that your argument? Sounds like something that's already part of sentencing policy: leniency for first-time offenders.
There are two ideas here - locking up actual criminals and locking up people who happen to fit the pattern of a criminal even without committing any crime. You're arguing against the latter, but I don't think anybody was proposing that.
And, I know those shadow libraries are banned because of copyright, but that's just an excuse. If someone pushes as broad an understanding of Freedom as the US does, then copyright should maybe not be the one exception that's OK. People should have the freedom to publish anything, and others should have the freedom to read/play/watch anything. If the US can ban something over a concept as abstract as copyright, why can't the EU ban something over a concept as abstract as `it's all lies and state-sponsored propaganda`?
NOTE: just playing devil's advocate here, to show the hypocrisy of it all...
Those book "bans" are just librarians' decision on what to use finite shelf space to stock. Students are 100% free to bring any of the "banned" books to school and read them. By this logic, when a librarian changes out an older set of YA novels with a newer set, those older novels are being "banned". So to answer your question:
> Cool, so the US students will be able to read school banned books ?
The answer is "whenever they want."
Furthermore, the CDC's calls for retraction don't prohibit anyone from reading the retracted papers.
Sure, librarians wouldn't know how to do their work if they didn't get a list of 'not approved' books from the school boards. /s
It's one thing if you (the librarian) decide not to buy a book for whatever reason; it's something else if it can't be bought or placed on the shelf because it's on some school-provided list.
The same goes for research: if something is never published, or funding for research is stopped because `we know climate change doesn't exist`, then no one can read it, because it's never even created.
But who cares, it's a useless debate...
On the one hand, micro mobility is a great way to reduce emissions, traffic, and parking congestion. It'd really suck to see it become more difficult to get around with electric bikes and scooters.
On the other hand, so many people I know are riding personal electric vehicles capable of going 25+ MPH who don't even know the basics of handling a two-wheeled vehicle. They've never even heard of the term "countersteering". FortNine ran an experiment and found that typical city electric-bicycle and motorcycle commutes have about the same average and peak speeds.
I'd really like it if cheap, accessible courses like the ones conducted by the Motorcycle Safety Foundation were required to operate an EV over a certain power threshold, maybe 300W. Though this additional barrier to entry would probably reduce the adoption of PEVs, unfortunately.
As bad as I think this law is, this isn't demanding any degree of surveillance in the sense that real human beings have their information or activity tracked. This is mandating taking down content, not surveilling anyone.
> This is mandating taking down content, not surveilling anyone.
As far as I understand, it precisely mandates monitoring EVERYONE.
They are not talking about removing a specific image from the platform based on its hash or something. They are talking about actions that involve automated analysis of all content on the platform for patterns arbitrarily specified by the government.
The technologies discussed are just a single platform-side flag away from totalitarian surveillance, and from the user's perspective they are indistinguishable from it.
Imagine dang wrote a script to delete every HN comment that contains the string "velociraptor". Under your logic, this involves surveilling every HN commenter. This is true in the pedantic sense that every comment posted to the site would be checked for "velociraptor".
But most people understand the word "surveillance" to mean more involved information collection than just deleting content that matches certain criteria.
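To make the hypothetical concrete, here's a minimal sketch of such a filter (purely illustrative; dang has no such script, and the function names are invented). Note that nothing about the commenter is collected, stored, or profiled:

```python
# Hypothetical "velociraptor" filter. Each comment is checked once
# against a fixed rule; no per-user information is retained.

def should_delete(comment_text: str) -> bool:
    """True if the comment contains the banned string."""
    return "velociraptor" in comment_text.lower()

def moderate(comments: list[str]) -> list[str]:
    """Keep only comments that pass the filter; retain no other state."""
    return [c for c in comments if not should_delete(c)]

print(moderate(["great point", "my pet velociraptor disagrees"]))
# -> ['great point']
```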
What qualifies as an "intimate image"? A photo of someone in a swimsuit at the beach?
Fictional content is also covered by this law. How do we determine what fictional content counts as an intimate image of a real person? What if the creator of an AI image adds a birthmark that the real life subject doesn't have, is that sufficient differentiation to no longer count as an intimate image of a real person? What if they change the subject's eye color, too?
If you envision yourself as a potential victim of such content, I think the answers to these questions all become pretty obvious. A swimsuit photo might or might not be intimate, depending on what kind of swimsuit it is and the context in which the person posting the photo is presenting it. A birthmark you don't have or a different eye color obviously do not make a fictionalized image become "not you" because they would not reduce the violation you'd feel.
> A birthmark you don't have or a different eye color obviously do not make a fictionalized image become "not you" because they would not reduce the violation you'd feel.
What if someone claims to feel violated by an image of a person that looks totally different: different skin color, different build, different facial structure, etc?
Then they'll try to convince the authorities that the image is of them, and presumably mostly fail, although in some cases they may succeed. (If you're worried about something like the US DMCA, that's almost certainly not going to be the proposal; the UK has a number of existing 48-hour takedown policies, and they all involve orders from the authorities rather than self-certified requests from random third parties.)
Presumably the same way they investigate revenge porn complaints today? I don't mean to be obtuse, I understand your questions are rhetorical, but I don't see what it is you're gesturing towards.
In the case of revenge porn, there's an image of a real person. By contrast, fictional content doesn't actually show a real person, so any attempt to prohibit fictional revenge porn must target images that merely look similar to a person. What degree of similarity is required to qualify? That dimension isn't a factor in real revenge porn.
Perhaps you're missing the context? In the incidents which led to this proposal, no judgment of similarity was necessary: the sexualized images were posted in the replies to non-sexualized images of the same person.
This isn't really a novel dimension in the first place, I don't think. It's just rarely an issue in practice, because most people who post these images do so to shame and embarrass the depicted person. No doubt there will be edge cases where a sexualized image of consenting person A gets taken down because they look similar to non-consenting person B - but is that really a big problem?
The law doesn't stipulate that the offending images have to be posted with the intent to shame or embarrass, nor that the images have to be sent directly to the person who's supposedly depicted in the image. If that's the justification, then the legislators ought to have put wording to that effect into the law.
As you point out, this is a proposed amendment to an already existing law.
> The government said: "Plans are currently being considered by Ofcom for these kinds of images to be treated with the same severity as child sexual abuse and terrorism content, digitally marking them so that any time someone tries to repost them, they will be automatically taken down."
Unless I'm mistaken, CSAM is prohibited entirely in the UK, not just in replies to the child depicted in the abusive imagery. They explicitly say that they intend for fictional intimate content allegedly depicting a real person to be treated the same way as CSAM.
There's nothing that suggests this new amendment prohibiting fictional content is going to be narrowly scoped to replies to the people allegedly being depicted.
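For context on the mechanism: "digitally marking" content so that reposts are automatically removed is typically implemented as hash matching (PhotoDNA-style perceptual hashing in the CSAM case). Here's a minimal sketch of the idea, with `perceptual_hash` as a stand-in for whatever fingerprinting function a platform actually uses:

```python
# Sketch of hash-based repost blocking, the mechanism the quoted
# government statement appears to describe. `perceptual_hash` is a
# placeholder: real systems use perceptual fingerprints (e.g. PhotoDNA
# for CSAM) so that resized or re-encoded copies still match.

import hashlib

BLOCKLIST: set[str] = set()  # fingerprints of images flagged for takedown

def perceptual_hash(image_bytes: bytes) -> str:
    # Placeholder only: an exact SHA-256 is trivially evaded by changing
    # one pixel, which is why real systems use perceptual hashes instead.
    return hashlib.sha256(image_bytes).hexdigest()

def flag_for_takedown(image_bytes: bytes) -> None:
    BLOCKLIST.add(perceptual_hash(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    # Every upload is checked against the blocklist before posting --
    # this check-everything property is what the surveillance debate
    # upthread is about.
    return perceptual_hash(image_bytes) not in BLOCKLIST
```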
Another, related issue is that the takedown mechanism becomes a de facto censorship mechanism, as anyone who has dealt with DMCA takedowns and automated detectors can tell you.
Someone reports something under Special Pleading X, and you (the operator) have to ~instantly take down the thing, by law. There is never an equally efficient mechanism to push back against abuses -- there can't be, because pushing back exposes the operator to legal risk. So you effectively have a one-sided mechanism for removal of unwanted content.
Maybe this is fine for "revenge porn", but even ignoring the slippery slope argument (which is real -- we already have these kinds of rules for copyrighted content!) it's not so easy to cleanly define "revenge porn".
DMCA isn't directly that bad. DMCA is under penalty of perjury, so false takedowns are rare.
The problem is most takedowns are not actually DMCA; they are some other non-legal process that isn't under any legal penalty. Though if it ever happens to you, I suspect you have a good case against whoever did it - but the lawyer costs will far exceed your total gain (as in, spend $30 million or more to collect $100). Either we need enough people affected by a false non-DMCA takedown that a class action can work (you get $0.50 but at least they pay something), or we need legal reform so that all takedowns against a third party are ???
> DMCA is under penalty of perjury, so false takedowns are rare.
Maybe true of the platonic ideal "DMCA takedown letter" (though these are rarely litigated, so who really knows), but as you note, false takedowns are incredibly common with things like the automated systems that scan for music in videos (which actually are related to DMCA takedowns), "bad words", and the like.
> The problem is most takedowns are not actually DMCA; they are some other non-legal process that isn't under any legal penalty.
It's true that most takedowns in the US aren't under DMCA, but even that once-limited process has metastasized into large, fully automated content scanning systems that proactively take down huge amounts of content without much recourse. Companies do this to avoid liability as part of safe harbor laws, or just to curry favor with powerful interests.
We're talking about US laws here, but in general, these kinds of instant-takedown laws become huge loopholes in whatever free speech provisions a country might have. The asymmetric exercise of rights essentially guarantees abuse.
I believe Google issues legitimate DMCA takedowns for copyright strikes even when there is no infringement. They put the work of defending the strike on the apparent infringer, often with little to no detail.
While false takedowns may be rare, using the DMCA as a mechanism to inflict pain where no copyright infringement has taken place is common enough that it happens to small-time YouTubers like myself and others I have talked to.
Companies that exist in the US don't have to obey EU laws. For instance, the UK tried to tell 4chan that it needs to obey the UK Online Safety Act, and 4chan replied with, essentially, "fuck off".
Companies that try to do business in the EU have to follow EU laws because the EU has something that can be used as leverage to make them comply. But if a US company doesn't have any EU presence, there's no need to obey EU laws.
> average citizens can be held responsible for the actions of their government, to the point that they are valid military targets.
What do you mean by this? If a government conscripts "average citizens" into its military then they become valid military targets, sure.
I'm not sure why you think this implies that developers working for Palantir or Clearview would become military targets. Palantir builds software for the military. But the people actually using that software are military personnel, not Palantir employees.
You're trying to find claims that aren't there. They are explicitly saying that "certain people" (which may or may not include the original poster) think that deliberately killing civilians is fine if their government is bad enough. They then go on to ask, rhetorically, whether those same "certain people" would consider terrorist killings of tech workers at companies they don't like justified because those companies help the US government, even in a purely civilian context.