Hacker News | senshan's comments

I do not see how this is an argument. If porn can be narrowly targeted, why can apps not be narrowly targeted as well?

It seems to be more about harmonizing Texas law (SB2420) with the constraints of federal law (1A), so we will likely see this question go all the way to the USSC.


Porn is a category; apps are a concept

Like age laws for vape pens vs age laws for shopping.


"If porn can be narrowly targeted, why not books?"

You cannot narrowly target a medium.


> "The Act is akin to a law that would require every bookstore to verify the age of every customer at the door"

Presumably for the same reason that libraries cannot be narrowly targeted


Apps aren’t a narrow target

If the judge finds that apps and books are so equivalent, then letting the apps require age verification should do no harm -- everyone underage or privacy-concerned will simply go to the bookstore or a library. Right?

Apparently, these are not quite equivalent. Like books and weapons, like books and alcohol, etc.


The equivalence is that children have first amendment rights (see Tinker v Des Moines) and speech delivered by the internet is still speech.

Good point, but the judge's reduction of it to a book equivalence is misleading and weakens the judgment.

Porn may provide a suitable model: not all movies need age verification, so those can be viewed at any age. Some movies, however, do require it. Similar age ratings could be applied to apps. For example, Facebook only after 18, regardless of parental approval.


> the judge's reduction of it to a book equivalence is misleading and weakens the judgment

Good thing that isn't what happened. It is called an "analogy" and is not a factual statement of equivalence.


There is no law that mandates age verification for movies, imposes any type of rating, or prevents anyone from watching any movie.

The MPAA rating system and adhering to it is completely voluntary.


Porn has always been treated differently from other speech; that is why most age-verification laws go for it first. As for your other examples, those are all technically voluntary, as it's unlikely a government mandate that nobody under 17 can watch an R-rated movie would pass constitutional muster. Parents can restrict what speech their kids say or hear, but the government generally cannot in the US.

> Parents can restrict what speech their kids say or hear but the government generally cannot in the US.

Good in theory, but practically impossible. Peer pressure is too strong for parents to be a significant barrier. If you were successful, please share how you did it.


The question isn't whether your or my proposed regime is practical. The first amendment precedent is clear that the government is not allowed to restrict children's speech any more than it is adults' speech aside from some narrow and tailored exceptions.

Right. So SB2420 and the federal court judgment are the steps in the process to narrowly tailor another exception. Likely driven by the practical reasons mentioned earlier.

"Cannot" in the US means there is no route to enforcement in that context. Distribution of NC-17 content to minors was never directly illegal, but doing so would still open the door to legal trouble under the broader umbrella of laws covering "distribution of lewd or obscene content to a minor", which is more of a "do so and find out" concept of enforcement than a law specifically identifying NC-17/X content.

> If the judge finds that apps and books are so equivalent, then letting the apps require age verification should do no harm -- everyone underage or privacy-concerned will simply go to the bookstore or a library. Right?

That is obvious harm.


This is only an obvious lack of equivalence

I have no idea what you're on about but the point is this chills speech, and infringes on the rights of everyone involved, not just underage people.

Did it impact his quality of life? How?

Very interesting blog post, but...

At the age of 29 he wrote a self-help book. The most fascinating part is that the general public received it so enthusiastically and took it so seriously.

Really? Wisdom dispensed by a 29-year-old? This aspect of the general public amazes me over and over again.


It's not a bad book. https://www.amazon.com/Hour-Workweek-Escape-Live-Anywhere/dp...

It's mostly about starting a small business by someone who'd started a small business selling nutritional supplements.


I have my doubts: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

https://en.wikipedia.org/wiki/Categorical_imperative


As many have pointed out, the purpose of peer review is not linting, but assessing novelty and catching subtle omissions.

Which incentives could be set to discourage such negligence?

How about bounties? The publisher sets up a bounty fund, and each submission must come with a contribution to it. Bounties for gross negligence could then attract bounty hunters.

How about a wall of shame? Once negligence crosses a certain threshold, the researcher's name and the paper are put on a wall of shame for everyone to search and see.


For the kinds of omissions described here, maybe the journal could do an automated citation check when the paper is submitted and bounce back any paper that has a problem with a day or two lag. This would be incentive for submitters to do their own lint check.
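A minimal sketch of what such an automated pre-check could look like, purely hypothetical (a real checker would also resolve each DOI against a registry such as Crossref and compare the returned title with the cited one):

```python
import re

# Matches the standard DOI shape: "10.", a 4-9 digit registrant code, "/", suffix.
DOI_RE = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+', re.IGNORECASE)

def lint_references(references: list[str]) -> list[str]:
    """Return the reference entries that contain no recognizable DOI at all."""
    return [ref for ref in references if not DOI_RE.search(ref)]

refs = [
    "Smith, J. (2021). A real paper. doi:10.1000/xyz123",
    "Doe, A. (2020). A citation with no DOI whatsoever.",
]
print(lint_references(refs))  # flags only the second entry
```

Bouncing a submission back within a day or two on the basis of a check like this costs the journal almost nothing and pushes the linting burden onto the submitter.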


True if the citation has only a small typo or two. But if it is unrecognizable or even irrelevant, this is clearly bad (fraudulent?) research; each citation has to be read and understood by the researcher, and put in only if absolutely necessary to support the paper.

There must be a price to pay for wasting other people's time (lives?).


Very good analogy indeed. With one modification it makes perfect sense:

> And as the remedy starts being applied (aka "liability"), the enthusiasm for sloppy and poorly tested software will start to wane.

Many of us use AI to write code these days, but the burden is still on us to design and run all the tests.


> I never give the second to my LLM.

How do you practically achieve this? Honest question. Thanks


Custom scripts.

1. Turn off 2. Code 3. Turn on 4. Commit

I also delete all LLM comments; they 100% poison your codebase.


>> 1. The raw code with no empty space or comments. 2. Code with comments

> 1. Turn off 2. Code 3. Turn on 4. Commit

What does it mean "turn off" / "turn on"?

Do you have a script to strip comments?

Okay, after the comments were stripped, does this become the common base for 3-way merge?

After modification of the code stripped of the comments, do you apply 3-way merge to reconcile the changes and the comments?

This seems like a lot of work. What is the benefit? I mean a demonstrable benefit.

How does it compare to instructing through AGENTS.md to ignore all comments?


Telling an AI to ignore comments != having no comments; that's pretty fundamental to my point.


>> 1. The raw code with no empty space or comments. 2. Code with comments

> 1. Turn off 2. Code 3. Turn on 4. Commit

So can you describe your "turn off" / "turn on" process in practical terms?

Asking simply because saying "Custom scripts" is similar to saying "magic".
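No idea what the commenter's actual scripts look like, but the "turn off" step could, for Python code, be as small as a tokenize-based comment stripper (hypothetical sketch):

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Drop COMMENT tokens from Python source; the code itself is untouched."""
    tokens = [
        tok for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    return tokenize.untokenize(tokens)

code = "x = 1  # llm chatter\ny = 2\n"
print(strip_comments(code))  # comments gone; may leave trailing spaces
```

Working on tokens rather than regexes means `#` characters inside string literals survive; the cost is some whitespace churn, which a formatter pass would clean up.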


Can you please provide a few examples to convey the gist of such a trend?

Honest question.


Wasn't it: "do one thing and do it well"?



In the book "How Big Things Get Done" [1], Bent Flyvbjerg, among other things, identifies one common feature of project types that do not produce large outliers going over budget and under-delivering: modularity. Ideally, fractal modularity. His favorite examples: solar power, electric transmission, pipelines, roads. Ironically, IT/software is only slightly better than nuclear power and the Olympic Games [2].

[1] https://www.amazon.com/-/en/dp/B0B63ZG71H

[2] https://www.scribd.com/document/826859800/How-Big-Things-Get...

