Or maybe we should stop the propaganda arm of the US fascists from distorting reality around here and inventing needs our population doesn't actually have, so that people get pushed towards far-right parties?
Or how about making sure the corrupt US society does something about messing up the world economy because rich people want to be richer, and so they bought their governments, once again through the propaganda arms of all the social media and news corporations they own?
What about the part where the US constantly bombs the Middle East, making the people living there want to move out? Of course they won't go to the country that bombed them, especially with a whole ocean in between, so instead they come to us in Europe. And if it's not bombs, it's global warming anyway, another thing the current US government pushes hard for.
The reality is that it's probably mostly about incompetence and laziness. And tight purses, of course.
When some country's parliament decides to mandate age verification, it's really easy and lazy for them to just say "verify ages and do it reliably!" and stop at that. As a result, the affected services have no real choice but to reach for those horrible solutions of scanning our faces or IDs or both, with all the issues around that.
What should happen instead is "we'll build a privacy-focused system for age verification, and all you adult-only sites will have to use it, or any other system we deem acceptable." But this requires effort on the part of the government writing the law. And money.
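To make that concrete, here's a toy sketch (Python, all names made up) of what such a government-run system could look like: the issuer checks the citizen's ID privately and hands back a signed "over 18" claim with no identity in it, and the adult site only checks the signature. A real deployment would use asymmetric or blind signatures, not the shared-secret HMAC used here for brevity.

    # Hypothetical sketch only: the token carries a claim, not an identity.
    import hmac, hashlib, json, secrets, time

    ISSUER_KEY = secrets.token_bytes(32)  # held by the government issuer

    def issue_age_token() -> dict:
        """Issuer side: verifies the citizen's ID privately, returns only a claim."""
        claim = {"over_18": True, "exp": int(time.time()) + 3600,
                 "nonce": secrets.token_hex(8)}
        payload = json.dumps(claim, sort_keys=True).encode()
        sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return {"claim": claim, "sig": sig}

    def site_verifies(token: dict) -> bool:
        """Site side: learns only that the issuer vouches for 'over 18'.
        (With HMAC the site would need the issuer key; a real system would use
        public-key signatures so the site can verify without being able to mint.)"""
        payload = json.dumps(token["claim"], sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, token["sig"])
                and token["claim"]["over_18"]
                and token["claim"]["exp"] > time.time())

    print(site_verifies(issue_age_token()))  # True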
Yes, well-known solutions have a strong grip on you. In this case it is unfortunate.
I know that the preparations for eIDAS 2.0 (the European thing) did contain many good parts, taking inspiration from SSI, so at that point the specifications were rather good. I haven't kept up with it for almost a year now, so that might have changed.
I thought the point (which the article misses) is that a token gives you an identity, and an identity can be tracked and rate limited.
So a crawler that behaves ethically and puts very little strain on the server should indeed be able to crawl for a whole week on cheap compute, while one that hammers the server hard will not.
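Roughly what I have in mind, as a toy sketch (the rate and burst numbers are made up, not from the article): key a token bucket on the access token, so a polite crawler stays under the refill rate indefinitely while one that hammers the server runs its bucket dry and gets rejected.

    # Illustrative per-token rate limiting: the token is the identity we throttle on.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class TokenBucket:
        rate: float = 1.0        # requests refilled per second
        capacity: float = 60.0   # burst allowance
        tokens: float = 60.0
        last: float = field(default_factory=time.monotonic)

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets: dict[str, TokenBucket] = {}

    def handle_request(access_token: str) -> int:
        """Return an HTTP-style status for a request made with this token."""
        bucket = buckets.setdefault(access_token, TokenBucket())
        return 200 if bucket.allow() else 429  # 429 Too Many Requests

A gentle crawler at one request per second never runs out; anything hammering faster than the refill rate gets 429s until it slows down.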
In a similar vein, I remember people advocating for replacing new untrained hires with AI. After all, a competent senior engineer is needed to validate the contributions of the new hires anyway, and they can do the same checking on AI-generated code.
But then, how would you even train and replace those competent senior engineers who do the filtering when they retire? The whole system was predicated on having a pipeline of new hires who gain experience in the process.
From what I can tell, companies believe coding AIs will eventually learn to both code and teach better than seniors.
This is based on two assumptions:
- AI will get better. Developers using the system will transfer their knowledge to it.
- Seniors in a couple of years will be different. They'll be the ones who can engage with the AI feedback loop.
Here's why I think it won't work:
- Senior developers learn more than they can produce. There is knowledge they never transfer to what they work on. Internalized knowledge that never materializes directly into code. _But it materializes indirectly_.
- Senior developer knowledge comes from "schools", not just reading. These schools are not real physical locations. They're traditions, or ideas, that form a very long tail. These ideas, again, are not directly transferable to code or prose.
- Juniors get embarrassed. You say "stop making this nonsense", and they'll stop and reflect, because they respect seniors. They might disagree, but a pea has been placed under their mattress, and they'll keep thinking about "this nonsense" you told them to stop doing and why. That is how they get better. So far, AI has not demonstrated the ability to do that.
The production of quality content is an aspect of one of those "schools of thought". You are supposed to bear the responsibility of passing on the knowledge. Keeping lean codebases that are easy to understand is also a hallmark of many schools of thought. Working from fundamentals is another one of those ideas, etc.
Epic did say that in some situations you might forgo normal maps with Nanite and save disk space even though you have super-detailed models, so it DOES fit in this context.
Also, video games usually take a high-poly model and bake a corresponding normal map onto a lower-poly model anyway, so it might also be used that way. I think Doom 3 was the first game to show the technique?
With Nanite, normal maps are less necessary than otherwise because the detail is preserved in the mesh. You could make the argument that micro-detail normal maps are still useful, but those aren't always generated from the mesh, especially if they are tiling.
As if that would even have any effect in that situation. No amount of audits and rules would prevent TikTok from collecting data and manipulating public opinion.
Why not monitor it? Create thousands of read-only accounts that "prefer" content with all kinds of ideological viewpoints and statistically analyze whether the algorithm is being biased to promote certain viewpoints. I'm not smart enough to implement something like that but it sounds like a solvable problem to me.
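Something like this toy sketch, maybe (made-up counts and labels): bucket the recommendations served to neutral probe accounts by viewpoint and run a chi-square test against an "all viewpoints equally likely" baseline. A real audit would need labelled videos, far more probes, and a better null model than uniformity.

    # Toy bias check over probe-account recommendations; the data is invented.
    from collections import Counter

    def chi_square_uniform(observed: Counter) -> float:
        """Chi-square statistic against 'every bucket is equally likely'."""
        total = sum(observed.values())
        expected = total / len(observed)
        return sum((count - expected) ** 2 / expected for count in observed.values())

    # Recommendations served to neutral probe accounts, bucketed by viewpoint label.
    recommendations = Counter({"left": 310, "center": 305, "right": 295, "apolitical": 290})

    stat = chi_square_uniform(recommendations)
    # Critical value for 3 degrees of freedom at p = 0.05 is about 7.81.
    print(f"chi-square = {stat:.2f}; bias suspected: {stat > 7.81}")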
I thought about this too. In no way do I suggest it's an actual solution, but I wonder if some kind of reporting could be used as leverage to help appease US leaders towards a solution that doesn't require banning the app or handing it over to them.