My First Direct app is similar. The "Generate a security code" button only appears on the login screen, not after you've logged in. It used to be covered by a pop-up. Now when I open the app it auto-logs me in with Face ID, and I have to actively log out to get to it. The code in my case is used to log into their website. The first time I tried I had to phone them and say "eh? how's it work?"
The author, perhaps pessimistically, seems to think AI will benefit some privileged groups and hurt others, but it could turn out to be something like clean water and sanitation that benefits almost everyone. I don't think it has to be bad.
I did read it, and I'll grant you my comment wasn't that in line with the article. I'm not that convinced by the idea that optimism is a class privilege, though.
In 2000 and 2008, dot-com stocks and house prices respectively stopped going up and fell, new investor money dried up, and companies that relied on it, or homeowners relying on new finance, went bust. Tech advances and housing carried on, though.
I guess similarly, in an AI bust, some companies that rely on new investor money would go bust, but things would go on.
Regarding the mechanism: it seems only ~35% of people get the shingles vaccination, and you can imagine those are the people making the most effort to live healthily and take precautions?
I had shingles aged ~60. I wonder if that gives any protection?
What's happening with AGI depends on what you mean by AGI, so saying you "can't even be bothered discussing the definition" means you can't say what's happening.
My usual way of thinking about it is that AGI means something that can do all the stuff humans do, which means after a while you'd probably look out the window and see robots building houses and the like. I don't think that's happening for a while yet.
Indeed: particularly given that, as just one non-exhaustive "for instance", one of the fairly common things expected of AGI is that it's sapient. Meaning, essentially, that we have created a new life form that should be given its own rights.
Now, I do not in the least believe that we have created AGI, nor that we are actually close. But you're absolutely right that we can't just handwave away the definitions. They are crucial both to what it means to have AGI, and to whether we do (or soon will) or not.
I'm not sure how the rights thing will go. Humans have proved quite capable of not giving many rights to animals or to other groups of humans, even if they are quite smart. Then again, there was that post yesterday with a lady accusing OpenAI of murdering her AI boyfriend by turning off 4o, so no doubt there will be lots of arguments over that stuff. (https://news.ycombinator.com/item?id=47020525)
It's obvious to humans because we live in, and have much experience of, the physical world. I can see that for AIs trained on internet text it would be harder to see what's going on, as it were. I don't know whether these days they learn about the physical world through YouTube?