I'm not an AI expert, or even a novice, nor am I a neuroscientist, but I have been thinking about how I interact with the world.
My current imagining says that novelty and unexpected inputs drive our immediate understanding of the world around us. To have expectations, you have to have a model. When that model breaks and is adjusted, you have a novel experience and the model can be updated. This feedback loop is critical.
Example: the other day I was grilling food, and my digital food thermometer was on the metal prep area near the hot griddle. As I was walking away I reached for it, grabbed it, and expected to pick it up. However! I didn't know it had a magnet, and it gave me back an unexpected stimulus.
I immediately jerked my hand away, and several thoughts happened near-instantly: I burned my hand; no, no pain, maybe a really bad burn; no, no heat, no sizzling of flesh; oops, wrong stimulus, something resisted; resisted how? It slid but wouldn't pick up easily; ah, a magnet.
The researchers here are right, I expect. You need curiosity and some goal, but you also need to constantly compare input against expectations and tweak your (mental) model of the world.
How many times do you, for a split second, totally misinterpret what you see or feel but near-instantly self-correct? Better AI will require putting forth its initial result and then validating that result with feedback. The more unexpected the feedback, the more novel the experience, and the more learning that can happen.
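That predict, compare, update loop can be sketched in a few lines. This is a toy illustration only, not any researcher's actual method: a single expectation value is nudged toward each observation in proportion to how surprising it was (the `lr` learning rate and the 0/1 encoding of "lifts freely" vs. "magnetic resistance" are my own made-up stand-ins).

```python
def update(expectation: float, observation: float, lr: float = 0.3) -> tuple[float, float]:
    """Return (new expectation, surprise). Bigger surprise drives a bigger adjustment."""
    surprise = observation - expectation
    return expectation + lr * surprise, abs(surprise)

expectation = 0.0                 # model says: "the thermometer lifts freely"
for obs in [1.0, 1.0, 1.0]:       # repeated magnetic resistance, encoded as 1.0
    expectation, surprise = update(expectation, obs)
    # surprise shrinks on each pass as the model adapts to the magnet
```

The point the loop makes is the same as the thermometer story: the first grab produces the largest error signal, and each repetition teaches less because the model now expects the resistance.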