
> Even the most optimistic AI scenario seems pretty depressing to me... its consciousness would dwarf our own and we would be pretty meaningless

This seems like a classic optimist/pessimist situation, like when we find out the world/universe is bigger than we imagined. I think our brains are poor at dealing with absolute values, so we use relative ones instead: when we discover some unfathomable new landscape, our minds "zoom out" to encompass it, which makes the previous stuff look small in comparison.

That's just (another) limitation of the meat in our heads though. The world doesn't actually get smaller when our transportation improves: it's just as vast as it was for any explorer; the Earth didn't become less special when we discovered that the planets were worlds; Sol didn't become less immense and powerful when we discovered that the stars are other suns; our solar system didn't become less intriguing when we found exoplanets; the Milky Way didn't lose significance when we discovered other galaxies; baryons didn't lose their complexity when we discovered dark matter; etc.



Right, but there is a commonality in our ability to perceive those different contexts.

The example that I heard years ago (I think from Kurzweil?) was framing the difference between us and future AIs as comparable to the difference between an ant's perception of its surroundings and a human's. In his example, the human has no regard for an ant; we'd just as soon step on one ("evil" AI). To re-frame that: if you could magically turn an ant into a human, why would it even matter that it was once an ant? The transition would be a complete disconnect. Maybe if you told the person they were once an ant they would have some sort of strange reverence for ants, but it wouldn't be very rational.



