Paths, Dangers, Strategies

5/30/2023

He is a philosophy professor at Oxford University and runs the Oxford Martin School's Future of Humanity Institute. He's said, for a long time, that Kurzweil is half right: if we get AGI, the outcome could be absolutely wonderful. But he warns about the possibility not so much of a superintelligence going rogue, like Skynet or HAL in 2001, but more simply of an immensely powerful entity that would not set out to damage us but would have goals that could do us harm. He uses what he calls a 'cartoon' example: the first AGI turns out to be developed by someone who owns a paperclip manufacturing company. The AI has the goal of maximizing the production of paperclips. After a little while, it realizes, 'Well, these humans, they're made of atoms; they could be turned into paperclips.' So it turns us all into paperclips. Then it turns its gaze towards the stars and thinks, 'Well, there's an awful lot of planets out there, and I can turn all those into paperclips!' So it develops a space program and travels around the cosmos and turns the entire universe either into paperclips or something that makes paperclips. It's an absurd idea, but it shows the possibility of inadvertent damage.