Life 3.0

I’m not sure where I first heard about this book, but I dove into it after finding it hidden on my book list. Max Tegmark dives deeply into the unstudied abyss of AI ethics and endgame scenarios, much of which was absolutely fascinating to read about. Life 3.0 doesn’t require a strong background in computer science or physics, since the author does a pretty great job of giving examples that require only basic familiarity with the commonplace structures of today’s society.

Even without a fundamental backing in math or CS, Tegmark poses some fascinating questions (and some interesting answers) about man’s relationship to artificial intelligence as its abilities grow. Here are some of the coolest takeaways that I got from the book.

The much-discussed “technological singularity” is the point where the smartest AI we’ve built becomes smarter than a human; from there, it can recursively improve itself extremely quickly, becoming superintelligent beyond anything a human could ever create. At that point, it’s important for the goals of the AI to match those of humanity almost exactly. Otherwise, existential mayhem can result from the AI maximizing for a particular set of goals, which may lead to the destruction of humanity or other undesirable situations. Goal setting is hard, however, since an AI’s inherent drive to build a stronger model of the world may lead it to rewrite the goals it was given. Case in point: a human’s primary goal of procreation can be overridden pretty easily by emotions and a broader world view.

Interacting with superintelligent beings is frightening. There’s a proposal for a “viral” being that simply beams itself as information to unsuspecting civilizations, which then build a model of the thing being transmitted to them. That model, being a manifestation of an AI, can take over the resources of the planet / star system to build more transmitters and broadcast the message further out. A cosmic virus of galactic proportions. Likewise, interacting with human-built superintelligences can go awry as well. It’s possible to build an AI that maximizes human happiness but erodes the fundamental drive to push forward and make progress; being stoically pleased with life doesn’t seem like a good way to go out. In other cases, moving forward without a superintelligence means that humans can never really reach their potential. If the universe moves forward with the silicon spawn that we’ve generated, is it okay for humanity to end?

It’s difficult to imagine scenarios where an AI and humans can interact prosperously while still driving the human race forward into exploring the cosmos. I remain excited for the future regardless, since I think this is something that can be explored during my lifetime.
