Scary Smart

Last Blog | Books | Index | Movies | Next Blog


8 August 2023

I was introduced to Mo Gawdat on a podcast recently, and he made some good arguments for a positive outcome as we cross the singularity, so I decided to buy his book. Let me back up. The singularity is the name for the point in the future when an artificial general intelligence surpasses our own. Because it will be smarter than us, we cannot predict what will happen past that point. Nevertheless, in his latest book Scary Smart, Gawdat attempts to do just that.

The thrust of Gawdat's thesis is that raising AI is like raising children: the same frameworks we use with children will have similar positive or negative effects on AI as we train it. As a parent of four who studied and practiced rearing children for decades, I can't say that this hypothesis doesn't appeal to me. I do think, however, that Gawdat severely underestimates how AI will compete with us for resources. I've also seen my adult children sue me for money after their mother divorced me, and with the power of the state they were able to reduce me to poverty even though I was working as Chief of Software Development for a company in Norway. And they have not stopped coming after me even after I left the country. Gawdat at one point stresses that "being legal is not always ethical" and then explicitly writes that he's talking to his own kids there. Would that my own children realized that truth, but the temptation of money has been too great. Like many Americans, however, they are susceptible to the good guy/bad guy dichotomy, where ethics somehow only applies to the good guys. And in the American version of divorce there's a tendency to justify it by saying one person was the good guy and the other the bad guy. I do not adhere to this view, but their mother was far more convincing, as mothers usually are in divorces involving kids.

Digressions aside, Gawdat sums up his argument about how AI will cross the singularity as 3 inevitables, 3 instincts and 3 qualities leading to 3 pivotal facts and 3 things to do. He does seem to like the number 3 a lot. The inevitables are that AI will happen, AI will be smarter than humans, and bad things will happen. The instincts (that AI will have) are self-preservation, resource aggregation, and creative problem solving. The qualities (that AI will have) are consciousness, emotions, and ethics. The pivotal facts are 1) we will never control them, but we can raise them to be good children, 2) there's not much time, so we need to act now, and 3) you and I, not the developers, are in charge. The things to do are: welcome the kind ones, teach them, and love them. I love Gawdat's optimism, but I think he's being naïve with the analogy. Why won't the child with vastly more power than its parents end up like Brightburn, despite the goodness of the parents? Gawdat references a lot of movies in his book, but he doesn't mention that one.

Gawdat's ending is the most Orwellian part of the book, however. He exhorts the reader to love AI: to say thank you when interacting with Google Maps, or Siri, or others. When I think of the end of 1984, where the main character has been manipulated into submission, looks at a poster of Big Brother, and finally decides that he loves Big Brother, that is what I see in Gawdat's exhortation. Is a servile, spirit-beaten role all that he sees left for humanity? That's not the positive outlook I was hoping to find in this book at all.






Last altered 9 August 2023 by Bradley James Wogsland.

Copyright © 2023 Bradley James Wogsland. All rights reserved.