A few weeks ago I made the comment: The OAI soap opera marks the point where Silicon Valley disappeared up its own bum. Which, I guess, is a singularity of sorts.
I was not the only one to see the parallels…
The consequences are what you might expect when a crowd of bright but rather naive (and occasionally creepy) computer science and adjacent people try to re-invent theology from first principles, to model what human-created gods might do and how they ought to be constrained. They include the following, non-comprehensive list: all sorts of strange mental exercises about postulated superhuman entities, benign and malign, and how to think about them; the jumbling of parts from fan-fiction, computer science, home-brewed philosophy and ARGs to create grotesque and interesting intellectual chimeras; Nick Bostrom and a crew of very well funded philosophers; Effective Altruism, whose fancier adherents often prefer not to acknowledge the approach's somewhat disreputable origins.
All this would be sociologically fascinating, but of little real-world consequence, if it hadn't profoundly influenced the founders of the organizations pushing AI forward. These luminaries think about the technologies they are creating in terms borrowed wholesale from the Yudkowsky extended universe. The risks and rewards of AI are seen as largely commensurate with the risks and rewards of creating superhuman intelligences, modeling how they might behave, and ensuring that we end up in a Good Singularity rather than a bad one, where AIs do not destroy or enslave humanity as a species.