Origin Stories for Artificial General Intelligence

I’ve always been fascinated by stories of intelligent computers. Whether it was KITT, HAL, or Skynet, I’ve long wondered how we could get computers that we could treat as people while still being artificial. While I’ve heard many origin stories, none of them make sense to me. Consider the leading candidates:

  • A system got so big and/or so complex that it became self-aware (and then, generally, started killing people). If the systems of Google or Facebook haven’t gotten there, I think we can safely take this off the table. (OK, they may be doing us harm, but we certainly can’t treat them like people.)
  • A genius, an alien civilization, or both working together to make a sentient AI is the equivalent of deus ex machina. Assuming that a miracle occurs is an act of faith, not technology.
  • There are many stories about people uploading their minds into computers. Whatever your position on the feasibility of this, it isn’t a path to creating artificial intelligence; it is just a way of transferring our natural intelligence.
  • Incremental improvement in existing artificial intelligence research results in sentience. In other words, some successor to Deep Blue, Siri, or Alexa would achieve artificial general intelligence. This path seems the most plausible, except that in every other area incremental improvements optimize clearly defined metrics (size, power, speed, etc.). What metric should be optimized to achieve sentience?

Omissions in the world-building of science fiction stories are forgivable. What isn’t so acceptable is when leaders in business and technology buy into these same stories, spreading fear about the future. There are real fears to be had, but they arise from the failures of our social and political institutions and from mundane yet extremely consequential changes in technology that we do know how to build.

Having said this, what would a path to artificial general intelligence look like? In my opinion, the path would have four key characteristics:

  1. It would have to be incrementally buildable, and those increments would have to be useful in their own right. Incremental development is essential for any complex technology.
  2. The problems being solved would have to start out as being general, rather than being specialized. Very approximate solutions to general problems can be incrementally made more precise and kept somewhat general; specialized solutions tend to stay specialized.
  3. The general problems being solved would have to be in a very constrained domain, so as to make early attempts tractable. Unconstrained domains necessitate constrained solutions; this is why we have systems that can parse human speech but can only understand rigidly defined and constrained queries.
  4. To prevent overspecialization, the domain must be inherently dynamic. Any static domain can be solved using approximations that work in that specific context but do not work outside of it. While there are ways to force dynamism through careful selection of training and testing data, a non-stationary environment may be essential to motivating incremental solutions that can lead to general intelligence (see the sketch after this list).
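
To make the fourth point concrete, here is a minimal sketch of the static-versus-dynamic distinction: a two-armed bandit whose payout probabilities slowly drift. Everything here (the DriftingBandit class and both agents) is a hypothetical illustration of my own, written in Python; it stands in for no particular research system. An agent that memorized which arm started out best degrades as the world moves, while an agent that keeps re-estimating can track the change.

```python
import random


class DriftingBandit:
    """Toy non-stationary environment: a two-armed bandit whose payout
    probabilities random-walk over time, so no fixed answer to
    "which arm is best" stays correct forever."""

    def __init__(self, drift=0.02, seed=0):
        self.rng = random.Random(seed)
        self.p = [0.8, 0.2]   # initial payout probability per arm
        self.drift = drift

    def pull(self, arm):
        reward = 1 if self.rng.random() < self.p[arm] else 0
        # The domain itself changes underneath the agent.
        for i in range(2):
            self.p[i] = min(1.0, max(0.0, self.p[i] + self.rng.uniform(-self.drift, self.drift)))
        return reward


def static_agent(env, steps):
    """Memorizes that arm 0 started out best and never revisits that belief."""
    return sum(env.pull(0) for _ in range(steps))


def adaptive_agent(env, steps, eps=0.1):
    """Epsilon-greedy with a recency-weighted value estimate, so stale
    observations are gradually forgotten and the drift can be tracked."""
    est = [0.5, 0.5]          # running value estimate per arm
    total = 0
    rng = random.Random(42)
    for _ in range(steps):
        arm = rng.randrange(2) if rng.random() < eps else est.index(max(est))
        r = env.pull(arm)
        est[arm] += 0.1 * (r - est[arm])  # constant step size discounts old data
        total += r
    return total


if __name__ == "__main__":
    steps = 20_000
    print("static :", static_agent(DriftingBandit(seed=1), steps))
    print("adaptive:", adaptive_agent(DriftingBandit(seed=1), steps))
```

The bandit itself is beside the point; what matters is the pressure it creates. In a drifting domain, only solutions that keep adapting stay useful, which is exactly the kind of incremental pressure described above.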

These characteristics make me think that the path to artificial general intelligence does not lie in computers directly interacting with humans, but in computers interacting with each other and with the world.