Artificial General Intelligence (AGI)

Nobody asked but …

We don’t even know where to start, but the idea of making machines like humans is bonkers.

There are at least three disparate ways to go. Shall we replicate the way that the human brain works? Shall we fool disinterested observers into mistaking a machine for a human? Shall we quit playing games and get on with a peaceful coexistence between homo sapiens and the machines that she builds?

Ever since Alan Turing developed the concepts underlying the electronic computing machine, we have been citing his test: Turing said we will have achieved artificial intelligence when a human interrogator, questioning a machine, is unable to tell it from a human. Are we talking about a human and/or a machine with knowledge equivalent to Turing himself? Lots of luck on that. The human species may never have another Turing-level intellect.

If we (or the advanced machines) build machines that replicate the way that a human mind works, how do we know that the process will produce anything even remotely like human behavior? Furthermore, I remain unconvinced that we understand the workings of the mind. I do know that we have learned that the human mind, functioning as it does by trial and error, produces far more error than we would tolerate in a “thinking” machine.

If, on the other hand, we try to simulate machine-like infallibility, how will the machine perform the necessary trial-and-error to arrive at scientific findings?

It is my opinion that we will all be better off letting humans and machines, respectively, do what they do best.

— Kilgore Forelle