Pei Wang on the Path to Artificial General Intelligence.
Ben Goertzel interviews Pei Wang. The interview does contain some interesting questions. It sounds as if Ben believes in a kind of “embodied intelligence”. However, it also seems to me that he does not feel obliged to distinguish human-level from human-like intelligence. The concept of general intelligence trivially includes both human-like and human-level intelligence, and just as trivially, human-level intelligence includes human-like intelligence. In fact, I would be surprised if the set of human-like intelligences did not turn out to be very small compared to the set of human-level intelligences. It also seems that both Ben and Pei assume the AI must be autonomous and needs an “environment” to reach human-level intelligence, which I disagree with.
Pei mentioned the Turing Test. The version he refers to is not the original Turing Test, but the “extended” or “embodied” Turing Test. The true Turing Test happens over a teletype. I find that sufficient because, I think, most relevant bits of human-like intelligence can be expressed in human language. What comes to mind is that text isn’t a good way to represent “a dancing performance”. It is much more natural to view a dancing performance than to talk about it, and surely, as Turing expects that the machine intelligence may appreciate poetry and its artistic content, it should also appreciate dancing. Since visual processing takes up a good portion of our brains, I think it is too important a part of our thinking to leave out of a test for human-like intelligence, which is roughly what the Turing Test is. By contrast, there is something we might call a “Universal Intelligence Test” that assumes no cultural preference (that’s something I should write about later when I have some time for actual writing). That is infinitely harder to conduct than a Turing Test, of course. What the extended Turing Test provides on top of that is measuring actual physical performance: for instance, the judge might ask the player to stand up and perform a dance movement suited to a piece of music, after watching a video of a certain dance pattern. This is a very common kind of bodily intelligence, as it is basically physically simulating another person. Pei Wang comments on this issue in a way (speaking of an AI that does not simulate a human body): “whether such a system is still “intelligent” depends on how the “human-like” is interpreted.” I would have liked Pei to comment also on the original Turing Test and a Turing Test with sensory problems, in addition to the extended Turing Test.
I liked Pei’s answer to the question about integrating perceptual and conceptual levels; however, I think it would be sufficient to just say that they integrate at a basic level of computation, i.e. they can be integrated in a variety of ways in any flexible enough system. Furthermore, the problem of discovering appropriate representations is, itself, an AI-complete problem. Of course I agree that any AGI would ultimately have to work with sensory data, or it would remain detached from the world, even if it is not autonomous (e.g. we should be able to ask the AGI to scan through pictures and find something of interest, just as we could ask a person). If you have to expend much extra effort to bind the perceptual and conceptual modules, perhaps you have not made them general-purpose enough. This, I think, might stem from the fact that most AGI systems will have to start off with a lot of ad-hoc designs to make those sorts of translations. Yet the final version should not have much that is ad hoc in it.
I truly appreciate both Ben’s and Pei’s efforts in building a community. When I started thinking about this problem, there wasn’t much of a community, so I had to build it myself. The lack of research funding is a big problem. Right now, all sorts of investors should be knocking on our doors if they have any sense of investment, but they prefer to fund redundant “computer vision” or robotics projects, i.e. narrow AI. Some of my friends have even been sending me links to such narrow AI projects, which I politely thank them for and then trash. A big problem, as I see it, is that researchers are scared of this problem; they find it so challenging that they think it is impossible, and then, if you are doing it, they think you are not good enough to pursue it, of course not having understood that anyone working in this field has probably already devoted a large portion of his life to it. A theoretician colleague, perhaps having learnt about Solomonoff induction at one point in his life, voiced his opinion that it would never work in real life, even after I had shown him experimental results. Life can be confusing for people. I have my own ideas about convincing funders; however, I think we should be spending more effort to change the public perception of true AI research. What happened to that community site? 🙂 And demos, that’s right 🙂
Pei Wang’s certainty about when human-level intelligence will be reached surprised me too; however, the time frame itself is not unrealistic in my opinion!