Narrow AI is not AI (And AGI is just computer science!)

I have maintained for ages that terminological and polemical discussions are fruitless and therefore fit for fools. However, every now and then we are forced into such trivial controversies.

I will reminisce about an old foe from the glorious days of comp.ai.philosophy, the invincible David Longley, who had a knack for endless controversy. He held the view that there is not really any AI, since what we call AI is, in fact, ordinary engineering, indistinguishable from other branches of computer science. I argued endlessly against this claim because, truly, if I were to accept it, I would also have to accept that I had devoted myself to an empty cause.

Today, however, in honor of my old adversary, I will accept his argument, in at least two ways. With corrections, of course.

Today, witnessing the shift of funds to narrow AI projects like “computer vision”, search engine work, and whatnot, real AI researchers have had to invent new names for their field, such as AGI (Artificial General Intelligence). I have absolutely no problem with the latter term; however, I feel it should not have been necessary.

First, narrow AI is not AI, because it fits Longley’s argument. Narrow AI projects take an arbitrarily ultra-special and ultra-trivial problem (compared to AI in its full glory) and contrive an engineering system to solve it to some extent. Many are the natural intelligences who think that AI can be achieved by accumulating these limited solutions, which is not the case. The worse fact, however, is that this brand of work is not AI at all and should not be considered as such. Rather, these are technical problems that are not even interesting from an AI point of view. It seems more like a way to keep graduate students busy writing papers, and to keep them away from working on any core scientific problem.

Generalist research such as statistical learning theory was of course AI, but browsing through the papers in respectable AI journals, one sees little focus on making general AI software. The machine learning papers are especially troublesome. Most read like “I combined 4-5 different algorithms to solve this weird and highly unlikely kind of problem” or “I applied Bayesian networks to yet another problem”. Even the goals of many papers are extremely narrow, perhaps to avoid any risk of failure. This kind of safe research has resulted in a degeneration of AI research. I used to say that most GOFAI research showed the intelligence of the researcher rather than of the program; now it has gotten worse. Most of this work consists of honest, well-crafted engineering solutions with good technical work behind them, but it has little to do with AI (it unfortunately has little cognitive significance).

Can we give a technical reason for this? Of course we can. Consider the set of problems that a machine can solve; the subset relation over such sets may be taken as a partial order of intelligence (borrowing the idea from Hutter’s book “Universal Artificial Intelligence”; see the sketch below). The sets of problems that those ultra-specialized systems can solve are so small compared to the entire set of AI problems that their order of intelligence is very low compared to AGI systems, and they remain invisible from the point of view of AGI researchers. Yet I do not wish to belittle narrow AI research at all, for I too have spent a lot of time thinking about simple problems, and sometimes out of that simplicity very elegant and valuable algorithms emerge. Some of those algorithms will eventually find their way into AGI systems as well. Take, for instance, the inside-outside algorithm: an excellent algorithm, and not invented for AI at all. Similarly, we will see that many of the same problems have been addressed elsewhere in the research community, and there is absolutely nothing wrong with the richness of a research environment. In my current research, I make great use of my experience in data mining, and I think to myself, “Oh, so that is what data mining really is for!” However, I wish to warn against the wrong focus on the narrowest solutions possible.
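As a rough sketch of this order (my own notation, not a quotation from Hutter): let $S(M)$ denote the set of problems that a machine $M$ can solve. Then

$$M_1 \preceq M_2 \iff S(M_1) \subseteq S(M_2),$$

so one machine is at most as intelligent as another when every problem it solves is also solved by the other. It is only a partial order: machines whose problem sets are not nested are simply incomparable. Under this order, a narrow AI system, whose $S(M)$ is a sliver of the space of AI problems, sits far below any would-be AGI system.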

Secondly, AGI is just computer science. Every algorithm encodes a bit of intelligence. I remember that one of the introductory textbooks to computer science described the field as “the mechanization of abstraction”, or something to that effect. I think it would be better to say that it is the mechanization of intelligence. Every algorithm contains some intelligence, and together with our own intelligence that makes sense of those algorithms, computer science becomes a great toolbox for solving many problems.

Knuth once said that there are about 500 core algorithms in computer science, so if you understand all of those well enough, you have a good understanding of basic computer science. That is more than the number of core algorithms most computer scientists know. You would probably need a genuine interest in algorithms to study them all (which I started doing at one point, to increase my own algorithmic complexity). However, those algorithms are not “core” because they solve irrelevant and meaningless puzzles (of the kind mathematicians love); they are considered “core” because they can be used in a wide variety of problems. Contrast this with the intelligence partial order: the value of an algorithm lies in how many programs it is useful for (see the sketch below).

Thus, AGI is a very pure kind of computer science, because it also seeks to mechanize computer science itself, replacing the computer scientist with a program (like Schmidhuber’s research goal of building an “optimal scientist”). This ties in beautifully with the philosophy of science, mind, and mathematics, and means a lot to those with a foundational mindset. AGI is to computer science what general relativity is to physics. Simple slogans may stick, perhaps. The point here is that a key to increasing research funding for AGI is to emphasize the pure science aspect of AGI research: it will not only allow us to build extremely powerful AI systems, but also increase our foundational understanding of computer science and of science itself, with fundamental contributions to all disciplines of cognitive science (philosophy, linguistics, neuroscience, computer science, psychology, etc.), perhaps giving each of them new tools and methods. If we AGI researchers can find ways to demonstrate these foundational benefits better, we will draw more interest from research funding agencies.
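To put that slogan about an algorithm’s value in terms parallel to the intelligence order (again an illustrative formalization of my own, not a standard definition): let $\mathcal{P}$ be a space of programs with some measure $\mu$ over it, and for an algorithm $a$ define its generality as

$$v(a) = \mu\left(\{\, p \in \mathcal{P} : a \text{ is usefully employed by } p \,\}\right).$$

A “core” algorithm in Knuth’s sense is then one with large $v(a)$; AGI, read this way, seeks the most general algorithms of all, up to a program that can itself discover the rest.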
