Introduction
Heuristic Algorithmic Memory is my primary contribution to AI. It is a mathematical, general model of both short-term and long-term memory for any kind of inductive inference task (!), meaning any properly conceived machine learning problem in AI. The reviewers did not find that claim believable at first, but it will eventually be anchored by experiments. The name was obviously inspired by a certain science fiction universe. The algorithms were designed in response to a challenge posed by Ray Solomonoff himself: he described the update problem to me and inquired, “How would we update a guiding pdf of programs if we already had an initial grammar for the reference machine?” He was the one who suggested using the stochastic context-free grammar; for those who do not recall, he invented the stochastic context-free grammar model. These papers constitute my answer to the update problem, which he recounted as one of the three open problems in AI (I could address the other two only later).
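To make the update problem concrete, here is a minimal sketch of one way the update step can work, assuming we already have parse trees (sequences of production uses) of the programs that solved past problems. The names and the Laplace-style estimator are illustrative assumptions; the papers use a Scheme reference machine and a more refined procedure.

```python
# A minimal, illustrative sketch of the grammar update step.
# Assumed interface, not the papers' code: a grammar maps each
# nonterminal to a {production: probability} table, and a solution
# parse is a list of (nonterminal, production) uses.
from collections import Counter, defaultdict

def update_scfg(grammar, solution_parses, prior_weight=1.0):
    """Re-estimate production probabilities from the productions used
    by programs that solved past problems."""
    counts = defaultdict(Counter)
    for parse in solution_parses:
        for nonterminal, production in parse:
            counts[nonterminal][production] += 1

    updated = {}
    for nonterminal, productions in grammar.items():
        # Laplace-style prior keeps unused productions alive, so the
        # grammar still generates every program of the reference machine.
        total = sum(counts[nonterminal].values()) + prior_weight * len(productions)
        updated[nonterminal] = {
            p: (counts[nonterminal][p] + prior_weight) / total
            for p in productions
        }
    return updated
```

The point of the prior weight is that probability mass shifts toward the constructs past solutions actually used, while no production is ever zeroed out, so the guiding pdf remains a complete distribution over programs.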
AGI 2010: Sequential Levin Search + HAM with SCFG
Stochastic Grammar Based Incremental Machine Learning Using Scheme
E Ozkural, C Aykanat, Artificial General Intelligence, p. 190, 2010
Aykanat did not contribute to this article; I mistakenly added my advisor’s name, thinking that I had to list my advisor on any paper I wrote. I later learnt that this was a violation of academic integrity, and I tried to change the authors, but the publisher decided to quarrel with me instead. In fact, my PhD jury prevented the addition of this paper’s chapter and my generalization of frequent itemset mining to the thesis (and made a 60-page PhD proposal on web site categorization vanish), possibly to make the thesis look ordinary so that it wouldn’t be obvious that they had stalled the graduation of an inventive student. I find it amusing that the chapters they suppressed are more significant than the chapters published, and would make 2-3 PhD theses if we count by innovations. The original submission is linked from the same article and is titled:
E Ozkural, AGI-2010 submission
The first version reports a sequential implementation effort and also contains a discussion of the methods required to implement the entire Scheme language as the reference machine.
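For readers unfamiliar with the search component, here is a minimal sketch of Levin search guided by a program distribution: the total step budget doubles each phase, and every candidate program receives a time share proportional to its probability under the grammar. The enumerator, interpreter, and solution test are assumed interfaces, not the papers’ actual code.

```python
# A minimal, illustrative sketch of probability-guided Levin search.
def levin_search(enumerate_programs, run, is_solution, max_phase=30):
    """enumerate_programs(): returns a fresh iterator of (program,
    probability) pairs, highest probability first. run(program, steps):
    executes the program on the reference machine, returning its output
    or None if the step limit is exceeded. is_solution(output): the
    problem-specific acceptance test."""
    for phase in range(max_phase):
        budget = 2 ** phase  # total step budget doubles each phase
        for program, prob in enumerate_programs():
            steps = int(budget * prob)  # time share proportional to probability
            if steps < 1:
                break  # the remaining, less likely programs get no time yet
            output = run(program, steps)
            if output is not None and is_solution(output):
                return program
    return None
```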
AGI 2011: Parallel General Levin Search + HAM with SCFG
Teraflop-scale incremental machine learning
E Özkural, arXiv preprint arXiv:1103.1003, March 2011
E Özkural, Artificial General Intelligence, p. 382-387, 2011
The second paper reports the parallel implementation and experiments. The arXiv preprint and the conference paper are essentially the same. The toy experiments from the original paper are analyzed more rigorously. Again, I received some hostile reviews crafted to prevent publication on some excuse, possibly from rival authors or from reviewers who did not even read the paper; why would anyone do that?
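The basic parallelization idea is easy to sketch: partition the candidates of a phase across workers, and let each worker spend its local time shares independently. The sketch below uses Python multiprocessing purely as an illustration; it assumes run and is_solution are module-level functions like those above, and it is not the paper’s actual cluster implementation.

```python
# A minimal, illustrative sketch of one parallel search phase.
from multiprocessing import Pool

def try_candidates(job):
    """Worker: run each candidate for its time share within the phase."""
    candidates, budget = job
    for program, prob in candidates:
        steps = max(1, int(budget * prob))
        output = run(program, steps)  # run/is_solution assumed module-level
        if output is not None and is_solution(output):
            return program
    return None

def parallel_phase(candidates, budget, n_workers=8):
    """candidates: list of (program, probability) pairs for this phase."""
    # Round-robin split so each worker gets similar probability mass.
    slices = [candidates[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        for result in pool.imap_unordered(try_candidates,
                                          [(s, budget) for s in slices]):
            if result is not None:
                return result
    return None
```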
AGI 2014: HAM with SCSG
E Özkural, International Conference on Artificial General Intelligence, p. 121-132, 2014
The third paper extends the method to context-sensitive grammars, explains the technique of applying stochastic grammar induction as a memory model in universal induction, and introduces several efficient algorithms to achieve it. The method is not fully implemented, but this is the most complete account published so far, and it was, finally, quite well received by the reviewers.
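To show what using stochastic grammar induction as a memory model amounts to operationally, here is a minimal sketch of the incremental loop, composed from the sketches above. The hooks enumerate_for, test_for, and parse are hypothetical stand-ins for machinery the papers describe, not their actual interfaces.

```python
def memory_loop(problems, grammar, enumerate_for, test_for, parse, run):
    """Incremental induction: each solved problem updates the grammar
    that guides the search for the next one. enumerate_for(grammar,
    problem) must return a fresh (program, probability) iterator;
    levin_search and update_scfg are the earlier sketches."""
    for problem in problems:
        solution = levin_search(lambda: enumerate_for(grammar, problem),
                                run, test_for(problem))
        if solution is not None:
            # Long-term memory: the solution's parse reshapes the grammar,
            # biasing the guiding pdf for every subsequent search.
            grammar = update_scfg(grammar, [parse(solution, grammar)])
    return grammar
```

Transfer happens because later problems are searched under a distribution concentrated on the program fragments that paid off earlier.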