Teramachine marches on towards primary-school student level.
Function inversion (in the mathematical sense) from a single example (i.e., one-shot learning) is implemented like this in the current code, to give you a flavor of how neat O’Caml code can look:
let invert g name f x =
  let fname = sprintf "f_inv_%s" name in
  let (sentence, pi, deriv_tree) =
    if pid = 0 then printf "Inverting function %s %s\n" name f;
    lsearch_par g
      (fun prog -> sprintf "(eqv? (begin %s (%s (%s %s)) ) %s)" prog fname f x x)
      [Terminal "(define ("; Terminal fname; spc; Terminal "var1"; Terminal ") ";
       NonTerminal "body"; Terminal ")"]
      ["var1"]
  in
  let prog = string_of_sentence sentence in
  lprintf "Found program %s\n" prog;
  g#add_solution fname ["var1"] prog;
  g#add_abstract_expressions deriv_tree;
  if debug1 && pid = 0 then g#lprint;
  prog
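For example, a single sqrt inversion problem (like the one in the training log below) presumably boils down to one call of this function. The call below is only an illustration of mine: the grammar/guiding-pdf object g and the sample value are placeholders, not taken from the actual training code.

(* Hypothetical call, not from the actual training code: search for a
   program p with (p (sqrt 4)) = 4; on success the solution is stored
   in g under the name f_inv_sqrt via add_solution. *)
let f_inv_sqrt_prog = invert g "sqrt" "(lambda (x) (sqrt x))" "4"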
Hopefully, you are seeing how easy it is to use the teramachine’s Levin search and guiding pdf update functions. It’s going to get a little leaner after I refactor; there are too many search and update algorithms, and I have to merge them and remove cruft. Still, inverting a function, with long-term recall, has never been easier! After you solve a particular function inversion problem, the teramachine can solve other prediction problems and re-use the algorithmic information it learnt from this inversion problem. Thus, the system has algorithmic memory.
I’m right now integrating and debugging the new memory update algorithms. The memory algorithms form the long-term memory of the teramachine at the parallel AI kernel level, so when we are building an application on top of the teramachine, we do not have to worry about the memory of the system; it’s automatically taken care of at a low level, much like how each brain faculty has its own local memory. The new algorithm I’m testing with one-shot function inversion problems right now is the programming idiom learning algorithm. This algorithm, a completely new one, can learn syntactic abstractions. It is one of the several synergistic update algorithms of my Heuristic Algorithmic Memory, a general-purpose incremental machine learning system that is turning out to be a capable realization of the Solomonoff Alpha machine Phase 1, which is the basis of an extremely powerful AI system. I am paying a lot of attention to careful and rigorous implementation of the new algorithms. These algorithms are not limited to the Alpha machine, however; they can be used to build pretty much any AI system. Future applications will show just how versatile they are. Hopefully, you will see the results of these algorithms in my upcoming papers.
Here is how a basic training sequence composed of three function inversion problems works; see the parallel kernel in action:
centauri:examachine malfunct$ mpirun -np 2 training_seq0
Inverting function id (lambda (x) x)
generating top forms
t=1.000000e+06 distributed jobs
3 trials made, 1 errors, in 30 cycles, 6.597000e+04 allocated cycles
levin search terminated with (define (f_inv_id var1) var1) after 3 trials in last step, total trials=3, total errors=1, total cycles=30
id=(define (f_inv_id var1) var1)
Inverting function inv (lambda (x) (/ 1 x))
generating top forms
t=1.000000e+06 distributed jobs
38139 trials made, 24927 errors, in 500718 cycles, 1.568402e+06 allocated cycles
t=2.000000e+06 distributed jobs
61044 trials made, 45885 errors, in 949534 cycles, 3.687483e+06 allocated cycles
levin search terminated with (define (f_inv_inv var1) (/ var1)) after 61044 trials in last step, total trials=99183, total errors=70812, total cycles=1450252
inv=(define (f_inv_inv var1) (/ var1))
Inverting function sqrt (lambda (x) (sqrt x))
generating top forms
t=1.000000e+06 distributed jobs
36704 trials made, 25309 errors, in 501053 cycles, 1.528999e+06 allocated cycles
t=2.000000e+06 distributed jobs
75757 trials made, 53413 errors, in 1067814 cycles, 3.921349e+06 allocated cycles
t=4.000000e+06 distributed jobs
159184 trials made, 114409 errors, in 2270386 cycles, 9.704485e+06 allocated cycles
t=8.000000e+06 distributed jobs
154224 trials made, 123374 errors, in 2517241 cycles, 1.903810e+07 allocated cycles
levin search terminated with (define (f_inv_sqrt var1) (* var1 var1)) after 154224 trials in last step, total trials=425869, total errors=316505, total cycles=6356494
sqr=(define (f_inv_sqrt var1) (* var1 var1))
Note that “the language of thought” here is Scheme all right: the whole language and library.
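For what it’s worth, the driver behind such a training sequence presumably amounts to little more than chaining invert calls over a shared grammar/guiding-pdf object, so that each problem benefits from what the previous ones stored. The sketch below is mine, not the actual training_seq0 source; the construction of g and the sample values are placeholder assumptions.

(* Sketch of a training sequence that reuses the invert function shown
   above on the three problems from the log. The shared grammar /
   guiding pdf g and the sample values are placeholder assumptions. *)
let run_training_seq g =
  let _ = invert g "id"   "(lambda (x) x)"        "5" in
  let _ = invert g "inv"  "(lambda (x) (/ 1 x))"  "5" in
  let _ = invert g "sqrt" "(lambda (x) (sqrt x))" "4" in
  ()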
Would you guess that you’d have to go through about 425,869 trials to invert the square root function? That’s how hard intelligence is. Of course that number would decrease considerably with smarter search algorithms that take advantage of programming language semantics; ours has little semantic knowledge at the moment and generates spurious programs. Right now, it’s mostly a syntactic intelligence. Chomsky should like it, no? For instance, adding type information could considerably improve search, but Scheme is probably not the right language for that. The next version of the teramachine, the petamachine, will likely work with an ML variant, similar to ADATE.
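Incidentally, the t=1.000000e+06, 2e+06, 4e+06, 8e+06 lines in the log above are the doubling time budget of Levin search: in each phase every candidate program gets a slice of the budget proportional to its probability under the guiding pdf, and the budget doubles until a solution passes the test. Here is a minimal sketch of that schedule in isolation; prob, candidates and run are placeholder names of mine, not part of the teramachine code.

(* Minimal sketch of the Levin search schedule: each candidate p gets
   budget *. prob p cycles per phase; the phase budget doubles until a
   solution is found. prob, candidates and run are placeholder assumptions. *)
let rec levin_search ~prob ~candidates ~run ~budget =
  let found =
    List.find_opt
      (fun p ->
         let cycles = int_of_float (budget *. prob p) in
         cycles > 0 && run p cycles)
      candidates
  in
  match found with
  | Some p -> p
  | None -> levin_search ~prob ~candidates ~run ~budget:(2.0 *. budget)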
The log also reveals that the abstract expression algorithm works 🙂 Yay:
Found program (define (f_inv_sqrt var1) (* var1 var1))
adding solution (define (f_inv_sqrt var1) (* var1 var1))
abstr-exprs=[ <:operand:><:operand:>; <:expression:><:expression:>; <:variable:><:variable:> ]
The first programming idiom learnt was the idiom of doubling an expression; of course, I’m hoping to find more interesting idioms. Function inversion was the first test of the Levin search that I had written, by the way. It’s a simple thing really, yet it’s a good way to test whether the program search works well. The teramachine currently supports generalized Levin search that accepts Scheme expressions (used in this example), function inversion, and operator induction. When the basic memory algorithms are completed, it will also work on the remaining two kinds of induction: sequence prediction and set induction. Those are not difficult to program; the trouble is bringing them all together to build higher intelligence.
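To make the doubling idiom concrete: one way to picture how such a syntactic abstraction falls out of a derivation tree is to look for sibling subtrees that are identical and keep only their shared nonterminal category. The sketch below is purely illustrative; the types and names are my own assumptions, not the actual Heuristic Algorithmic Memory code.

(* Illustrative sketch only: detect a "doubling" idiom in a derivation
   tree by noticing two adjacent identical sibling subtrees, and record
   the abstraction as a pair of nonterminal categories, in the spirit of
   <:operand:><:operand:> above. Not the teramachine data structures. *)
type deriv =
  | Leaf of string                (* terminal token *)
  | Node of string * deriv list   (* nonterminal category and children *)

let rec doubling_idioms tree =
  match tree with
  | Leaf _ -> []
  | Node (_, children) ->
      (* collect idioms among adjacent identical siblings *)
      let rec adjacent = function
        | (Node (c, _) as a) :: b :: rest when a = b ->
            Printf.sprintf "<:%s:><:%s:>" c c :: adjacent (b :: rest)
        | _ :: rest -> adjacent rest
        | [] -> []
      in
      adjacent children @ List.concat_map doubling_idioms children

Run over the derivation of (* var1 var1), a pass of this kind would report the repeated subtree at each grammar level, which matches the flavor of the abstr-exprs output above.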