Video log from Gok Us WHQ:

Transcript

Hello there,

My name’s Eray Ozkural, and I’d like to talk to you about the intelligence explosion, the infinity point hypothesis, the law of accelerating returns, and the relations between them. In 1965, I. J. Good defined an ultra-intelligent machine as one that can surpass human intelligence in every intellectual area. Such a machine could conceivably design a machine better than itself, leading to iterative self-improvement, which is also known as the intelligence explosion, since such intelligence would likely reproduce. Good memorably said that such a machine would be the last invention man need ever make. On the other hand, Solomonoff’s infinity point hypothesis investigates what would happen if we augmented the computer science community with human-level artificial intelligence, increasing its collective intelligence. As you might know, Moore’s law says that computing power per dollar, or the number of transistors that can be inexpensively placed on a microchip, doubles every 18 months. At the time of Solomonoff’s paper, in 1985, he wrote that the doubling takes about four years. So I’ll quote from his paper, the best summary to present in this talk:
“While there is normally an exponential increase in computing power with time, with cost halving every four years or so, when the artificial intelligence community is as large as the human scientific community, the halving time itself will halve, so we get a halving time of two years instead of four.”

This relates the improvement in the cost-effectiveness of computing hardware to the size of the computer science community, artificial and natural. He formalizes this idea, saying that the rate of increase of our computer science community will be related to how much computing speed we can add to it with artificial intelligence at a fixed cost. He also says that the rate of change of the logarithm of our efficiency is proportional to the size of our computer science community, which relates the halving time to the community’s collective intelligence level. Therefore, by assuming that a fixed amount of money is invested in AI every year, and that the size of the CS community influences the rate of improvement in the cost-effectiveness of computing hardware, he arrives at the conclusion that there would be an infinite amount of improvement in a short time.

How can an infinite amount of improvement happen? Only mathematically, of course. He does mention that there are quantum limits to computation. Those limits are examined in a 1999 paper by Seth Lloyd titled “Ultimate Physical Limits to Computation”, which you can find on arxiv.org. The physical limits of computation naturally lead to a device that’s not quite user-friendly: the 1 kg ultimate laptop is almost like a black hole. It’s not something you could carry around in your backpack. Therefore, it’s not very conceivable that we will actually reach these ultimate limits. We might get close, but probably not too close. I happen to have calculated when that would happen if the infinity point effects continued unaltered: around 2065. Even if we didn’t achieve as much, or as fast, we would still have immense computing power by the 2030s. It’s also conceivable that Moore’s law will taper off after some time, perhaps around 2040, and linger indefinitely, since it cannot really reach the physical limits.
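Solomonoff’s argument can be sketched as a tiny calculation. Under his two assumptions, with notation of my own choosing (R for computing efficiency, N for community size): d(log R)/dt = k·N, and N grows in proportion to R because a fixed yearly budget buys researchers in proportion to efficiency. Together these give dR/dt ∝ R², which diverges at a finite time, the infinity point. A minimal sketch, with arbitrary illustrative constants:

```python
# Minimal sketch of the infinity point argument (my notation, arbitrary units).
# Assumptions: d(log R)/dt = k * N  (improvement rate proportional to community size)
#              N = c * R            (fixed yearly money buys researchers in
#                                    proportion to computing efficiency R)
# Together: dR/dt = (k*c) * R**2, whose solution R(t) = R0 / (1 - k*c*R0*t)
# diverges at the finite "infinity point" t* = 1 / (k*c*R0).

def efficiency(t: float, r0: float = 1.0, kc: float = 0.1) -> float:
    """Closed-form solution of dR/dt = kc * R^2 with R(0) = r0."""
    return r0 / (1.0 - kc * r0 * t)

t_star = 1.0 / (0.1 * 1.0)  # infinity point at t = 10 in these units
for t in (0.0, 5.0, 9.0, 9.9):
    print(t, efficiency(t))  # efficiency explodes as t approaches t_star
```

The divergence is of course mathematical only, which is why the physical limits discussed above matter.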

Let us also consider what would happen if the infinity point effects did not occur at all.
Assuming that Moore’s law continued as usual, Lawrence Krauss and Glenn Starkman calculated, in their arXiv paper titled “Universal Limits on Computation”, that the physical limits of computation would be reached in about 600 years. That is still not so far off, and we would get incredibly powerful computers within the 21st century even if the AI project fails.
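The shape of such an extrapolation is easy to sketch: at a fixed doubling time, the years needed to climb from today’s performance to a physical limit is just the doubling time multiplied by the number of doublings. The figures below are illustrative only (a rough 10¹² ops/s machine today against Lloyd’s roughly 5.4×10⁵⁰ ops/s bound for a 1 kg laptop), not a reproduction of Krauss and Starkman’s actual calculation, which bounds the total computation available to the universe:

```python
import math

def years_to_limit(current_ops: float, limit_ops: float, doubling_years: float) -> float:
    """Years of sustained doubling needed to climb from current to limit."""
    return doubling_years * math.log2(limit_ops / current_ops)

# Illustrative figures only: ~1e12 ops/s today vs Lloyd's ~5.4e50 ops/s
# bound for a 1 kg "ultimate laptop", at an 18-month doubling time.
print(round(years_to_limit(1e12, 5.4e50, 1.5)), "years")
```

Even under these crude assumptions the answer comes out at a couple of centuries, which shows why any such horizon is close enough to matter.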

However, as Kurzweil projected, there are other areas of technological improvement besides the miniaturization of computer hardware. To relate these ideas: Solomonoff’s idea is a specific case of an ultra-intelligent machine; it just assumes that there is an AI at the level of a human being, and that it can cooperate with the CS community. We use that AI to increase the size of the community with artificial researchers, or perhaps just scientific tools. Solomonoff thinks that these artificial researchers can improve the miniaturization of computer hardware, accelerating Moore’s law, which is precisely what Kurzweil predicts with his law of accelerating returns. These are the same thing. However, Kurzweil’s theory generalizes the infinity point hypothesis, because it considers other areas and gives us greater possibilities. For instance, communication technology is also improving: bandwidth is getting cheaper. This means that we can build a very powerful global computer by using the internet as the interconnection network of a huge parallel computer, as the seti@home project or grid computing do. Another related area is energy technology: solar power, for instance, or fusion power. If these turn out to be affordable, if the cost of new energy technology decreases exponentially, making it much cheaper than fossil fuels, this would allow us to build, for a fixed number of dollars, more powerful computers that consume more energy. Hence, it is possible that both miniaturization and these other related technologies will accelerate, and perhaps by the 2030s we will be able to build very energetic and powerful supercomputers easily. By then, AI may have taken off and started to contribute to CS, and Moore’s law could be doubling every 6 months, because by 2025 we will have human-brain-equivalent computers at an affordable price: for about a thousand dollars, we will be able to purchase a computer that is as fast as a human brain.
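To get a feel for how much a shorter doubling time matters, here is a minimal compounding sketch (illustrative figures, not a forecast): with a fixed doubling time T, performance per dollar grows by a factor of 2^(t/T) over t years.

```python
# Compounding under a fixed doubling time (illustrative, not a forecast).
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Multiplicative improvement after `years` given a fixed doubling time."""
    return 2.0 ** (years / doubling_time_years)

# A decade at Moore's-law 18-month doubling: about a hundredfold.
print(round(growth_factor(10, 1.5)))   # 102
# A decade at the accelerated 6-month doubling: about a million-fold.
print(round(growth_factor(10, 0.5)))   # 1048576
```

The gap between a hundredfold and a million-fold improvement over a single decade is what makes the acceleration of the doubling time itself the decisive variable.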
If we have adequate software by then, it might be possible to offload some research tasks onto these computers, and of course there would be a huge incentive for researchers to use as much computing as possible to accelerate this research. Consider electrical engineers working on chip design and the miniaturization of computer hardware: they make improvements at every level of the microchip industry, such as the logical design, the instruction set architecture, the microcode, the physical layer, and the various CAD tools used to optimize the design. There are many levels in microchip architecture, and they have to improve all of those layers separately. Just like that, AI researchers will apply AIs to every layer that can contribute to the intelligence level of an AI: the algorithms, how to best train those AIs, how to solve problems better, how to design better cognitive architectures, how to build better hardware, faster networks, faster chips, better programming languages, and so forth, and thus create much more powerful AIs in a short time.

Can we quantify how much improvement we can expect by 2040, or 2045 as Kurzweil says? I think not, because it is not so clear whether the assumptions in Solomonoff’s infinity point hypothesis hold precisely: whether it is really the case that the rate of change of the log of our efficiency is proportional to the size of our computer science community. However, that can be tested, I think, or at least reasoned about, by projecting this idea onto the past. In the past, naturally, there was no AI. The tools that we use to design computers can be considered a weak form of AI, but it is not so obvious how much they have accelerated computer miniaturization.
Computer architects have been using tools like netlist partitioning for a very long time, and those tools have worked very well, and increased computation speed and memory would contribute to better tools; still, it does not seem that the majority of the improvement comes from CAD tools. Rather, if we respect the infinity point hypothesis, it might come from the very size of the computer science community. So if we plotted the size of the CS community, specifically the engineers working on computer hardware, since the dawn of the computer age, together with the total money invested in computer hardware every year, we might be able to see a macro-level relation between money, the number of researchers, and how fast computer hardware improves, thus verifying whether the second assumption of the infinity point hypothesis is valid.
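The proposed test can be sketched concretely. If d(log efficiency)/dt is proportional to community size N, then yearly increments of log efficiency plotted against N should fall on a line through the origin, and a least-squares fit recovers the constant of proportionality. The data below are synthetic, generated under the assumption itself just to show the method; the real test would substitute historical industry figures:

```python
# Sketch of the proposed test (synthetic data, not real industry figures).
# Under the hypothesis, yearly increments of log efficiency should be
# proportional to community size N; a least-squares fit through the
# origin recovers the constant k.

# Hypothetical yearly community sizes, and a log-efficiency series
# generated under the proportionality assumption with k = 0.002.
community = [1000, 2000, 4000, 8000, 16000]
k_true = 0.002
log_eff = [0.0]
for n in community[:-1]:
    log_eff.append(log_eff[-1] + k_true * n)

# Yearly increments of log efficiency, paired with the community size
# at the start of each year.
increments = [b - a for a, b in zip(log_eff, log_eff[1:])]
xs = community[:-1]

# Least-squares slope through the origin: k = sum(x*y) / sum(x*x)
k_fit = sum(x * y for x, y in zip(xs, increments)) / sum(x * x for x in xs)
print(k_fit)  # recovers k_true on this synthetic data
```

On real data, a poor fit, or a slope that drifts over the decades, would count as evidence against the second assumption.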

To summarize, the intelligence explosion is the most general idea; it is an abstract idea that encompasses the others, in a sense. Solomonoff’s infinity point hypothesis is about the miniaturization of computer hardware, and how AI can be used to accelerate its rate of improvement. Kurzweil’s law of accelerating returns generalizes the positive feedback idea in Ray Solomonoff’s infinity point hypothesis. It is possible that the miniaturization of computer hardware alone will lead us to the kind of technological progress that Kurzweil predicts. However, along with improvements in communication, energy, and related technologies, and in algorithms, vastly powerful AIs may be constructed in the near future.

The future is burning brighter than ever. It’s a great time to be alive. Thanks for listening, and I hope to meet you again.

Intelligence Explosion, Infinity Point, and the Law of Accelerating Returns
