
Why Computers Won't Improve Their Own Intelligence and Make Themselves Smarter

Updated: May 31, 2021

We both fear and desire "the singularity." Yet it's unlikely to happen.


St. Anselm of Canterbury introduced an argument for God's existence in the eleventh century that went something like this: God is, by definition, the greatest being we can imagine; a God who doesn't exist is simply not as great as a God who does; ergo, God must exist. This is known as the ontological argument, and it has persuaded enough people that it is still being debated nearly a thousand years later. Some critics contend that the ontological argument simply defines a being into existence, which is not how definitions work.


God isn't the only being that people have tried to argue into existence. In 1965, the mathematician Irving John Good wrote, "Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever":



Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.


The notion of an intelligence explosion was revived in 1993 by the author and computer scientist Vernor Vinge, who called it "the singularity," and the idea has since gained prominence among technologists and philosophers. Books such as Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies," Max Tegmark's "Life 3.0: Being Human in the Age of Artificial Intelligence," and Stuart Russell's "Human Compatible: Artificial Intelligence and the Problem of Control" all describe scenarios of "recursive self-improvement," in which an artificial-intelligence programme repeatedly designs an improved version of itself.


Good's and Anselm's arguments have something in common: in both cases, the initial definitions do a lot of the work. Those definitions seem reasonable on the surface, which is why they are usually accepted at face value, but they deserve closer scrutiny. I believe that the more we examine the implicit assumptions of Good's argument, the less plausible the idea of an intelligence explosion becomes.



What would recursive self-improvement look like for humans? For convenience, we'll describe human intelligence in terms of I.Q., not because I.Q. testing is a good idea but because I.Q. expresses the notion that intelligence can be usefully captured by a single number, which is one of the assumptions made by proponents of an intelligence explosion. Recursive self-improvement would then look like this: one of the problems that a person with an I.Q. of, say, 300 would solve is how to convert a person with an I.Q. of 300 into a person with an I.Q. of 350. A person with an I.Q. of 350 could then solve the harder problem of converting an I.Q. of 350 into an I.Q. of 400. And so on.


Is there any reason to believe that this is the way intelligence works? I don't think so. For example, there are many people with I.Q.s of 130 and a smaller number with I.Q.s of 160. None of them has been able to raise the intelligence of someone with an I.Q. of 70 to 100, which is presumably an easier task. Nor can they increase the intelligence of animals, whose intelligence is considered too low to be measured by I.Q. tests. If raising someone's I.Q. were like solving a series of math puzzles, we ought to see examples of it working at the low end, where the problems are easier to solve. But there is no clear evidence of that happening.


Perhaps the reason is that we are all too far from the required threshold; perhaps an I.Q. of 300 is the minimum needed to increase anyone's intelligence at all. Even if that were true, though, we would have no reason to assume that unbounded recursive self-improvement is possible. It is just as plausible, for example, that the most a person with an I.Q. of 300 can do is raise another person's I.Q. to 200. That would let a person with an I.Q. of 300 give everyone around them an I.Q. of 200, which would be an incredible achievement, but we would still be stuck on a plateau: no recursive self-improvement, and no intelligence explosion.
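To make the difference between these two scenarios concrete, here is a minimal sketch in Python. It is my own illustration rather than anything from the essay: the two "confer" rules below are made-up assumptions, one matching the explosion story (each generation adds fifty points) and one matching the plateau story (a 300-I.Q. improver can raise others only to 200).

```python
# Hypothetical rules for how much intelligence an improver of a given I.Q.
# can confer on the next generation. Both rules are illustrative assumptions.

def confer_explosion(iq: float) -> float:
    # Explosion scenario: each generation gains 50 points (300 -> 350 -> 400 ...).
    return iq + 50

def confer_plateau(iq: float) -> float:
    # Plateau scenario: even a 300-I.Q. improver can raise others only to 200,
    # and a 200-I.Q. improver can do no better.
    return min(200.0, iq)

for rule in (confer_explosion, confer_plateau):
    iq = 300.0
    trajectory = []
    for _ in range(5):
        iq = rule(iq)
        trajectory.append(round(iq))
    print(rule.__name__, trajectory)

# confer_explosion [350, 400, 450, 500, 550] -- growth without limit
# confer_plateau   [200, 200, 200, 200, 200] -- a fixed point, not an explosion
```

The explosion requires a rule that exceeds the improver's own level at every step; any rule that falls short at some point produces a plateau instead.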



"If the human brain were so simple that we could understand it, we would be so simple that we couldn't," the I.B.M. research engineer Emerson Pugh is quoted as saying. This assertion feels intuitively true, but we can also point to a specific illustration to back it up: the microscopic roundworm C. elegans. It is probably one of the most thoroughly studied organisms in history; scientists have sequenced its genome and mapped every connection between its three hundred and two neurons, as well as the lineage of cell divisions that gives rise to each of the nine hundred and fifty-nine somatic cells in its body. Yet they still don't fully understand its behaviour. The average human brain has eighty-six billion neurons, and we will probably need most of them to understand what is going on in C. elegans's three hundred and two; this ratio doesn't bode well for our chances of understanding what is going on inside our own heads.


Some proponents of an intelligence explosion claim that a system's intelligence can be increased without fully understanding how it works. The implication is that intelligent entities, such as the human brain or an artificial-intelligence programme, have one or more hidden "intelligence knobs," and that we only need to be smart enough to find them. I'm not sure we have many good candidates for these knobs at the moment, so it's hard to evaluate the plausibility of this idea. Perhaps the most commonly suggested way to "turn up" an artificial intelligence is to increase the speed of the hardware it runs on. Some claim that once we build software as intelligent as a human being, running it on a faster computer will effectively create superintelligence. Would that lead to an intelligence explosion?


Let's imagine we have an artificial-intelligence programme that is exactly as intelligent and capable as the average human computer programmer. Now suppose we increase its computer's speed a hundredfold and run the programme for a year. That would be the equivalent of confining an average human being in a room for a hundred years with nothing to do but work on an assigned programming task. Many people would consider this a torturous prison sentence, but for the sake of argument let's say that the A.I. doesn't feel the same way. We'll assume that the A.I. has all the good properties of a human being but none of the traits that would be obstacles in this scenario, such as a desire for novelty or the wish to make one's own decisions. (I'm not sure this is a reasonable assumption, but that's a discussion for another time.)


So now we have a human-equivalent A.I. spending a hundred person-years on a single task. What kind of results can we expect from it? Suppose this A.I. can write and debug a thousand lines of code per day, which is a prodigious rate of productivity. At that pace, a century would be just shy of enough time for it to single-handedly write Windows XP, which supposedly consisted of forty-five million lines of code. That's an impressive accomplishment, but it's a long way from being able to write an A.I. smarter than itself. Creating a smarter A.I. requires more than the ability to write good code; it requires a major breakthrough in A.I. research, and that's not something an average computer programmer is guaranteed to achieve, no matter how much time you give them.
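As a quick sanity check on those figures, using only the numbers already given above, a hundred years at a thousand lines a day comes to about thirty-six and a half million lines, roughly four-fifths of the cited size of Windows XP:

```python
lines_per_day = 1_000
lines_total = lines_per_day * 365 * 100   # a hundred person-years of work
windows_xp = 45_000_000                   # the figure cited above for Windows XP
print(lines_total, round(lines_total / windows_xp, 2))  # 36500000 0.81
```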




One of the programmes commonly used in software development is a compiler. The compiler converts source code written in a language such as C into an executable programme, a file of machine code that the computer can run directly. Suppose you're unhappy with the C compiler you currently have; call it CompilerZero. CompilerZero is slow at processing source code, and the programmes it produces run slowly, too. You believe you can do better, so you write a new C compiler that generates more efficient machine code; a compiler of this kind is known as an optimizing compiler.


Your optimizing compiler is written in C, so you can use CompilerZero to translate its source code into an executable programme; call the result CompilerOne. Thanks to your ingenuity, CompilerOne now generates programmes that run quickly. But because CompilerOne was itself produced by CompilerZero, it still takes a long time to run. What can you do?


You can use CompilerOne to compile itself. Feed CompilerOne its own source code, and it converts that code into a new executable made of more efficient machine code; call this CompilerTwo. CompilerTwo not only generates fast-running programmes, it also runs fast itself. Congratulations: you have made a computer programme improve itself, and that is something to be proud of.


But that's as far as it goes. Feed the same source code to CompilerTwo, and all it produces is another copy of CompilerTwo. It cannot create a CompilerThree and kick off a chain of ever-better compilers. If you want a compiler that generates programmes that run at blazing speed, you will have to look elsewhere.
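Here is a minimal sketch of that story, written in Python rather than C and using my own toy model rather than anything from the essay: a compiler is represented by two attributes, how fast it runs and how fast the code it emits runs. Compiling a compiler's source with an existing compiler yields a binary whose own speed is set by the old compiler's output, and whose code generation is set by the source's design.

```python
# Toy model: a "compiler" is just a dict recording how fast it runs and how
# fast the code it emits runs. All names here are illustrative.

def compile_compiler(source_codegen: str, using: dict) -> dict:
    return {
        "runs": using["emits"],    # the new binary is only as fast as the old compiler's output
        "emits": source_codegen,   # but it generates code as well as its source's design allows
    }

compiler_zero = {"runs": "slow", "emits": "slow"}
optimizing_source = "fast"   # the code-generation quality designed into your new compiler

compiler_one = compile_compiler(optimizing_source, using=compiler_zero)
compiler_two = compile_compiler(optimizing_source, using=compiler_one)
compiler_three = compile_compiler(optimizing_source, using=compiler_two)

print(compiler_one)    # {'runs': 'slow', 'emits': 'fast'}
print(compiler_two)    # {'runs': 'fast', 'emits': 'fast'}
print(compiler_three)  # {'runs': 'fast', 'emits': 'fast'} -- identical to CompilerTwo: a fixed point
```

Recompiling with CompilerTwo just reproduces CompilerTwo, because any further improvement would have to come from a better design in the source, not from the act of recompilation.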



The process of making a compiler compile itself is known as bootstrapping, and it has been employed since the nineteen-sixties. The gains offered by compiler optimization have grown over the decades, so the difference between a CompilerZero and a CompilerTwo can now be much larger than it once was, but all of that progress has come from human programmers improving compilers rather than from compilers improving themselves. Compilers are not artificial-intelligence systems, but they offer a useful precedent for thinking about the idea of an intelligence explosion, because they are computer programmes that generate other computer programmes, and because optimization is something they engage in as they do so.


The more you know about a programme's intended use, the better you can optimize its code. Human programmers sometimes hand-optimize sections of a programme, meaning that they specify the machine instructions directly; because they know more about what the programme is supposed to do than the compiler does, they can write machine code that is more efficient than what the compiler would generate. Compilers do their best optimization for what are known as domain-specific languages, which are designed for writing narrow categories of programmes. There is a programming language called Halide, for example, that is designed exclusively for writing image-processing programmes. Because the intended use of these programmes is so specific, a Halide compiler can generate code as good as or better than what a human programmer can write. But a Halide compiler cannot compile itself, because an image-processing language lacks the features needed to write a compiler. For that you need a general-purpose language, and general-purpose compilers have trouble matching human programmers when it comes to generating machine code.
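A small Python sketch can illustrate the general point about specialization, using NumPy rather than Halide (the function names and the three-tap blur are my own illustrative choices, not anything from Halide itself). The first version assumes nothing beyond "a grid of numbers"; the second exploits the extra knowledge that the input is a rectangular numeric array, which lets the whole inner loop be replaced with vectorized slicing.

```python
import numpy as np

def blur_general(img):
    """Horizontal 3-tap box blur over any list-of-lists of numbers.
    Knows nothing else about the input, so it walks it element by element."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            out[y][x] = (img[y][x - 1] + img[y][x] + img[y][x + 1]) / 3.0
    return out

def blur_specialized(img: np.ndarray) -> np.ndarray:
    """Same blur, but assumes a 2-D float array, so all interior columns
    can be processed at once with slicing instead of explicit loops."""
    out = np.zeros_like(img)
    out[:, 1:-1] = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return out
```

The narrower assumption is what makes the faster version possible; the same kind of knowledge about intended use is what a domain-specific compiler exploits, and what a general-purpose compiler, by definition, cannot count on.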


A general-purpose compiler has to be able to compile anything. Feed it the source code for a word processor and it produces a word processor; feed it the source code for an MP3 player and it produces an MP3 player; and so on. If a programmer invents a new type of programme tomorrow, something as foreign to us today as the first Web browser was in 1990, she will feed the source code into a general-purpose compiler, and it will produce the new programme. Compilers are not intelligent in any meaningful sense, but they share one trait with intelligent humans: they can handle inputs they have never seen before.


This stands in marked contrast to the way A.I. systems are actually built. Consider an A.I. programme that takes nothing but chess moves as input and is required to spit out nothing but chess moves in response. Its job is extremely specific, and knowing that helps you optimize its performance. The same is true of an A.I. programme that is given nothing but "Jeopardy!" clues and is required to spit out nothing but questions as answers. Some A.I. systems have been built to play a handful of different games, but the expected range of inputs and outputs is still highly restricted. Now suppose you're writing an A.I. programme and you have no advance knowledge of what kinds of inputs it will encounter or what form a correct response will take. In that situation it's hard to optimize performance, because you have no idea what you're optimizing for.



How much can you optimize for generality? To what extent can you simultaneously configure a system for every conceivable scenario, including ones you've never encountered before? Presumably some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the amount of optimization that can be achieved. That is a very strong claim. If someone asserts that unlimited optimization for generality is possible, I'd like to see arguments for it beyond citing examples of optimization for specialized tasks.


None of this, of course, proves that an intelligence explosion is impossible. Indeed, I doubt such a thing could ever be proved, because questions like this are unlikely to fall within the purview of mathematical proof. The point isn't to demonstrate that something is impossible; it's to ask what constitutes a good reason for belief. The critics of Anselm's ontological argument are not trying to show that there is no God; they are arguing that Anselm's argument does not provide a good reason to believe in one. Similarly, writing down the definition of an "ultra-intelligent machine" is not, by itself, a good reason to believe we can build one.


There is one sense in which I think recursive self-improvement is a meaningful notion, and that is when we consider the capabilities of human civilization as a whole, which is not the same thing as human intelligence. There's no reason to think that humans ten thousand years ago were any less intelligent than humans today; they had exactly the same capacity to learn as we do. But nowadays we have ten thousand years of technological advances to draw on, and those advances aren't just physical; they are cognitive as well.


Consider the difference between Arabic and Roman numerals. A positional notation system, such as the one provided by Arabic numerals, makes multiplication and division easier; if you're competing in a multiplication contest, Arabic numerals give you an advantage. But I wouldn't say that someone using Arabic numerals is smarter than someone using Roman numerals. By analogy, if you're tightening a bolt and you have a wrench, you'll do better than someone with a pair of pliers, but it wouldn't be fair to say you're stronger. You simply have a tool that gives you a mechanical advantage; only when your competitor has the same tool can we meaningfully compare who is stronger. Cognitive tools such as Arabic numerals offer a similar advantage; if we want to compare people's intelligence, they have to be using the same tools.



Simple tools make it possible to build more complex ones, and this is just as true for cognitive tools as for physical ones. Throughout history, humanity has developed thousands of them, from double-entry bookkeeping to the Cartesian coordinate system. So, although we aren't any smarter than we used to be, we now have a much wider range of cognitive tools at our disposal, and those tools allow us to invent still more powerful ones.



This is how recursive self-improvement takes place: not at the level of individuals but at the level of human civilization as a whole. I wouldn't say that Isaac Newton made himself more intelligent by inventing calculus; he had to be exceptionally intelligent to invent it in the first place. Calculus allowed him to solve problems he couldn't solve before, but the biggest beneficiary of his invention was the rest of humanity. Those who came after Newton benefited from calculus in two ways: in the short term, they could solve problems they previously couldn't, and in the long term, they could build on Newton's work and devise new, even more powerful mathematical techniques.


This capacity of humans to build on one another's work is precisely why I don't believe that running a human-equivalent A.I. programme in isolation for a hundred years would produce major breakthroughs. An individual working in complete isolation can make a breakthrough but is unlikely to do so repeatedly; you're better off with a large number of people drawing inspiration from one another. They don't have to collaborate directly; any field of research simply does better when it has many people working in it.


Take, for example, the study of DNA. James Watson and Francis Crick published their paper on the structure of DNA in 1953 and remained active researchers for decades afterward, but none of the subsequent milestones in DNA research came from them. It was other people who figured out how to sequence DNA, and other people who invented the polymerase chain reaction, which made copying DNA cheap. This is not a knock on Watson and Crick. It just means that even if you had A.I. versions of them running at a hundred times normal speed, the results wouldn't match what we got from having molecular biologists around the world studying DNA. Innovation doesn't happen in isolation; scientists draw on the work of other scientists.



The pace of progress is accelerating, and it will continue to accelerate even without a computer that can design its own successor. Some might call this phenomenon an intelligence explosion, but I think it's more accurate to call it a technological explosion, one that includes cognitive technologies alongside physical ones. Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can't bring about a technological explosion by themselves. That takes people, and the more people the better. Giving better hardware and software to one smart individual is helpful, but the real gains come when everyone has them. Our current technological explosion is the result of billions of people using those cognitive tools.


Could A.I. systems take the place of those people, so that the explosion happens in the digital realm faster than it does in the physical one? Perhaps, but consider what that would require. The strategy most likely to succeed would be to essentially duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. That would probably be too expensive, so the question becomes how small a subset of human civilization could generate most of the innovation you're after. One way to think about this is to ask how many people it takes to staff a project like the Manhattan Project. Note that this is not the same as asking how many scientists worked on the Manhattan Project; the relevant question is how large a population you need to draw from in order to recruit enough qualified scientists for such an undertaking.


If only one person in a thousand earns a Ph.D. in physics, you might need to create a thousand human-equivalent A.I.s in order to get one Ph.D.-in-physics-equivalent A.I. In 1942, the Manhattan Project recruited its scientists from the combined populations of the United States and Europe. Nowadays, research laboratories don't restrict their recruiting to two continents, because assembling the best possible team means drawing from the largest available pool of talent. If the goal is to produce as much innovation as the entire human race, the initial figure of eight billion might not be reducible by much.



We're still a long way from creating a single human-equivalent A.I., let alone billions of them. For the foreseeable future, the ongoing technological explosion will be driven by humans using previously invented tools to invent new ones; there will be no "last invention that man need ever make." In one sense this is reassuring, because, contrary to Good's claim, human intelligence will never be "left far behind." But, just as we shouldn't fear that a superhumanly intelligent A.I. will destroy humanity, we shouldn't expect one to save us in spite of ourselves. For better or worse, the fate of our species will depend on human decision-making.



This article is based on "Why Computers Won't Make Themselves Smarter" by Ted Chiang.


Ted Chiang is an award-winning science-fiction author. The 2016 film "Arrival" was adapted from the title story of his first collection, "Stories of Your Life and Others." He lives in Bellevue, Washington, where he works as a freelance technical writer.


 

To support their work, Newsmusk allows writers to use primary sources. White papers, government data, original reporting, and interviews with industry experts are just a few examples. Where relevant, we also cite original research from other respected publishers.

Source: The New Yorker, "Why Computers Won’t Make Themselves Smarter" by Ted Chiang
