https://www.garynorth.com/public/12901print.cfm

Is the Terminator Coming? Reflections on Moore's Law

Gary North - September 15, 2014

Remnant Review

Moore's law: the number of transistors per square inch on a chip doubles every [??] months. The number of months gets shorter, decade by decade. The pace has accelerated since 1965, when Moore made his observation. It may be as low as 12 today.
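The compounding implied by any assumed doubling period can be sketched in a few lines of Python. The 1,000-transistor starting point and the 12- and 24-month periods below are illustrative assumptions, not historical data:

```python
# Transistor count after a given number of years, for an assumed doubling
# period in months. Purely illustrative figures, not actual chip data.
def transistors(initial: float, doubling_months: float, years: float) -> float:
    """Compound growth: the count doubles once per doubling period."""
    doublings = years * 12 / doubling_months
    return initial * 2 ** doublings

# Starting from 1,000 transistors, after 10 years:
print(transistors(1_000, 24, 10))  # 24-month doubling: 5 doublings, 32x
print(transistors(1_000, 12, 10))  # 12-month doubling: 10 doublings, 1,024x
```

Halving the doubling period does not double the outcome; it squares the growth factor over the same span, which is why the exact number of months matters so much.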

The cost of information keeps dropping, decade by decade. This has been continual since at least the U.S. census of 1890 -- the first punch-card census.

"When the price drops, more is demanded": the law of demand. It is the foundation of economic science.

A constant percentage rate of growth compounds into an exponential curve. As I described in 1970, continuity produces discontinuity.

As I also argued in 1970, every exponential upward curve has always slowed, then stopped. It has become S-shaped. It runs out of resources. This is the law of diminishing returns. Economists have declared this for almost two centuries. But the West has had compound growth for over two centuries. The curve has not stopped. It has extended to the whole world, as free markets have extended through price competition. Liberty is getting less expensive. More of it is demanded. Price competition works. This is a very good thing.

Then there is the one irreplaceable resource: time. Time is an arrow. It does not run backward. The second law of thermodynamics is a law. Things run down. They run out. Above all, time runs out. The world is running down.

This raises the ultimate question of our era: Is Moore's law really a law, or is it an observation of a temporary phenomenon? Moore thinks it is the latter.

Some observers don't.

SUPERINTELLIGENCE

Reason published a favorable review of Nick Bostrom's book, Superintelligence: Paths, Dangers, Strategies. The book was published by Oxford University Press.

Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of the Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.

If accurate, this summary is by far the most apocalyptic I have seen in a book published by a major university press. I think it is an accurate summary. But is the assessment accurate? I don't think so, but I base this on theology, not technology.

Here is my position. Knowledge is always analogical. Data can be digital. But knowledge is a matter of judgment, and judgment is analogical. A machine cannot exercise judgment. It can only respond to data structured by mathematical equations. Executing an algorithm is not the same as exercising judgment.

If you kick a machine, you are imputing humanity to it. It is not alive.

A machine is neither gracious nor malevolent. It does not care.

The world is not impersonal. A machine is. It is therefore not responsible. Men are.

A machine has no soul to damn and no butt to kick.

Intelligence is a matter of judgment. It is not a matter of digital data and algorithms.

A machine is a tool. The problem is, evil people can get their hands and minds on powerful tools.

This is why I do not worry about this scenario.

Since the invention of the electronic computer in the mid-20th century, theorists have speculated about how to make a machine as intelligent as a human being. In 1950, for example, the computing pioneer Alan Turing suggested creating a machine simulating a child's mind that could be educated to adult-level intelligence. In 1965, the mathematician I.J. Good observed that technology arises from the application of intelligence. When intelligence applies technology to improving intelligence, he argued, the result would be a positive feedback loop--an intelligence explosion--in which self-improving intelligence bootstraps its way to superintelligence. He concluded that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." How to maintain that control is the issue Bostrom tackles.

About 10 percent of AI researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Fifty percent think it will be developed by the middle of this century, and nearly all think it will be accomplished by century's end. Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At computer processing speeds a million-fold faster than human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an AI could do a year's worth of thinking every 31 seconds.
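The arithmetic behind Yudkowsky's figure can be checked directly. The million-fold speedup is the review's number, not mine; the sketch below only verifies that it implies roughly 31 seconds per subjective year:

```python
# Sanity check of the "a year's worth of thinking every 31 seconds" claim:
# a year is about 31.5 million seconds, so a million-fold speedup would
# compress one subjective year into roughly 31.5 wall-clock seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600 seconds
SPEEDUP = 1_000_000                    # the million-fold figure quoted above

wall_clock = SECONDS_PER_YEAR / SPEEDUP
print(round(wall_clock, 1))  # 31.6
```

So the quoted "every 31 seconds" is simply seconds-per-year divided by the assumed speedup; the claim stands or falls entirely on the million-fold premise.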

Is this science fiction? The experts say it isn't. It is merely a scenario that is an extension of existing trends, extrapolated out less than a century.

The key to understanding this scenario is the definition of intelligence. The experts see intelligence as a mixture of digital data and algorithms. It is decision making devoid of ethics. It is decision making without responsibility.

That is science fiction.

CONTINUITY AND DISCONTINUITY

Here is my concern. I can imagine a series of events that could reverse this trend. They all involve a breakdown in the international division of labor. What I cannot imagine is an event that could reverse this trend without itself being apocalyptic. I cannot see a reversal of this trend that is based on a rival trend. The continuity indicates super-intelligence. There is no continuity that indicates a cessation of the existing continuity. The magnitude of a discontinuity that might reverse this is so immense from a social standpoint that it represents a true collapse of the social order. In other words, the existing trend is so much a part of the existing social order that anything that could reverse the trend threatens the existing social order.

The longer the trend goes on, the greater the magnitude of the discontinuity that would be sufficient to reverse the trend. Furthermore, the longer the continuity goes on, the more dependent the entire social order becomes on the continuation of the trend.

Put a different way, damned if we do, and damned if we don't.

The basic trend is the trend towards super-intelligence, we are told. But is it? Are we facing machines that evolve into intelligent creatures, then super-intelligent creatures, and then masters? Is it going to be the terminators vs. humanity? Is a war coming: men vs. machines? That makes for great science fiction. It makes for questionable ethics.

All of this has to do with the division of labor. It has to do with the interdependence of the economic system. It is what Hayek wrote about back in 1945. The free market system is bringing information into the marketplace. The possessors of this knowledge are in search of profit: exchanging one set of conditions for a better set, at an above-average rate of return. Hayek was correct: there is no government agency that can assemble the intellectual firepower to match the knowledge that is available on a decentralized basis in a free society. This is a defense of liberty. I thoroughly accept this defense of liberty.

DECENTRALIZATION AND OPPORTUNITY

This leads me to a consideration of what decentralization is doing with respect to making sophisticated information available to millions of people who would never have been able to access it before. This is what the Internet is doing.

There is one catastrophe that could hit us tomorrow, and which really would represent a collapse of civilization. It has been possible for over 35 years. That would be a Russian first strike using ICBMs with MIRVed nuclear warheads. That is technologically possible. It is also statistically unlikely.

The Soviet Union had an official ideology of worldwide conquest and empire. It was officially Marxist. Russia does not have that impulse any longer. Russian nationalism is not a philosophy of worldwide empire. Furthermore, the decline of Russian birthrates, which has led to a decline of Russia's population, indicates that Russian nationalism is less aggressive than it was a century ago. The Czar's empire in 1900 was a lot larger than the Russian Federation is today.

Russian nationalism always restricted Communist imperialism. Stalin was a Georgian; Trotsky was an atheist Jew. Stalin was far more nationalistic than Trotsky was. Stalin's vision of socialism in one country contradicted Trotsky's vision of worldwide revolution. Stalin had him assassinated. That was Russian nationalism's response to Communist internationalism. The assassin even used an axe, which was symbolic of Russian nationalism. The axe has been a symbol of Russian nationalism for 1,000 years. Putting it symbolically, the icon and the axe overcame the hammer and the sickle.

The odds against an ICBM first strike on the U.S. by the Russian government are astronomical. Russia is dependent economically on exports to the West. It exports gold, oil, and natural gas. This export market would collapse overnight if the United States economy ever went down because of a Russian first strike. Furthermore, there would be a retaliatory strike against Russia. Here is my point. The very centralized nature of nuclear weaponry is a major restriction on nuclear war. The victims would know who launched the missiles. There would be retaliation. That is why we have never had nuclear war.

Now let's look at the free market. The free market is dramatically lowering the cost of producing biological weapons. This cost will continue to decline. That is the effect of Moore's law. This is the effect of the discovery of the genome. We know that there will be tremendous innovations in the field of biology.

Here is the problem. As the price declines, more will be demanded. As the price declines, it will take less money than ever before to produce an apocalyptic biological weapon. At some point, meaning at some price, one genius lunatic will be able to produce the weapon. That is the logic of Moore's law. That is the logic of price competition. That is the logic of free-market capitalism.

There is no way to know who could do this. There is no way to retaliate against anyone who would do this. A government bureaucrat who would launch such an attack on an enemy nation knows that there will be retaliation from the survivors of the attack. But a genius lunatic with the equivalent of what would be a home-brew biological weapons laboratory may be able to avoid detection, and thereby avoid retaliation.

This is the dark side of decentralization. Decentralization is liberating, but it is not messianic. Sin is not overcome by price competition. Decentralization and innovation lower the cost of sin, along with the cost of everything else. It makes sin more price competitive. The law of demand still applies: as the price falls, more is demanded. We must not confuse efficiency with regeneration.

A literary example imported from Islamic mythology: a genie that gets out of a bottle. Some team of researchers accidentally lets loose a killer organism.

Another is a government-run laboratory for biological weapons. This could produce a true weapon of mass destruction. It might get loose, either accidentally or deliberately.

If nanotechnology becomes available, then we are really talking about genies out of bottles. Nanotechnology will enable the little bugs to reproduce and combine. We may be 50 years from this, but I doubt that we are 100 years from this. Think of the old sci-fi movie, The Blob.

These are not science fiction scenarios. These are extensions of existing trends, plus a discontinuity. A genie is always a source of discontinuities: three wishes.

It would take only one. That lowers the odds against one.

MOORE'S LAW AND NIHILISM

So, we are told that we are facing three implications of Moore's law. There is an implication with respect to the programs themselves: super-intelligence. This is not true. But it is an implication of a presupposition that knowledge is possible apart from personal responsibility. Second, there are implications for the technology with respect to small-scale biological weapons of mass destruction. These are real threats. That is because knowledge is personal, and personal means ethical. There are evil people out there. Third, there is the possibility of legitimate scientific research that goes awry. This also is a threat.

The problem, once again, is this. The trends are clear. The trends indicate the continuing increase of computerized applications, and dramatically falling prices of these applications. That which today can be produced only by very expensive, government-funded teams of researchers will be possible for small teams of revolutionary nihilists before this century is over.

Evil people will be able to buy powerful machines.

There really are revolutionary nihilists. There really are people who believe that the human race is a cancer on mother nature. They really do believe that the ethically right thing to do would be to reduce the world's population to about 500 million people. This is a religious impulse. This religion is not found in the general population. It is found among highly educated, very intelligent people.

The old protection had been this: the number of people who could pull off something like this was so limited, and the expense of pulling it off was so immense, that the statistical probability of anything like this happening was close to infinitesimal. But as the price of the technology goes down, and the power of the technology goes up, the ability of smaller and smaller groups of revolutionary nihilists to disrupt modern society becomes ever greater.

We are back to the dilemma: the only thing, scientifically speaking, that appears to be able to stop these trends is a discontinuous event that would threaten the collapse of the social order. There would be a breakdown of the division of labor.

But there is more to history than trends. There are discontinuous events that re-shape the trends, diverting them.

And there is always old faithful: the law of diminishing returns. I prefer that one to monsters from the id.

MONSTERS FROM THE ID

It's not just about machines. It's about men.

Bostrom charts various pathways toward achieving superintelligence. Two, discussed briefly, involve the enhancement of human intelligence. In one, stem cells derived from embryos are turned into sperm and eggs, which are combined again to produce successive generations of embryos, and so forth, with the idea of eventually generating people with an average IQ of around 300.

Science fiction.

The other approach involves brain/computer interfaces in which human intelligence is augmented by machine intelligence. Bostrom more or less dismisses both the eugenic and cyborgization pathways as being too clunky and too limited, although he acknowledges that making people smarter either way could help to speed up the process of developing true superintelligence in machines.

Science fiction.

The issue facing us now is not how to integrate man and machine. That is science fiction's scenario. Rather, the problem is putting ethical limits on those who use the machines. This means sanctions.

If we trust the state, meaning tenured bureaucrats, to impose these sanctions, we put our faith in a false god.

Then what institutional arrangement can impose sanctions: positive and negative? One that relies on decentralized knowledge. Nothing else is capable of monitoring and shaping the trends.

Here is the problem.

Bostrom argues that it is important to figure out how to control an AI before turning it on, because it will resist attempts to change its final goals once it begins operating. In that case, we'll get only one chance to give the AI the right values and aims. Broadly speaking, Bostrom looks at two ways developers might try to protect humanity from a malevolent superintelligence: capability control methods and motivation selection.

Science fiction. Also, academic jargon. Gobbledygook. Horsefeathers. Somewhere, over the rainbow, way up high.

"In the meantime, Bostrom thinks it safer if research on implementing superintelligent AI advances slowly." What does it matter what he thinks? He doesn't have any power to enforce this. He is not God. No national government can do it. If any national government could do it, it could not do it outside its own borders.

And he urges researchers and their backers to commit to the common good principle: "Superintelligence should be developed only for the benefit of all humanity and in the service of widely shared ethical ideals." A nice sentiment, but given current international and commercial rivalries, the universal adoption of this principle seems unlikely.

Very unlikely.

CONCLUSION

Which leaves us where?

I wish I knew. If I were smarter, I might know.

No one on earth is smart enough to know.

But I do not have to know. I have faith in two things: God and market processes. To those who reject both, I can only say this: the trends are operating. If you trust the government to shape them through knowledge available to government committees, you have only one hope: some great discontinuity.

Meanwhile, Moore's law operates . . . for the present. It is the great constant . . . for the present. It is the great trend . . . for the present.

Perhaps it can be overturned by an unexpected discontinuity. I don't want to think about how discontinuous that event would have to be. Neither do you. I much prefer the law of diminishing returns.

The trend is endogenous. It is self-generated in the economy. The free market economy is huge. The participants are many. The next innovation is inherently unpredictable. Any one of them could imagine it. As the price falls, he may be able to implement it.

Previous trends have run out of resources. The law of diminishing returns has prevailed. But economic growth has continued. Creativity and liberty have kept the process growing. Few people want this to stop. Zero economic growth has not been the preference of many people.

Moore's law exhibits a higher rate of sustained growth, over a longer period of time -- 50 years -- than anything else in history. It looks permanent. But looks can be deceiving.

No state can stop this trend, other than by unleashing a conflagration. This is a free market trend. This process seems to be heading toward a great discontinuity: something new under the sun. It is now becoming exponential.

We are back to Kurzweil's article on the law of accelerating returns. We are always back to this article. He thinks the great discontinuity will be what he calls the singularity: a fusion of man and machine.

Science fiction.

Bostrom fears a related discontinuity: the evolution of machines into divinities. I do not. Machines will never be intelligent. Reducing waste by applied algorithms is not the same as intelligence. It is not the same as responsible decision making.

Computer programs are digital. They are not analogical. They are not made in the image of man, any more than man is made in the image of a machine. The world is not a giant clock, and man is not a tiny cog in it. Cogs are not responsible agents. Neither are machines. Cogs do not plan. Neither do machines. Cogs are not purposeful. Neither are machines.

For those theologians -- for this is what they are -- who think that men are cogs in a cosmic machine, who became purposeful by means of an unplanned singularity, the next unplanned singularity is seen as a new evolution. But who will be in charge? Mankind (meaning some men) or machines? They worry about this.

I do not. But I am concerned about a handful of people who may use some technology irresponsibly. A devastating genie may get out of some price-competitive bottle.

Here is my point. The problem is ethics, not digits. No discontinuity will change this . . . other than that final limit on every exponential curve: the irreplaceable scarcity of time.

© GaryNorth.com, Inc., 2005-2021. All Rights Reserved. Reproduction without permission prohibited.