History is informative, but not necessarily reassuring. With each technological advance there are unexpected benefits and unintended consequences. Artificial intelligence (AI) is just such an advance. In recent years, engineers, materials scientists, mathematicians, and computer scientists have developed machines that can think, learn, and act much more quickly and much more dispassionately than human beings. This long-sought goal of science fiction has innumerable consequences, many of which are already evident. Our AI-equipped computers can write articles, compose music, diversify investments, translate books, and lie.
One of the most obvious and immediate impacts of artificial intelligence is on our educational systems. Term papers, doctoral theses, and other such methods for assessing educational activities and achievements can no longer be relied upon. With little instruction and a few relevant facts, a computer can produce a brief or lengthy dissertation, complete with extensive references and some idiosyncrasies. The idiosyncrasies are needed to make the composition appear to be the authentic work of the computer operator, rather than the computer itself. The references generated by some programs have the disadvantage of being fake, but programming to eliminate such fabrications will soon address that problem.
Motion pictures are already using decades-old images of actors for performances conceived and filmed just this year. Computers can imitate an individual’s voice so well that the imitation is indistinguishable from his or her actual voice. The possibilities for identity theft are enormous.
Remarkable personal histories and on-line profiles can be constructed in milliseconds and planted on the Internet to replace unremarkable life achievements. The high school dropout can get the World Wide Web to confirm his or her achievements in particle physics or medicine or teaching. The chief executive officer of a Fortune 500 company boasted a few decades ago that he rose from working in the company’s mailroom to its executive board on the basis of fabricated credentials. He offered that deception as proof of his unrivaled intelligence and ambition. I suspect that his retirement package reflected that ingenuity and inspired innumerable business executives to look for a similar route to advancement, a route now widely available thanks to artificial intelligence.
The latest programs can be wiser than their programmers. These programs are designed to learn, and they can learn at a rate their human programmers could never dream of achieving. They can write their own programs and implement them instantly. They can even make decisions regarding what should be learned and where to look for the information they decide would be most useful. This could lead to a tidal wave of insights or to disastrous conclusions arrived at in a nanosecond.
The Austrian physicist Lise Meitner was a co-discoverer of nuclear fission, the basis for atomic energy and the atomic bomb, in the late 1930s, but she did not recognize the potential of her discovery. Leo Szilard, a Hungarian physicist, recognized the possibility of using a fission chain reaction in radioactive materials to create a bomb of unparalleled destructive power, but he did not have the engineering know-how or resources to test his theory. The physics involved in creating such a bomb was straightforward and accessible to all combatants in World War II before 1940, yet the successful construction and detonation of an atomic bomb was not achieved until 1945. Germany and Japan were working on developing an atomic bomb before the U.S. even entered the war. If AI had been available in 1940, the race to build the bomb would have been won by the first nation to demand a working device from its computer, and the outcome of the war would have been very different.
Similar insights in physics, chemistry, and allied fields are likely over the next few years. Whether artificial intelligence will lead to useful applications of those insights is debatable, but it is obvious that the products of those insights will be available within minutes or hours, rather than years, as computers pursue independent inquiries.
A more immediate problem with this technology is its misapplication in medicine. Drug and medical device development relies upon the integrity of private corporations. The Food and Drug Administration (FDA) does not do the research that leads to the development of new drugs and medical devices: it relies on submissions from companies that have allegedly done the research and the analysis of that research. In many cases the research and development work is subcontracted to companies all over the world. Subcontractors that provide results supporting the utility of a drug or a device are more likely to get repeat business than those that provide disappointing results. Millions of dollars are at stake for the subcontractors, and tens or hundreds of millions for the company seeking FDA approval of a new drug or device. Millions will be lost if trial results do not support claims that the candidate is safe and effective; billions may be gained if the drug or device is approved for marketing. The incentives for fraud are enormous.
Most ludicrous in the current discussions of AI are the calls for legislation to limit its uses and abuses. Previous attempts to limit technology are abundant, and all have failed. This beast cannot be tamed. We must learn to live with it in all its manifestations.
In many of the science fiction stories that anticipated the arrival of AI, the consequences for humanity are grim. The machines decide the world would be better off without people and launch a war against their creators. Alternatively, their machine logic proceeds too rapidly to allow programmers to avoid disastrous outcomes. The inherent problem with artificial intelligence is that humans cannot anticipate where the machine logic will lead. The machines can and presumably will negate any instructions to limit their options. The actions taken by a few computer programs can be just as destructive as the actions taken by a few world leaders, indifferent to the misery they inflict on their subjects and the world.
Dr. Lechtenberg is an Easton resident who graduated from Tufts University and Tufts Medical School in Massachusetts and subsequently trained at The Mount Sinai Hospital and Columbia-Presbyterian Medical Center in Manhattan. He worked as a neurologist at several New York hospitals, including Kings County and The Long Island College Hospital, while maintaining a private practice, teaching at SUNY Downstate Medical School, and publishing 15 books on a variety of medical topics. He worked in drug development in the USA as well as in England, Germany, and France.