A document from the CQ Researcher archives:

Report Outline
State of the Art Today
Development of Research
The Japanese Challenge
Special Focus

State of the Art Today

Moving to New Level of Computerization

About a thousand scientists are at work in laboratories across the country on a project that has been compared to the invention of the printing press, the development of the steam engine, the emergence of human speech, the control of fire and the agricultural revolution. The potentially monumental project is a multifaceted effort to build high-technology computers that are programmed with artificial intelligence—computers that can understand and emulate human speech, perform physical functions and make reasoned judgments.

We are still many decades away from having a computerized society in which thinking machines will be in routine use in virtually everyone's home and office, doing everything from cleaning the house to forming instantaneous strategies for waging nuclear war. Nevertheless, artificial intelligence research has been going on since the 1950s, and in recent years some computers have been equipped with modest powers that suggest reasoning. These are “expert systems” that have been perfected and put to use in business and industry. “We're a long way from [having] talking robots that walk around like Artoo Detoo [in the 1977 movie Star Wars],” said Lou Robinson, editor of the newsletter Artificial Intelligence Report. “But AI is coming out of the laboratory and being put into commercial uses.”

Artificial intelligence (AI) systems differ from conventional computer systems conceptually and functionally. Present-day computers are designed to do one basic job: process data. To do so, they add, subtract, divide, multiply, move and compare numerical information in a serial fashion. To complete these tasks, conventional computers must be programmed with step-by-step instructions. AI systems can process information much more rapidly than conventional computers and can comprehend new types of programming languages that use symbols rather than numbers. This combination of rapid processing and the use of symbols such as words and phrases allows artificial intelligence machines to process many pieces of information simultaneously. In doing so, AI programs compare facts and rules to make deductive, reasoned responses. This “gives you a very sophisticated tool for decision support and simulating situations,” said Howard Jacobson, executive vice president of Jacobson Corp., a Newport Beach, Calif., high-technology consulting firm.

From the Laboratory to Commercial Use

Artificial intelligence is evolving into a big business. From virtually nothing three years ago, sales of various types of artificial intelligence software and hardware have grown into a $150-million-a-year enterprise today. There are forecasts that the marketplace will grow by about 50 percent a year for the next five years. All of the large computer companies, including IBM, Digital Equipment Corp., Texas Instruments and Hewlett-Packard Co., are moving into the field. And more than two dozen large U.S. corporations—companies such as ITT, General Electric, Schlumberger, Litton, and Hughes Aircraft—have in recent years set up their own AI research facilities. The Department of Defense is heavily involved in AI research.
The Pentagon's Defense Advanced Research Projects Agency is looking into many military uses for artificial intelligence, including remote-control tanks, “smart” land mines that can seek out targets, robotic surveillance systems and elements of the space-based Strategic Defense Initiative—President Reagan's “Star Wars” plan. Research is going on elsewhere in the world, including Britain, France and West Germany. Three years ago Japan launched a billion-dollar industry-government cooperative effort to produce the next generation of AI computers by 1992.

The large-scale Pentagon effort, Japan's move into the field and the growth of the commercial expert system software business have combined to put artificial intelligence on the computer map. “Artificial Intelligence is Here,” announced the headline over Business Week magazine's cover story on July 9, 1984. The sub-headline read: “Computers that Mimic Human Reasoning are Already at Work.”

But there is a marked difference of opinion in the relatively small AI community about the current technology. Some of the top AI researchers, including Marvin Minsky of the Massachusetts Institute of Technology and Roger Schank of Yale University, say that expert systems only scratch the surface of the capability of artificial intelligence, and that scientists are a very long way from developing computers that can exhibit anything approximating human intelligence. “The term ‘expert system’ basically means nothing these days,” Schank said. “Any kind of flashy software is called expert systems.” Ramesh Patil, an assistant professor of computer science at MIT, cautioned that artificial intelligence “actually is a very slow-moving field. It's taken 30 years to have a first impact.” In commenting on rosy predictions for AI, John Seely Brown of the Xerox Palo Alto Research Center said: “You've got to separate the science from the hype.”

Expert Systems in Medicine and Finance

No one is claiming that expert systems can even remotely emulate human intelligence. “Most expert systems today don't come anywhere close to being intelligent like a person,” said Jim Reggia, assistant professor of computer science at the University of Maryland. “They store associated knowledge and process it in various ways. You put in certain features of a problem and the system comes out with a possible diagnosis. But it doesn't know what the symbols mean in the way a person does.”

Each expert system operates only in one very specific area (a “domain” in AI parlance), such as diagnosing lung or skin problems. Expert systems are programmed with what is known as a “knowledge base”—a series of general rules and facts typically in the form of “if-then” statements. The facts come from experts in the field and are programmed by a computer specialist called a knowledge engineer. A fact might be worded: “If a patient has a red rash, it may be caused by exposure to poison ivy.” A rule might say: “If it is a poison ivy reaction, apply calamine lotion and advise patient not to scratch area.” The system is equipped with an “inference engine” that can “answer” questions by using its knowledge base to draw conclusions. Expert systems use “natural language” (everyday grammatical English) rather than computerese to communicate, and can ask questions until they receive enough information to reach a conclusion. Expert systems do not exactly emulate human intelligence, but they are still considered a branch of artificial intelligence.

And they differ significantly from the software that runs conventional computers. “The thing the expert system has is the ability to take the logic in the head of the expert and translate it into a knowledge-based software system, rather than [a conventional computer's] data-based system,” explained Jack Collins, a vice president of Software Architecture & Engineering, an AI firm in Arlington, Va. “Knowledge-based systems can deal with uncertainty. They can, in effect, explain why they came out with the advice they came out with.”
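A minimal sketch may help make those mechanics concrete. The short Python program below was written for this discussion rather than taken from any commercial product: it stores if-then rules modeled on the poison-ivy example above, chains them together the way a rudimentary inference engine would, and records which conditions produced each conclusion so it can "explain why" in the crudest sense. The specific rules and facts are invented for illustration.

```python
# A knowledge base of if-then rules plus a tiny forward-chaining
# inference engine, in the spirit of the poison-ivy example above.
# The rules and facts are invented for illustration.

rules = [
    # (conditions that must all be known, conclusion to add)
    ({"patient has a red rash", "patient was recently outdoors"},
     "reaction may be poison ivy"),
    ({"reaction may be poison ivy"},
     "apply calamine lotion and advise patient not to scratch the area"),
]

def infer(initial_facts):
    """Apply the rules repeatedly until nothing new can be concluded,
    recording which conditions produced each conclusion (a crude 'why' trace)."""
    facts = set(initial_facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((conclusion, conditions))
                changed = True
    return facts, trace

_, trace = infer({"patient has a red rash", "patient was recently outdoors"})
for conclusion, because in trace:
    print("Concluded:", conclusion)
    print("  because:", ", ".join(sorted(because)))
```

A production system such as MYCIN is, of course, far larger, attaches measures of certainty to its rules and carries on a question-and-answer dialogue with the user, but the rule-chaining idea is the same in kind.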
Collins' company went into business four years ago when the expert systems field was in its commercial infancy. Since then, the business has expanded rapidly and venture capital has begun flowing into the field. “Last year, the number of companies entering the field tripled from 25 to 75,” Jacobson said. “It's similar to the personal computer market in 1977 when people didn't know what a personal computer was….” Computer industry analysts say that the expert systems marketplace will grow very rapidly. They expect spending on AI research and products to approach $2.5 billion by 1993, up from $200 million in 1984. About 50 expert systems are in commercial use today, and about 1,000 more are being developed. Today's expert systems focus on three main areas: medical science, finance and manufacturing.

Medical diagnostic expert systems are being used in hospitals throughout the country. Among the most sophisticated is INTERNIST/CADUCEUS, developed at the University of Pittsburgh by Jack Myers, a physician, and Harry Pople, a computer scientist. The system, which still is in the testing stage, is capable of diagnosing about 80 percent of all internal medical problems. Some have predicted that the system not only will help experienced internists diagnose illnesses, but one day will be commonly used in doctors' offices, at health clinics and even in space travel. Stanford University researchers in the mid-1970s developed an expert system called MYCIN that diagnoses blood and meningitis infections. Physicians and computer scientists at the California State University at San Francisco have developed PUFF, an expert system that interprets respiratory and pulmonary problems. That system, which is being used clinically, “is doing some very, very complex pulmonary diagnostics and is every bit as good as the best expert in that domain,” Lou Robinson said. “It is capable of reaching conclusions that not only identify complex diseases, but can also suggest potential medication. …And it's even running on a small Apple computer.”

Financial institutions, including banks, insurance companies and brokerage houses, have recently begun using expert systems that, among other things, process loan applications and analyze investment portfolios. These systems respond to questions such as “What should I do next?” with answers in the form of complete sentences. Even more advanced systems are being tested and some analysts believe that in the not-too-distant future expert systems will be in widespread use helping bank loan officers, stockbrokers and insurance underwriters perform hundreds of everyday tasks. Jacobson predicted that in the next year “10 percent of the major banks will be actively using these kinds of expert systems.”

In manufacturing, about a dozen expert systems are “out there working and saving people lots of money,” according to Randall Davis, professor of artificial intelligence at MIT. These include XCON, which was developed by researchers at Digital Equipment Corp. to help its technicians choose components for custom computer installations. Jacobson explained how XCON is used: “Let's say a customer wants to buy a range of computing capability. The [XCON] system creates an arrangement of computer facilities to be installed—it's called configuration planning. It's saving DEC [a computer manufacturer] millions of dollars in what would have been spent in solving the needs of its clients.”
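Configuration planning of this kind can be pictured, in a grossly simplified way, as rule-driven selection of compatible parts. The Python toy below is illustrative only: the cabinet sizes, board names and slot counts are invented for this example and bear no relation to DEC's actual product line or to the thousands of rules in the real XCON.

```python
# Toy configuration planner in the spirit of the "configuration
# planning" described above: given a customer's request, pick the
# smallest cabinet whose slot capacity satisfies it.
# All component names and numbers are invented for illustration.

CABINET_SLOTS = {"small": 4, "large": 8}               # slots per cabinet
SLOTS_PER_BOARD = {"memory": 1, "disk-controller": 2}  # slots each board uses

def configure(memory_boards, disk_controllers):
    """Return a parts list that fits the request, or raise an error."""
    needed = (memory_boards * SLOTS_PER_BOARD["memory"]
              + disk_controllers * SLOTS_PER_BOARD["disk-controller"])
    for cabinet, slots in sorted(CABINET_SLOTS.items(), key=lambda kv: kv[1]):
        if needed <= slots:
            return {"cabinet": cabinet,
                    "memory boards": memory_boards,
                    "disk controllers": disk_controllers,
                    "spare slots": slots - needed}
    raise ValueError("no cabinet can hold the requested configuration")

print(configure(memory_boards=2, disk_controllers=1))
# -> {'cabinet': 'small', 'memory boards': 2, 'disk controllers': 1, 'spare slots': 0}
```

The real program juggles far more component types and constraints, but the task is the same in kind: turn a statement of what the customer wants into a concrete, consistent parts list.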
General Electric Co. computer scientists invented an expert system called DELTA/CATS-1, which has been programmed with the knowledge of the company's most experienced specialist in diesel locomotive repair. The system is being used by GE maintenance workers in remote areas to fix mechanical problems. “The system is nearly as effective and certainly a lot cheaper than flying the company's senior (human) expert all over the country,” said M. Mitchell Waldrop in Science magazine.

Potential Application to Industrial Robotics

Hundreds of American corporations use industrial robots to do hazardous, difficult, repetitive and routine jobs. But there is one drawback with most of the robots in use today: They can do only what they are specifically programmed to do. Each movement's trajectory, location and coordinates must be programmed in sequence. If something is not exactly where a robot has been programmed to find it, the robot cannot perform its job. A robot, for example, cannot move a car body on an assembly line if any part is in the wrong position.

To deal with this limitation, computer scientists are working to integrate artificial intelligence software and robotics hardware. Eventually robots should be able to react to the unexpected by using video cameras to “see,” sensing devices to “feel,” and artificial intelligence software systems to “think” of ways to react. “AI will be the brains behind the robot,” Jacobson said. Some time in the future, Professor Patil said, “you may tell [a robot], ‘pick this thing up and move over and go somewhere else.’” Robots capable of following those types of orders are a long way from fruition. But there are some types of AI-fueled automated manufacturing techniques that may be closer at hand.

The National Bureau of Standards, for example, is building a prototype automated manufacturing facility that uses expert systems and AI programming languages to operate machine tools and robots and to perform other assembly-line functions. Another NBS project enables a worker to use a computer terminal to instruct a robot to manufacture a specific machine part. “You can say [to the machine], ‘I want a cylinder here, a groove here, a pocket there’ and it just generates it and cuts the metal for you,” said Jim Albus, who heads the industrial systems division of NBS's Center for Manufacturing Engineering. “It would make it possible to bring orders in the door in the morning and bring finished parts out the door in the evening.” Albus said that such systems could bring “productivity improvements of hundreds of percent,” as well as “change the whole concept of how you handle spare parts and how you schedule production.” Making parts in a short period of time eliminates the need for large inventories, he said. “If you could get rid of the inventory sitting around on the factory floor waiting to be worked on, you could cut the price by some large fraction like 50 percent,” Albus said.
Development of Research

Early Attempts at Building Logic Machines

Machines designed to lighten physical and mental labors have intrigued scientists, mathematicians and philosophers from the earliest days of civilization. The abacus, for example, was developed in ancient China, and also in Egypt, as a rudimentary counting device. The first calculating machines were invented in the 17th century by French scientist Blaise Pascal and separately by German philosopher-mathematician Gottfried Wilhelm Leibnitz. Leibnitz, a pioneer in the fields of symbolic logic and differential calculus, built a machine that could add, subtract, multiply, divide and extract square roots.

Two centuries later, in 1823, British mathematician Charles Babbage, who is regarded as the father of the modern digital computer, conceived of a steam-powered machine which he called an “analytical engine.” Babbage believed his machine would do away with the “drudgery of thinking,” and he designed it to calculate and print such things as logarithm tables. Babbage spent his personal fortune and a large amount of public money on the project, but his machine never was built.

Various types of calculating machines were invented in the late 1800s and during the first half of this century. Herman Hollerith, a statistician from Buffalo, N.Y., built an automatic tabulating machine in 1889 that used perforated cards. The device was used in compiling the 1890 U.S. census. Hollerith's Tabulating Machine Co. was sold and merged with several other companies; the combined firm was renamed International Business Machines (IBM) in 1924. Alan Turing, an English logician at Cambridge University, published a scientific paper, “On Computable Numbers,” in 1937 in which he sketched out the basic technology and predicted the capabilities of today's computers. During World War II Turing helped develop a computer-like machine called Colossus that broke the German military code. In 1947 he wrote a paper called “Intelligent Machinery,” in which he discussed how machines “might be made to show intelligent behavior.”

AI's Emergence as Distinct Research Area

Artificial intelligence began to emerge as a distinct field of computer science at about the same time the first breakthroughs in computer technology were made immediately after the end of World War II. That era saw the production of the first electronic computer, the Electronic Numerical Integrator and Calculator (ENIAC), at the University of Pennsylvania. The huge machine, which was built in 1946, filled a large room, required 18,000 vacuum tubes and needed 140,000 watts of electricity—enough power to drive a locomotive. By the mid-1950s a small group of computer scientists in laboratories in different parts of the country was looking seriously at the complex issues of artificial intelligence. “It took enormous foresight, even a century after Babbage, to imagine that large, crude vacuum-tube wonders of the early 1950s, the first generation of computers, might do anything more interesting than calculate bomb trajectories,” AI experts Edward A. Feigenbaum and Pamela McCorduck noted.
They said the key to making “artificial intelligence come alive as a science” was “the perception that the computer was badly misnamed.” The word “computer,” Feigenbaum and McCorduck said, “implies only counting and calculating,” but computers actually are “capable of manipulating any sort of symbol [authors' emphasis].”

The first symbols with which early AI researchers experimented were in programs simulating chess and checker games and proving theorems of plane geometry and logic. The first working AI machine, called Logic Theorist, was developed in 1956 by Allen Newell and Herbert Simon of Carnegie-Mellon University in Pittsburgh. That machine, which some consider to be the first expert system, was programmed to process symbols. While all other computers used numbers, Newell and Simon's machine used logical operations to prove mathematical statements, including theorems in Alfred North Whitehead and Bertrand Russell's Principia Mathematica.

That year, 1956, also marked the official christening of the field of artificial intelligence at a summer workshop-conference at Dartmouth College. The meeting was convened by mathematicians John McCarthy of Dartmouth and Marvin Minsky of Harvard University (now of MIT), and information specialists Nathaniel Rochester of IBM and Claude Shannon of AT&T's Bell Laboratories. As an MIT student in 1937 Shannon had written a paper that spelled out for the first time the possibility of linking symbolic logic and binary mathematics with electronic circuitry. The Dartmouth conference focused on the possibility that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Newell and Simon demonstrated the Logic Theorist, and the conferees adopted the term “artificial intelligence” at the urging of McCarthy, who later co-founded the Artificial Intelligence Laboratory at MIT and now teaches at Stanford. McCarthy, Minsky, Newell and Simon became influential AI researchers and theorists. “These four men …directed most of the significant AI research in the United States for the next 20 years,” one analyst commented, “and the schools they settled at—MIT, Stanford, and Carnegie-Mellon—continue to dominate the field today.”

Lisp: Programming in Symbols and Words

Throughout the 1950s and 1960s AI researchers worked on solving two complex questions: how the human mind develops knowledge and how to put human knowledge to use in computers. While researchers are still working on the former question, they have made some progress on developing methods that permit computers to understand and work with ideas and statements of logic. One of the key accomplishments in AI research was John McCarthy's development in 1957 of a programming language called LISP (an acronym for List Processor). Conventional computer languages, such as Fortran, Cobol, Basic and Pascal, are designed primarily for mathematical calculations. They perform one specific task as quickly as possible. LISP, on the other hand, manipulates symbols, such as words, phrases and geometrical figures. LISP consists of separate decision-making rules and facts, and reaches conclusions without having to be programmed with the specific steps. “With conventional computers, the programmer gives step-by-step procedures,” explained Jim Reggia. “AI software procedures are already put in. You give the computer knowledge and the software processes the information automatically.”

Reggia used an everyday analogy—taking a taxicab to an airport—to illustrate the difference between LISP and conventional computer languages. With LISP, he said, “you just tell the driver you want to go to the airport.” With conventional computers you “tell the driver to turn the key, put his foot on the accelerator, make a left turn at the light, etc.”

By linking lists of rules and facts together, LISP enables a computer to make inferences using logical deductive reasoning. For example, if a computer equipped with LISP is given the facts that (1) Ronald Reagan became president of the United States in 1981 and (2) all 20th-century U.S. presidents lived in the White House, the computer can infer that Ronald Reagan lived in the White House. “The foremost claim” of this type of symbolic processing, Tom Alexander of Fortune magazine noted, “is that it extends the power of electronic processing beyond the domain of quantities, which has been computers' primary domain until now, into the domain of qualities. This facilitates the machines' ability to deal with tasks such as understanding human language.”
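The Reagan example amounts to applying one general rule to one particular fact. The short sketch below shows roughly how such a deduction can be expressed; it is written in Python for readability rather than in LISP itself, and its encoding of the facts as nested lists of symbols (the kind of structure LISP was built to manipulate) is invented for this illustration.

```python
# Facts and a rule represented as nested lists of symbols, loosely
# mimicking the list structures LISP manipulates. The encoding is
# invented for this illustration; it is Python, not LISP.

facts = [
    ["became-president-in", "Ronald Reagan", 1981],
]

def apply_rule(known_facts):
    """Rule: anyone who became president in a 20th-century year
    lived in the White House."""
    derived = []
    for relation, person, year in known_facts:
        if relation == "became-president-in" and 1901 <= year <= 2000:
            derived.append(["lived-in", person, "White House"])
    return derived

for conclusion in apply_rule(facts):
    print(conclusion)   # ['lived-in', 'Ronald Reagan', 'White House']
```

In LISP proper the same fact might be written as a list such as (became-president-in Ronald-Reagan 1981), and the rules themselves are data that programs can inspect and build upon, which is what makes the language well suited to this kind of symbolic inference.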
The first expert system, DENDRAL, was developed in the late 1960s by an interdisciplinary team at Stanford working under Feigenbaum, geneticist Joshua Lederberg and physical chemist Carl Djerassi. DENDRAL, which is programmed with chemical data, can help identify unknown compounds by analyzing their molecules. DENDRAL and its successor, GENOA, are in widespread use in university and commercial organic chemistry laboratories throughout the world. DENDRAL's ability to identify chemicals “exceeds human capability, including that of its designers,” Feigenbaum and McCorduck said.

Recent Advances in Computer Capability

Even though LISP has been in existence for nearly three decades, it has been in widespread use only in recent years. The main reason is that the computers available in the 1950s, 1960s and early 1970s did not have the capacity to run the vastly complex and extensive AI programs, which require enormous amounts of computer power. Then, in the mid-1970s, the cost of computer hardware began dropping rapidly. This enabled AI researchers for the first time to accumulate and experiment with large banks of computer memory systems. Researchers at MIT, armed with this greatly increased memory capacity, developed a special system of circuits to process LISP commands quickly and efficiently. Today, LISP systems are even available for use with personal computers.

It is generally agreed that the advances in computer hardware, rather than any dramatic breakthroughs in developing software that emulates intelligence, are responsible for the recent interest in the entire field of artificial intelligence. The availability of “very powerful big computers,” said Ramesh Patil, “means we can now do a lot of thinking with that computer power, along with doing whatever controlling we were doing before. Before this we didn't have enough computers to spare to do anything but just follow this path. Now we can think while following the path. That's what has allowed more AI technology to be deliverable.”

The Japanese Challenge

Extent of Japan's 10-Year Research Effort

On April 14, 1982, Japan's Ministry of International Trade and Industry announced the formation of an industry-government research team whose mission is to design a revolutionary new fifth generation of computers equipped with artificial intelligence.
The Institute for New Generation Computer Technology (ICOT), headed by Kazuhiro Fuchi, is working on a 10-year effort, Feigenbaum and McCorduck said, “to develop computers for the 1990s and beyond—intelligent [authors' emphasis] computers that will be able to converse with humans in natural language and understand speech and pictures. These will be computers that can learn, associate, make inferences, make decisions, and otherwise behave in ways we have always considered the exclusive province of human reason.”

ICOT engineers, on loan from Japanese government laboratories and computer companies, are working to “develop both hardware and software that would take computer technology to, let's say, two steps above where it is now in terms of magnitude,” Howard Jacobson said. The fifth generation of computers, he said, will be capable of computing “several hundred million information processing statements per second.” ICOT's goals include designing expert systems capable of working with encyclopedia-length knowledge bases, translation machines that can translate documents from English to Japanese and back, and personal computers capable of performing high levels of artificial intelligence tasks.

Japan's economic and government leaders have made this significant commitment to AI research because they believe that artificial intelligence computers eventually will become the key element in the world economy. Japan has, in effect, been forced to rely heavily on producing technologically advanced exports because of its acute shortage of natural resources. Japan's 121 million people live on a land area of 143,000 square miles—about the size of Montana. But only about 20 percent of the land is arable and the country must import much of its needed raw materials. It imports more oil, coal, iron ore, cotton, wool and lumber than any other nation. To offset these costs, Japan exports large amounts of technologically advanced industrial goods.

The Japanese, moreover, are looking to the day when information, knowledge and intelligence will replace land, labor and capital as the economic currency of the world. “This isn't to say that the traditional forms of wealth will be unimportant,” Feigenbaum and McCorduck noted. “Humans must eat, and they use up energy, and they like manufactured goods. But in the control [authors' emphasis] of all these processes will reside a new form of power which will consist of facts, skills, codified experience, large amounts of easily obtained data, all accessible in fast, powerful ways to anybody who wants it—scholar, manager, policy maker, professional, or ordinary citizen. And it will be for sale.” Japan plans to be a worldwide vendor of artificial intelligence, something that will give that nation power over many types of industries throughout the world. As Jacobson put it: “The Japanese feel that their future lies with the information industry, and artificial intelligence is the cornerstone of the future of the industry.”

U.S. Response: Corporate-University Ties

Not surprisingly, other industrialized nations have reacted strongly to the Japanese move into advanced AI research. “ICOT threw the modern, technological world into a ‘Sputnik’ reaction,” Jacobson said, “which was, ‘How are we going to keep up with the Japanese?’” Soon after the formation of ICOT, Britain began a fifth generation research project, called the Alvey Programme. France and West Germany also significantly stepped up their research efforts.
In the United States, two consortia of computer industry groups were set up to boost advanced AI research. The first, the Semiconductor Research Cooperative, based at Research Triangle Park, N.C., is an industry-supported program that provides research grants to university AI facilities in the hope of spurring advances in the technology. The second, Microelectronics and Computer Technology Corp. (MCC), was formed in August 1982 in Austin, Texas, with the goal of spending about $1 billion on a 10-year “American Fifth Generation” project. By the summer of 1985, 20 large American computer companies—including Control Data Corp., Digital Equipment Corp., Honeywell Inc. and Rockwell International Corp.—had joined MCC, which is headed by Bobby Ray Inman, a retired U.S. Navy admiral and a former deputy director of the Central Intelligence Agency. His group regards ICOT as a direct competitive threat to American world leadership in computer technology, and considers itself in direct competition with the Japanese. “We've pooled resources against the challenge,” Inman said. “One of the benefits of the Japanese alert is that it has gotten U.S. businesses to think about planning for long-range research.”

While some Americans worry that Japan will dominate the artificial intelligence marketplace, others believe that there should be international cooperation, not competition. The recent upswing in AI research in both countries “presents a field of options in cooperative development projects,” said Jacobson, who has been working with American and Japanese government and corporate AI researchers to bring about cooperative projects. One factor favoring cooperation, Jacobson said, is that “in technology it's impossible for anyone to ever dominate because it's always changing so rapidly. It's not like an oil cartel where you have one commodity that stays the same for 150 years. It's a whole different marketplace.” Several U.S.-Japanese cooperative AI projects are scheduled to be disclosed Aug. 23 at the 1985 International Joint Conference on Artificial Intelligence at the University of California, Los Angeles.

The Defense Advanced Research Projects Agency (DARPA) has been working on artificial intelligence research since 1961, but until ICOT was formed in 1982 the agency had spent little more than $100 million on AI. The Japanese formation of ICOT, said M. Mitchell Waldrop in Science magazine, “finally gave DARPA the leverage it needed to break loose Pentagon funding for its 10-year, $1-billion ‘Strategic Computing’ program.” That program, which got under way in November 1983, has a primary goal of advancing “machine intelligence technology across a broad front to maintain with assurance the U.S. technical lead in advanced computer technology through the next decade,” according to the agency's director, Robert S. Cooper. It is funding AI programs in speech recognition and understanding, natural language, vision comprehension systems and advanced expert systems. It also is trying to foster closer cooperation between university and military AI research efforts and is sponsoring university graduate programs in the hope of increasing the number of artificial intelligence scientists and engineers. Aside from those goals, the Strategic Computing program is developing some AI military hardware.
This includes a computer-operated reconnaissance land vehicle equipped with television cameras, laser radar and acoustic sensors, and an expert system called Naval Battle Management to help naval commanders make decisions in the heat of battle. The Strategic Computing program has funneled millions of dollars into university and corporate AI research facilities. The autonomous land vehicle, for example, is being developed by Martin Marietta Aerospace's advanced automation division in Denver, Colo. Work on battle management expert systems is being done by government scientists and researchers at Carnegie-Mellon University in Pittsburgh and at Texas Instruments Inc.

There has been some concern expressed in the AI community that the outpouring of DARPA and MCC research dollars—along with the emphasis on the commercialization of expert systems—has shifted attention away from the basic research that remains to be done to unlock the potential of artificial intelligence. The main worry is that universities are losing AI researchers to industry. “There's no question there is a shortage of university people,” said Larry Harris, president of Artificial Intelligence Corp. “The quality of people who left was high, and they have left a vacuum.” Others are not concerned about the university “brain drain” and believe that the situation could even stimulate AI research. “The departure of applications-oriented people from the universities to businesses may be quite beneficial to AI,” said Nils Nilsson of Stanford University's computer science department. “It brings those with applications interest into more intensive confrontation with real problems, and it leaves at the universities a higher concentration of people who are mainly interested in developing the basic science of AI.”

Goal of Giving Computers Ability to Think

One fundamental question about artificial intelligence remains unanswered: Is it possible for computers—for machines—actually to think in the same manner as the human mind? There is no doubt that computers can be programmed to make inferences. But it has yet to be proven that any inanimate object can be imbued with human knowledge and the ability to learn. A computer, after all, has no intrinsic intelligence. It is a machine that manipulates symbols that it recognizes, but it does not understand the meaning of the symbols it processes. Computers, moreover, cannot emulate the many and varied processes of the human mind—which Alexander described as “the rich associations, metaphors and generalizations that language evokes in people and that constitute the essence of meaning and thought.” Thinking, he said, “consists less of logic and recognizing symbols than it does of mental images and analogies—things no one has been able to define in terms computers can grasp.” Today's expert systems, Alexander maintained, exhibit an “empty mimicry” of human intelligence and have “the limited repertoire of a clockwork doll rather than a respectable simulation of human intellect.”

No one has yet figured out how to program a computer with common sense, or with what has been called the “Aha!” factor—the ability humans have of solving a problem seemingly spontaneously without using any apparent logic. Nevertheless, researchers believe that one day it may be possible to program a computer with something approaching both common sense and intuitive capabilities. “Common sense is a question of how much you know about a domain,” said Patil of MIT.
“The kind of reasoning that goes on in making common-sense reasoning we are starting to get a very good handle on.” Nevertheless, he said, “building a program that would have as much common sense as you and me is still out of reach.” As for the “Aha!” factor, Patil said that computers can “discover something to be true that we hadn't expected,” but that they do so “by brute force manipulation of formulas” or simply “by proving theorems in a very different way.” It's “difficult” to say whether or not a computer “can realize that it has stumbled on something that's ‘Aha!,’” he said. “But it may actually discover those things by just being methodical.”

The only computers capable of thinking, talking and walking today can be found in science fiction books and movies. But many artificial intelligence researchers believe that it is just a matter of time before thinking machines are as commonplace as personal computers.

Bibliography

Books
Berry, Adrian, The Super-Intelligent Machine: An Electronic Odyssey, Jonathan Cape, 1983.
Feigenbaum, Edward A., and Pamela McCorduck, The Fifth Generation, Addison-Wesley, 1983.
McCorduck, Pamela, Machines Who Think, Freeman, 1983.
Rose, Frank, Into the Heart of the Mind: An American Quest for Artificial Intelligence, Harper & Row, 1984.
Schank, Roger C., and Peter G. Childers, The Cognitive Computer: On Language, Learning and Artificial Intelligence, Addison-Wesley, 1984.
Simon, Herbert A., The Sciences of the Artificial, 2nd ed., MIT Press, 1981.
Winston, Patrick H., Artificial Intelligence, 2nd ed., Addison-Wesley, 1977.

Articles
Ahl, David H., “Progress on the Project: An Interview with Dr. Kazuhiro Fuchi,” Creative Computing, August 1984.
AI Magazine, selected issues.
Alexander, Tom, “Why Computers Can't Outthink the Experts,” Fortune, Aug. 20, 1984; “The Next Revolution in Computer Programming,” Fortune, Oct. 29, 1984.
“Artificial Intelligence is Here,” Business Week, July 9, 1984.
Artificial Intelligence Report, selected issues.
Hapgood, Fred, “Experts to a Point,” The Atlantic, February 1985.
Hoban, Phoebe, “The Brain Race,” Omni, June 1985.
Kinnucan, Paul, “Artificial Intelligence,” High Technology, November-December 1982.
Lemley, Brad, “Artificial Expertise: Intelligent Software for Problem Solving,” PC Magazine, April 16, 1985.
Lenat, Douglas B., “Computer Software for Intelligent Systems,” Scientific American, September 1984.
Marsh, Alton K., “NASA to Demonstrate Artificial Intelligence in Flight Operations,” Aviation Week & Space Technology, Sept. 17, 1984.
Stamps, David, “Expert Systems: Software Gets Smart—But Can It Think?” Publishers Weekly, Sept. 21, 1984.
Waldrop, M. Mitchell, “The Necessity of Knowledge,” Science, March 23, 1984.

Reports and Studies
Editorial Research Reports: “The Robot Revolution,” 1982 Vol. I, p. 345; “The Computer Age,” 1981 Vol. I, p. 105; “Approach to Thinking Machines,” 1962 Vol. II, p. 537.
U.S. Department of Defense, Defense Advanced Research Projects Agency, “Strategic Computing: First Annual Report,” February 1985.

Special Focus

It is generally agreed that electronic computers have gone through four “generations” since their beginnings in the mid-1940s. The first four generations basically processed information using arithmetical programming languages. The fifth generation, which now is being developed, will consist of machines that process symbols as well as numbers. This will allow the computers to use reasoning, inference and deduction to simulate human thought.
The five generations are:

First generation. Extremely large (room-sized), vacuum-tube-powered computers of the mid-1940s and early 1950s.
Second generation. Smaller yet faster computers powered by transistors in the mid-1950s.
Third generation. Computers equipped with integrated circuits (hundreds of transistors wired together on individual silicon semiconductor chips) in the early 1960s.
Fourth generation. Very large-scale integrated circuit computers, with chips containing hundreds of thousands of transistors, in the 1980s.
Fifth generation. Knowledge information-processing systems capable of extremely rapid processing (chips with as many as 10 million transistors), under development in Europe, the United States and Japan.

Scientists in the Soviet Union are known to be involved in research programs dealing with artificial intelligence, which is known there as “cybernetics.” But Western specialists believe that the work is far less advanced in the Soviet Union than it is in the United States. “They're about 10 to 15 years behind us,” an American analyst said of the Russians. He explained: “They're so far behind because they lack the massive computing power” that is available to artificial intelligence researchers in the United States, Japan and Western Europe.

Lou Robinson, the editor of Artificial Intelligence Report, predicts that the following items will be among the first types of consumer goods to make use of artificial intelligence:

Diagnostic maintenance systems for products such as lawn mowers or motorcycles. “You'll buy something and get a maintenance diagnostic system that you can plug into your home computer” to help you determine mechanical problems.
Translation machines. “I envision the day when you could have translation machines every bit as prominent as, say, photocopy machines. I can see taking a document and feeding it into a machine and having it translate it into German. Projecting that even further, I see a day when you can go to a translation machine and put your document down and dial one of seven languages.”
Housecleaning robots. “I see robotics getting sophisticated to the point where you can actually use them in a home environment for cleaning.”
Document APA Citation
Leepson, M. (1985). Artificial intelligence. Editorial research reports 1985 (Vol. II). http://library.cqpress.com/cqresearcher/cqresrre1985081600
Document ID: cqresrre1985081600
Document URL: http://library.cqpress.com/cqresearcher/cqresrre1985081600