Algorithms and Artificial Intelligence

July 6, 2018 – Volume 28, Issue 24
Are they being used in harmful ways? By Patrick Marshall


A humanoid robot named Nina is displayed at the University of Grenoble in France in November 2017. Artificial intelligence (AI) enables Nina to “learn” as she encounters different situations. Critics of the technology worry that robots could someday be independent of their human creators, but AI's defenders say such fears are overblown. (Cover: AFP/Getty Images/Jean-Pierre Clatot)

Algorithms increasingly shape modern life, helping Wall Street to decide stock trades, Netflix to recommend movies and judges to dispense justice. But critics say algorithms — the seemingly inscrutable computational tools that help give artificial intelligence (AI) the ability to “think” and “learn” — can lead to skewed results and sometimes social harm. AI might help mortgage companies decide whom to lend to, but qualified borrowers can be rejected if the underlying algorithms are faulty. Companies might use AI to screen job applicants, but skilled talent can be turned away if the algorithms reflect racial or gender bias. Moreover, the use of algorithms is raising difficult questions about who — if anyone — is liable when AI results in injury. The technology is even stirring fears of an AI apocalypse in which computers become so powerful and autonomous that they threaten humankind. Some experts want the federal government to strictly regulate AI to ensure it is not misused, but critics fear more rules would stifle the technology.



When Jason Doss lost his job as an autoworker several years ago, his problems were just beginning.

The state of Michigan improperly seized more than $14,000 from his paychecks between 2015 and 2017, claiming that Doss had fraudulently collected unemployment in 2011. Worse, the state's Unemployment Insurance Agency said he owed a $62,000 penalty. Doss insisted he was innocent.1

Turns out he was not alone.

In early January, Michigan conceded that Doss and some 34,000 other residents had been improperly accused of fraud because of a faulty algorithm the state had installed to monitor unemployment claims. The state Legislature created a fund to compensate the victims. Michigan also halted all collection activities against those who could show they had been wrongfully accused.2

For critics, the incident is a dramatic example of all that can go wrong when computers, and not people, are making decisions. The state began using the algorithm a year after Michigan's unemployment agency laid off one-third of its workforce. An internal state review discovered a 93 percent computer error rate.

“Government by spreadsheet does not work,” said Jennifer Lord, a Michigan attorney who represented the workers at no charge.3

But some experts argue that such incidents are exceptions and that as algorithm-driven artificial intelligence (AI) improves, and as the public comes to understand it better, everyone will benefit.

A U.S. Customs and Border Protection officer uses facial-recognition technology at a security checkpoint at Miami International Airport on Feb. 27, 2018. Advocates see facial recognition, which relies on algorithms to make identifications, as a powerful tool, but critics say the technology is unreliable. (Getty Images/Joe Raedle)

“Artificial intelligence is one of the hottest, least understood and most debated technological breakthroughs in modern times,” wrote Lili Cheng, vice president of Microsoft AI & Research, in January. “AI can truly help solve some of the world's most vexing problems, from improving day-to-day communication to energy, climate, health care, transportation and more. The real magic of AI, in the end, won't be magic at all. It will be technology that adapts to people. This will be profoundly transformational for humans and for humanity.”4

At their simplest, algorithms are a sequence of mathematical instructions for solving a problem. In Michigan, the algorithm flagged discrepancies in unemployment filings. At human resource departments, algorithms evaluate résumés to weed out unqualified applicants. In the retail industry, they enable online stores to offer personalized recommendations based on a consumer's shopping history. Algorithms also are the tool that helps make AI possible: Computers use algorithms to process data on their own, in a way that can mimic human decision-making.
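In code, such a rule-following sequence can be only a few lines long. The sketch below is purely illustrative: the field names, data and all-or-nothing matching rule are invented for this example, not drawn from Michigan's actual system. It shows both how simple an algorithm can be and how a rule this blunt, applied without human review, can mislabel an innocent data-entry slip:

```python
def flag_discrepancies(claims):
    """Return the IDs of claims whose employer-reported and
    claimant-reported wages differ (a hypothetical fraud screen)."""
    flagged = []
    for claim in claims:
        # Any mismatch at all is treated as potential fraud -- no
        # tolerance for typos, rounding or reporting-period differences.
        if claim["employer_wages"] != claim["claimant_wages"]:
            flagged.append(claim["id"])
    return flagged

claims = [
    {"id": 1, "employer_wages": 1200, "claimant_wages": 1200},
    {"id": 2, "employer_wages": 1200, "claimant_wages": 1150},  # data-entry slip
]
print(flag_discrepancies(claims))  # [2] -- claim 2 is flagged as "fraud"
```

The algorithm does exactly what its instructions say; whether that amounts to sound judgment is a separate question.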

AI enthusiasts say the technology is showing tremendous promise. By 2035, AI-powered technologies will increase labor productivity by 40 percent, according to business consulting company Accenture, and could double the U.S. economy's rate of growth.5

“AI is just the latest in technologies that allow us to produce a lot more goods and services with less labor,” Microsoft co-founder Bill Gates said recently. “And overwhelmingly, over the last several hundred years, that has been great for society.”6

But critics say the ubiquity of algorithms and the spread of AI are raising numerous ethical questions about fairness and bias, power and accountability. Algorithms, they warn, too often make mistakes similar to what happened in Michigan or perpetuate societal biases on gender and race.

Algorithms are “sorting winners and losers in the standard, old-fashioned way that we've been trying to get over, that we've been trying to transcend — through class, through gender, through race,” warned mathematician Cathy O'Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.7

Critics also say the list of potential problems is growing, as artificial intelligence advances and machines improve their ability to “learn” on their own — a process known as machine, or deep, learning. Deep learning enables AI to modify underlying algorithms as it finds patterns and gains experience.

Because of their growing prowess, “algorithms are likely to be capable of inflicting unusually grave harm,” wrote former Justice Department lawyer Andrew Tutt last year. “When a machine-learning algorithm is responsible for keeping the power grid operational, assisting in a surgery or driving a car, it can pose an immediate and severe threat to human health and welfare in a way many other products simply do not.”8

A number of experts say algorithms remain error-prone and contain hidden biases.

“Algorithmic bias, like human bias, results in unfairness,” said Joy Buolamwini, founder of the Algorithmic Justice League, an advocacy group focused on eliminating bias in programs. “Algorithms, like viruses, can spread bias on a massive scale.”9

Buolamwini, an African-American, was an undergraduate at the Georgia Institute of Technology when she noticed that facial-recognition programs she was working with performed accurately on her white friends but could not recognize her face. In subsequent work at MIT's Media Lab, she researched bias in facial-recognition algorithms and found that software from Microsoft, IBM and Face++ was more likely to misidentify the gender of black women than of white men. For example, the algorithms' error rate in identifying the gender of darker-skinned females in a set of 271 photos was 35 percent.10

Similar problems in other settings, critics say, can lead to discriminatory mortgage lending and to racial profiling in law enforcement.

“The algorithms that dominate policymaking — particularly in public services such as law enforcement, welfare and child protection — act less like data sifters and more like gatekeepers, mediating access to public resources, assessing risks and sorting groups of people into ‘deserving’ and ‘undeserving’ and ‘suspicious’ and ‘unsuspicious’ categories,” wrote Virginia Eubanks, a fellow at New America, a Washington public policy think tank.11

AI also may be keeping some people in prison longer than warranted, according to a recent study by researchers at Dartmouth College. A program that courts use to predict the likelihood of recidivism is no more accurate than nonexpert humans, the study found.

“It is troubling that untrained [humans] can perform as well as a computer program used to make life-altering decisions about criminal defendants,” said Hany Farid, a Dartmouth professor of computer science and a research team leader.12

Polls show that AI worries the public, too: 67 percent of respondents were concerned about algorithms making hiring decisions, according to a 2017 Pew Research Center survey, while 73 percent worried that AI will steal jobs from humans.13

The bar graph shows the percentage of U.S. adults worried or not worried about robotic automation.  

Long Description

Most American adults are more worried than optimistic about the effects of artificial intelligence and automation on employment, transportation and other aspects of everyday life. Figures do not necessarily add to 100 percent because some respondents did not answer.

Source: Aaron Smith and Monica Anderson, “Automation in Everyday Life,” Pew Research Center, Oct. 4, 2017.

Data for the graphic are as follows:

Scenario | Very Worried | Somewhat Worried | Not Too Worried | Not at All Worried
Future in which robots and computers can perform many human jobs | 25% | 48% | 23% | 4%
Development of algorithms that can evaluate and hire job candidates | 21% | 46% | 25% | 7%
Development of driverless vehicles | 14% | 39% | 35% | 11%
Development of robot caregivers for the elderly | 14% | 33% | 43% | 10%

A 2016 White House report said that over the next two decades, AI could threaten between 9 and 47 percent of jobs and increase income inequality between educated and less-educated workers.14

AI's risks derive from two major features, experts say. First, AI programs arrive at decisions based on algorithms that are often inscrutable to humans. Due to this lack of transparency, people may not even know AI was involved when they applied for, say, a mortgage, much less what “reasoning” resulted in that decision.

“The big piece we are missing is the ability to know when a piece of AI is doing something it shouldn't,” says Finale Doshi-Velez, an assistant professor of computer science at Harvard University's John A. Paulson School of Engineering and Applied Sciences. “These algorithms can screw up in ways that even the makers did not intend.”

In addition, machine learning can lead an algorithm to do things its human creators did not envision. “The behavior of a learning AI system depends in part on its post-design experience, and even the most careful designers, programmers and manufacturers will not be able to control or predict what an AI system will experience after it leaves their care,” wrote Matthew U. Scherer, a lawyer in Portland, Ore., who specializes in artificial intelligence-related issues.15
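Scherer's point can be seen even in a toy learner. In the minimal sketch below, a standard perceptron, the update rule is fixed by the designer, but the weights the system ends up with depend entirely on the examples it encounters; the training data here is invented, and the same code produces different behavior from different experience:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and bias for inputs (x1, x2) -> label (0 or 1)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # The update rule is set at design time, but the final
            # weights are determined by whatever data arrives later.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Two data sets, identical code: the learned behavior differs.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

A designer can inspect every line of this program and still not know what it will do until they know what data it has seen.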

The possibility of machines becoming autonomous raises a host of fears for many in the technology field, as well as legal questions about who would be liable if someone gets hurt because of mistakes made by AI. A number of experts call for tighter regulation of algorithms and AI.

AI's defenders, however, say such fears are overblown and argue that excessive regulation will only stifle a promising technology. Already, the technology is showing surprising capabilities, they say:

  • A program pioneered by Mount Sinai Hospital in New York City uses AI-powered speech analysis to predict psychosis in at-risk patients with 83 percent accuracy, a recent study said.16

  • Between October 2017 and April 2018, a Facebook AI program that scans feeds for signs that users might harm themselves or others alerted public safety agencies on more than 1,000 occasions, according to a Facebook spokesperson.

  • A team from Purdue Polytechnic Institute, a college at Purdue University, developed an AI tool — the Chat Analysis Triage Tool — to help law enforcement officials spot sex offenders in online chat rooms.17

The late famed physicist Stephen Hawking, while worried AI could become too powerful, said in 2016, “The potential benefits of creating intelligence are huge. We cannot predict what we might achieve when our own minds are amplified by AI.”18

As researchers and policymakers consider the challenges of artificial intelligence and the algorithms that power it, here are some questions they are asking:

Will algorithms perpetuate discrimination?

Many experts warn that shortcomings in algorithms are inevitable, difficult to detect and in numerous cases discriminatory.

Rachel Goodman, a staff attorney in the American Civil Liberties Union's (ACLU) racial justice program, said, “We are increasingly aware that AI-related issues impact virtually every civil rights and civil liberties issue that the ACLU works on.”19 The ACLU is focusing on three areas where government and the private sector are deploying AI: criminal justice; lending and credit; and surveillance.

Some courts are using algorithms and AI to make parole and sentencing decisions. These programs make predictions about the likelihood of a defendant or prisoner committing future crimes, but neither the software nor its creators reveal the factors that go into those predictions.

“The key to our product is the algorithms, and they're proprietary,” an executive of Northpointe, which provides software to the courts, told a reporter. “We've created them, and we don't release them, because it's certainly a core piece of our business.”20

The bar graph shows occupations most at risk from automation within the next decade.  

Long Description

Cooks, servers and other restaurant workers are most at risk of being replaced by automation, according to CB Insights, which studies machine intelligence trends. The organization estimates that nearly 11 million service and warehouse jobs in the United States are at high risk of replacement by machines within the next decade.

Source: “AI Will Put 10 Million Jobs At High Risk — More Than Were Eliminated By The Great Recession,” CB Insights, Oct. 6, 2017.

Data for the graphic are as follows:

Occupation | Workers at Risk (millions) | Level of Risk
Cooks and servers | 4.3 | High
Cleaners | 3.8 | High
Movers and warehouse workers | 2.4 | High
Retail salespersons | 4.6 | Medium
Truck drivers | 1.8 | Low
Construction laborers | 1.2 | Low
Nurses and health aides | 6.9 | Low

“Governments are really being pushed to do more with less money and AI tools are, at least on a surface level, appealing ways to do that and make decisions efficiently,” Goodman said. “We want to see if there are appropriate roles [for AI] and to ensure tools are fair and free of racial biases. Those are hard questions and hard math problems.”21

Due to this lack of transparency, detecting shortcomings in an algorithm can be difficult, experts say. Houston teachers, for instance, complained that the use of an algorithm to assess their performance based on students' test scores violated their civil rights. The software company that designed the system, the SAS Institute, refused to reveal the workings of the algorithms powering its Educational Value-Added Assessment System, saying they were trade secrets. U.S. Magistrate Judge Stephen Smith ruled that the teachers had the right to sue over the use of the algorithms. “Algorithms are human creations, and subject to error like any other human endeavor,” he wrote in his 2017 opinion.22

As the analytic capabilities of AI increase, some experts say the potential for abuse grows as well.

For example, “digital phenotyping” assesses individuals' health through their interactions with their devices, such as social media posts and internet searches.

“Our interactions with the digital world could actually unlock secrets of disease,” said Dr. Sachin H. Jain, chief executive of CareMore Health, a company that uses software to analyze Twitter posts for indications of sleep problems.23

But other experts question digital phenotyping's usefulness. While an individual who suddenly stops text messaging to friends might be depressed, it also might mean “that somebody's just going on a camping trip and has changed their normal behavior,” said Dr. Steve Steinhubl, director of digital medicine at the Scripps Translational Science Institute in San Diego, a medical reform group. Digital phenotyping, he said, presents “new potential for snake oil.”24

The latest generation of AI, a number of analysts say, is bringing unprecedented capabilities for businesses and others to monitor and analyze people. Insurance companies can use AI to mine consumer and demographic data to determine which consumers are unlikely to shop around for lower prices. They then charge those consumers more, according to mathematician O'Neil.25

But insurance companies say algorithms give them the ability to customize insurance options and to provide faster and better service. Algorithms also help insurers “better understand the data so we could make predictions about what's happening in the insurance marketplace,” said Pavan Divakarla, data and analytics business leader at the Progressive Casualty Insurance Co.26

A team from Stanford University recently developed a neural network — a program designed to process data in a manner similar to how the human brain works — that uses images to detect individuals' sexual orientation. With a single image, the program could correctly distinguish between gay and heterosexual men 81 percent of the time. Humans got it right only 61 percent of the time. If the program used five images, its accuracy rate increased to 91 percent.

“Given that companies and governments are increasingly using computer vision algorithms to detect people's intimate traits, our findings expose a threat to the privacy and safety of gay men and women,” the study authors wrote.27

While aware of the potential for algorithmic abuse — intentional or otherwise — AI's defenders say the technology can also protect against abuse.

“There is a hopeful note in this,” says Martin Ford, a software developer and futurist. “To the extent that human beings are biased, fixing that is really hard. But fixing it in an algorithm is possible. If it does get fixed, then a world in which algorithms have more control is actually less biased.”

In fact, in most if not all cases, the bias that results from an AI application originates in the data it is fed. “Bias never comes from the way the algorithm is written,” says Jack Clark, director of strategy and communications at OpenAI, an industry-funded nonprofit focused on developing safe and beneficial AI. “The algorithm will reflect that which is in the data.”

Ford and Clark agree that programmers can design an algorithm to detect bias, a process some researchers call “input-output” analysis. The program feeds a screened set of data into the AI application and then analyzes the output, detecting patterns of bias without actually having to examine the inner workings of the application. If the algorithm's output demonstrates a pattern of discrimination — say, a lender rejecting minorities for loans — it can be taken out of service regardless of its reasoning.
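An input-output audit of this kind can be sketched in a few lines. Everything in the example below is hypothetical: the lender model, the applicant data and the field names are invented, and the 80 percent cutoff is borrowed from one common statistical screen for disparate impact (the "four-fifths rule" used in U.S. employment-discrimination analysis), used here purely as an illustration. The audit never looks inside the model; it only compares outcomes across groups:

```python
def audit_disparate_impact(model, applicants, group_key, threshold=0.8):
    """Probe `model` as a black box: compute the approval rate per group
    and flag any group whose rate falls below `threshold` times the
    highest group's rate."""
    counts = {}
    for person in applicants:
        group = person[group_key]
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + model(person), total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    top = max(rates.values())
    flagged = {g for g, rate in rates.items() if rate < threshold * top}
    return rates, flagged

def toy_model(person):
    # A deliberately skewed stand-in for a lender's scoring algorithm:
    # it approves only high incomes, and in this invented sample income
    # correlates with group membership.
    return 1 if person["income"] > 50000 else 0

applicants = [
    {"group": "A", "income": 60000}, {"group": "A", "income": 80000},
    {"group": "B", "income": 60000}, {"group": "B", "income": 40000},
]
rates, flagged = audit_disparate_impact(toy_model, applicants, "group")
# rates == {"A": 1.0, "B": 0.5}; group "B" is flagged (0.5 < 0.8 * 1.0)
```

The audit detects the pattern of discrimination without ever examining the model's reasoning, which is exactly the appeal of the approach when the algorithm itself is proprietary.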

And some experts argue that, at least for current levels of AI, existing laws in principle offer protection against most cases of discriminatory or intrusive AI applications.

“The good news is that we do have laws that apply in some of these situations,” says Goodman. “The civil rights laws, the Fair Housing Act, the Equal Credit Opportunity Act and other laws continue to regulate those areas even when those transactions are taking place mediated by artificial intelligence.”

An AI-driven screening tool used in employment applications, for example, can be challenged in court “if that tool is having a disparate impact on women or people of color,” she says.

Should government regulate algorithms and AI?

Algorithms' opacity has led some analysts to argue that they need to be tested for problems, including bias, before they are used.

Victims of a poorly designed algorithm cannot challenge the results if they are unaware of the role played by the algorithm. “A lot of this is hidden in a way that sometimes an applicant or user doesn't even know,” the ACLU's Goodman says. “That's why we're going to need more aggressive regulation.”

Even some executives in technology companies are urging regulation not only of algorithms but more broadly of AI. Calling AI an “existential threat,” Elon Musk, a founder of SpaceX and Tesla, told a 2017 meeting of governors that government must intervene, and fast, because of rapid technological advances: “Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal,” he said. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late.”28

Other experts, primarily within private-sector companies, warn against regulation, saying it will stifle innovation.

“We encourage governments to evaluate existing policy tools and use caution before adopting new laws, regulations or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI,” said the Information Technology Industry Council, an industry lobbying group based in Washington. “As applications of AI technologies vary widely, overregulating can inadvertently reduce the number of technologies created and offered in the marketplace, particularly by startups and smaller businesses.”29 The council declined several requests for an interview.

“Governments should focus their attention now on enabling the development of AI,” Amir Khosrowshahi, chief technology officer for Intel Corp., told a congressional subcommittee in February. “We are in the early days of an innovation of a technology that can do tremendous good. Governments should make certain to encourage this innovation, and they should be wary of regulation that will stifle its growth.”30

Robots weld car-body parts at the BMW assembly plant in Greer, S.C., on May 10, 2018. Sales of industrial robots have soared in recent years, and as AI-driven automation spreads, some experts estimate it could replace millions of restaurant, cleaning and warehouse jobs. (Getty Images/Bloomberg/Luke Sharrett)

Sara Jordan, an assistant professor in the School of Public and International Affairs at Virginia Tech who studies the ethics of AI, says, however, that regulation would not stifle AI's development.

“It's a normal argument trotted out by those who want the opportunity to build without consequences, then deal with it later,” Jordan says. She points to similarly complex technologies that were developed successfully despite federal regulation. “We dealt with recombinant DNA, which had the same hype around it back in the 1980s and 1990s, and the same with genome sequencing,” she says. “We have done this. We know how to do it.”

Still, some representatives of AI-related industries argue that regulation, at least at this point, is unnecessary because companies are aware of the technology's risks and will take appropriate steps to guard against them.

“The companies I have talked with know that it's in their interest to ensure that their algorithms are thoroughly tested,” says Michael Hayes, a senior manager for government affairs at the Consumer Technology Association, a trade organization in Arlington, Va. Hayes says companies are very aware that “the success of AI products depends on public trust.”

In any case, sufficient laws and regulations are already in place, according to Adam Thierer, senior research fellow at the Technology Policy Program at George Mason University's Mercatus Center. Algorithms and other “AI applications already are regulated by a host of existing legal policies,” he said. “If someone does something stupid or dangerous with AI systems, the Federal Trade Commission has the power to address unfair and deceptive practices. State attorneys general and consumer-protection agencies also routinely address unfair practices and advance their own privacy and data-security policies.”31

But other experts say existing regulations are far from sufficient, especially in light of the technology's lack of transparency.

“The rise of AI has so far occurred in a regulatory vacuum,” wrote lawyer Scherer. “With the exception of a few states' legislation regarding autonomous vehicles and drones, very few laws or regulations exist that specifically address the unique challenges raised by AI, and virtually no courts appear to have developed standards specifically addressing who should be held legally responsible if an AI causes harm.”32

Former Justice Department lawyer Tutt, who follows artificial intelligence issues closely, has called for creating “an FDA for algorithms.”

“As algorithms get more advanced and more complex, they actually are all going to tend to converge technologically,” he says. “You're going to see solutions in governing autonomous vehicles that could just as easily apply in the privacy sphere. So having those pockets of expertise at different agencies might not make sense.”

Tutt envisions the new agency as a place where the government can concentrate its expertise on algorithms and AI across all sectors. “It is going to have to be an agency that is a resource for existing regulators because they are going to understand their specific problem sets,” he says.

Thierer says that when additional regulation is truly needed, the government should rely on existing agencies rather than creating a new one devoted to algorithms and AI.

“We should exhaust whatever solutions are already on the books before we look to new ones that may stifle new forms of life-enriching innovation,” he says. “We don't need to rush to regulate before we exhaust all the possibilities for determining whether or not there is an actual problem or harm to be addressed preemptively at all.”

Will AI be good for the economy?

At a time when the unemployment rate is below 4 percent, more and more companies are turning to AI-driven automation to overcome a labor shortage and increase productivity.

Large corporations invested $18 billion to $27 billion in AI-related technologies in 2016, according to the global management company McKinsey, and technology companies are racing to develop AI applications in a wide range of industries, from retailing to marketing and human resources. Globally, more than 550 startups with AI as a core part of their products raised $5 billion in funding from private investors and others in 2016 — more than eight times the amount startups raised in 2012, according to CB Insights, which studies machine intelligence trends.33

Small businesses and manufacturers see AI-powered automation as a way to compete with foreign rivals that can draw on a deep pool of cheap labor. “If we don't get things automated and we don't start moving things forward, we're going to be the ones who get left behind,” said David Maletto, who runs a small packaging company in Eau Claire, Wis.34

But AI is no panacea for workers, says OpenAI's Clark.

“Why is wage growth unbelievably low even though we have a really tight labor market right now?” Clark asks. “The economy may display symptoms of being fine, but we know that there are big epochal changes happening under the hood. We just don't know how to respond to it.”

Software developer Ford agrees. “The first impacts [of AI] are showing up in terms of stagnant wages rather than outright unemployment,” he says.

Some experts expect AI-driven automation to cost more jobs as it moves from factory floors to the broader economy. By some estimates, AI will be able to replace salespeople within 20 years, write a best-selling book within 31 years and replace surgeons within 35 years.35

A landmark 2013 study estimated that 47 percent of jobs are at high risk for automation over the next decade or two, although the authors stressed that they were estimating how many jobs could be replaced by AI and not how many jobs will be replaced by AI.36

The bar graph shows the amount of global financing raised for artificial intelligence startups from 2012 through 2016.  

Long Description

Startup companies developing artificial intelligence raised a record $5 billion in funding from private investors and other sources in 2016 — more than an eightfold increase from 2012, according to CB Insights, which analyzes data on technology trends.

Source: “The 2016 AI Recap: Startups See Record High In Deals And Funding,” CB Insights, Jan. 19, 2017.

Data for the graphic are as follows:

Year | Funding Raised
2012 | $589 million
2013 | $1.039 billion
2014 | $2.677 billion
2015 | $3.125 billion
2016 | $5.021 billion

Other researchers project lower rates of job losses from AI. A 2016 study of AI's impact on the 35 member countries of the Paris-based Organisation for Economic Co-operation and Development (OECD), whose economies are broadly similar to that of the United States, found only 9 percent of jobs to be at high risk.37

A number of experts say AI will actually generate employment.

“Automation does not simply destroy jobs, it creates them,” Charles Isbell, professor in the School of Interactive Computing at the Georgia Institute of Technology, told Congress last February. “In this particular case, it creates jobs that require technological sophistication and understanding,” such as computer scientists and programmers.38

Gary Shapiro, president and CEO of the Consumer Technology Association, an industry association in Arlington, Va., said, “AI is predicted to create millions of new jobs unheard of today.” To stay employed, he said, many workers will need to develop new skills. “While the full impact of AI on jobs is not yet fully known, in terms of both jobs created and displaced, an ability to adapt to rapid technological change is critical,” he told Congress. “People entering the workforce in nearly all sectors of our economy will need to have skill sets necessary to work alongside technology and adapt to the new job opportunities that it will bring.”39

Those who minimize AI's impact on employment point to another factor: Machines lack the human touch and do not interact well with people, thus limiting the types of jobs they can do. For example, a digital financial adviser cannot provide the personalized service that human wealth managers do. “A robot has no consciousness, no ethics,” said Vasant Dhar, a professor of information systems at New York University.40

The worst-case scenario — the loss of virtually all human jobs — is “highly unlikely,” according to participants at a 2016 conference at Stanford University called “Artificial Intelligence and Life in 2030.” In the short term, education, retraining and inventing new goods and services can mitigate the impact of job losses, the conference attendees agreed.41

But the conference report said job losses could be great enough that “the current social safety net may need to evolve into better social services for everyone, such as health care and education, or a guaranteed basic income.”42

The 2016 White House report said AI's effects on workers will likely be unevenly distributed throughout the economy.

“Research consistently finds that the jobs that are threatened by automation are highly concentrated among lower-paid, lower-skilled and less-educated workers,” the analysts said. “This means that automation will continue to put downward pressure on demand for this group, putting downward pressure on wages and upward pressure on inequality.”43

Clark says AI's effects on employment will be “extremely severe.”

“Anyone who tells you that it's going to be fine is lying to you,” Clark says. In addition to outright job losses, he says, AI will cause other problems for many workers.

“We are going to automate increasingly large chunks of jobs, which will mean that the part of the job a human does is more sort of boxed in, allowing for less and less independent thought and action on the part of the human,” he says. “That will compress wages since there will be less stuff you will be able to show that you're doing, and it's going to make it harder to switch shops and change careers.”

Go to top


Algorithms' Beginnings

Ancient peoples used algorithms to keep track of their grain stocks and cattle, and Greek mathematicians in the days of Euclid and Archimedes began devising complex algorithmic formulas. In the ninth century, a Persian astronomer and mathematician named Abu Abdullah Muhammad ibn Musa al-Khwarizmi gave this mathematical wizardry its name: the Latinized form of his name became algorismus, the root of “algorithm.”44
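Euclid's method for finding the greatest common divisor of two numbers remains a textbook illustration of what an algorithm is: a finite recipe of unambiguous steps. A minimal Python sketch (a modern rendering, not ancient notation):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

However simple, the loop has every hallmark of the algorithms discussed in this report: defined inputs, mechanical steps and a guaranteed stopping point.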

Advances in math, including George Boole's invention of Boolean algebra in 1847, laid the groundwork for the development of computing logic and more-sophisticated algorithms. By 1950, computers and algorithms were far enough along that British mathematician Alan Turing began wondering when “machines will eventually compete with men in all purely intellectual fields.” That year, Turing first publicly posed the question: “Can machines think?” Historians date the field of artificial intelligence to that question.45

Turing himself was a pioneer in the computer field. In 1936, he conceived the Turing machine, a theoretical device that follows a table of instructions to manipulate symbols on a strip of tape. The Turing machine, which was never built, provided a blueprint for the development of the first electronic digital computers.46
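Though abstract, the scheme is concrete enough to simulate in a few lines of code. The toy sketch below (a modern illustration, not Turing's own notation) runs an instruction table that inverts every bit on a tape:

```python
def run_turing_machine(tape, rules, state="start"):
    """Execute a toy Turing machine: at each step, look up
    (state, symbol) in the rules table to get a symbol to write,
    a direction to move and the next state. Stops on state 'halt'
    or when the head runs off the tape."""
    tape = list(tape)
    pos = 0
    while state != "halt" and 0 <= pos < len(tape):
        symbol = tape[pos]
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# An instruction table that flips each bit while moving right.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing_machine("1011", flip))  # 0100
```

Everything a modern computer does can, in principle, be expressed as such a table of state-and-symbol rules, which is why the construction served as a blueprint.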

That feat was achieved in 1939 when an Iowa State College physics professor, John Atanasoff, and his assistant, Clifford Berry, built the first electronic digital computer, which performed its operations using a binary numeral system.47 The device, however, was not programmable. Konrad Zuse, a German engineer, succeeded in building a programmable computer in 1941, although his work went mostly unnoticed in the United States because of World War II.48

The first general purpose electronic computer — and the first computer to attract broad public attention — was ENIAC (Electronic Numerical Integrator and Computer), a room-sized device consisting of nearly 18,000 vacuum tubes. Funded by the military and completed in 1946, ENIAC was put to work on calculations for the development of the hydrogen bomb.49

In 1951, Marvin Minsky, then at Princeton University, and fellow graduate student Dean Edmonds built the first simple neural network machine — a machine capable of learning — that simulated a rat finding its way through a maze.50

A 1956 conference at Dartmouth College gave a further boost to artificial intelligence; its primary attendees — computer scientists Minsky, John McCarthy, Allen Newell and Herbert Simon — became the field's research leaders for several decades.

The conference proposed a two-month study of the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The conference highlighted two issues for study: “automatic computers” (machines that can carry out a special set of operations without human intervention) and ways to teach computers to use language, form abstractions and become creative.51

In 1966, MIT computer scientist Joseph Weizenbaum published a report on his creation of ELIZA, the earliest program that used natural language processing to interact with humans. He designed ELIZA to conduct a psychotherapy session with humans via keyboard-entered text.52
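ELIZA worked largely by matching keywords in the user's input against canned response templates rather than by any deep understanding. A toy exchange in that spirit (the patterns below are invented for illustration, not Weizenbaum's actual script) might look like:

```python
import re

# A few hypothetical ELIZA-style rules: a keyword pattern and a
# response template that echoes part of the user's own words back.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I), "Why does your {0} concern you?"),
]

def respond(text):
    """Return the first matching template, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!"))
    return "Please go on."

print(respond("I am unhappy"))     # Why do you say you are unhappy?
print(respond("It rained today"))  # Please go on.
```

The trick — reflecting a user's words back as a question — is simple, which is what made users' eagerness to confide in the program so unsettling to its creator.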

Interestingly, Weizenbaum's experience with ELIZA — specifically, with the willingness of humans to take the program seriously — caused him to become a critic of computers and artificial intelligence. “The dependence on computers is merely the most recent — and the most extreme — example of how man relies on technology in order to escape the burden of acting as an independent agent,” Weizenbaum told a reporter in 1985. “It helps him avoid the task of giving meaning to his life, of deciding and pursuing what is truly valuable.”53

In 1969, researchers at the Stanford Research Institute (now SRI International) first integrated artificial intelligence and robotics in the form of “Shakey.” To maneuver, the robot used a TV camera, a laser range finder and bump sensors to collect data, which was processed by a program called STRIPS.54

When recession struck in the early 1970s, funding for AI research — including critical support from the Department of Defense — largely disappeared, and scientists lamented the arrival of an “AI winter.”

Beyond the Theoretical

The Japanese helped end the AI winter, according to Nick Bostrom, director of the Oxford Martin Programme on the Impacts of Future Technology at Oxford University.

“A new springtime arrived in the early 1980s,” he wrote, “when Japan launched its Fifth-Generation Computer Systems Project, a well-funded public-private partnership that aimed to leapfrog the state of the art by developing a massively parallel computing architecture that would serve as a platform for artificial intelligence.”55

AI research also received a boost in the United States in the early 1980s with the development of “expert systems,” problem-solving software designed to simulate the analytical capabilities of experts in a variety of fields, including accounting, finance and medicine.56
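At their core, expert systems encoded a specialist's knowledge as if-then rules and chained them together. A minimal forward-chaining sketch (the loan-screening rules here are invented for illustration, not drawn from any actual system):

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts,
    adding its conclusion, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Invented toy rules from a loan-screening domain.
rules = [
    ({"steady income", "low debt"}, "good credit risk"),
    ({"good credit risk", "down payment"}, "approve loan"),
]
derived = forward_chain({"steady income", "low debt", "down payment"}, rules)
print("approve loan" in derived)  # True
```

Commercial expert systems of the era held thousands of such rules; their brittleness outside the encoded domain was one reason enthusiasm later cooled.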

A surge in funding from the Pentagon's Defense Advanced Research Projects Agency (DARPA) helped expand the field. Intent on matching and surpassing Japan's Fifth-Generation Project, DARPA undertook a Strategic Computing Initiative in 1983. For the next decade, DARPA focused on, among other things, chip design and AI software.57

Computing power, meanwhile, was growing rapidly and software was steadily improving. The field of artificial intelligence started booming in the late 1990s, especially in the areas of data mining, language and logistics.

Another “magic moment” occurred in 1995, according to Peter Singer, a senior fellow at New America, when unmanned aircraft — which the United States and Germany had invented during World War II — began using GPS data from satellites, thus greatly improving drones' navigational skills.58 That year the U.S. military also introduced two advanced unmanned aerial vehicles, the attack Predator drone and the surveillance Global Hawk drone.

Challenging Humans

On May 11, 1997, newspapers trumpeted the surprising news that a computer had bested the world chess champion in a six-game match. “In brisk and brutal fashion,” reported The New York Times, “the IBM computer Deep Blue unseated humanity, at least temporarily, as the finest chess-playing entity on the planet yesterday, when Garry Kasparov, the world chess champion, resigned the sixth and final game of the match after just 19 moves, saying, ‘I lost my fighting spirit.’”59

Over the next decade and a half, the most visible progress in artificial intelligence was in robotics.

In 1999, Sony introduced AIBO (Artificial Intelligence Bot), a canine robot that changes its behavior in response to cues from its owners and surroundings. Sony did not intend to sell AIBO to the public when it began research in 1993. But the company soon recognized AIBO's commercial potential and put 5,000 on sale in Japan and the United States in 1999, with 3,000 robots selling in Japan in the first 20 minutes and 2,000 selling in four days in the United States.60

In 2002, iRobot introduced the Roomba, a robot that autonomously maneuvers through rooms while vacuuming and then returns to its charging station when its batteries run low. The robot adjusts to a variety of surfaces, including wood, carpet, tile and linoleum. Touch-sensitive and infrared sensors prevent it from getting stuck under furniture or harming pets or small children.61

“Jeopardy” champions Ken Jennings, left, and Brad Rutter look on as IBM supercomputer Watson answers a question during a practice round of the popular game show on Jan. 13, 2011. Watson, the size of 10 refrigerators, soundly defeated the two champions in an impressive display of AI's prowess. (AP Photo/Seth Wenig)

In January 2004, NASA landed two autonomous rovers, Spirit and Opportunity, on opposite sides of Mars. “With far greater mobility than the 1997 Mars Pathfinder rover, these robotic explorers have trekked for miles across the Martian surface, conducting field geology and making atmospheric observations,” according to NASA's website.62

That same year, to encourage the development of autonomous vehicles, DARPA began its Grand Challenge in which 15 self-driving cars attempted to traverse 142 miles of the Mojave Desert in California and Nevada. None made it.63

As the technology advanced, DARPA held the Urban Challenge in 2007, requiring contestants “to build an autonomous vehicle capable of driving in traffic, performing complex maneuvers such as merging, passing, parking and negotiating intersections.”64

In 2011, the public became aware of dramatic improvements in machine intelligence when, on Feb. 16, IBM's Watson supercomputer thrashed the top two all-time champions in the final edition of a special three-episode trivia competition on the TV show “Jeopardy.” In fact, Watson earned three times as much money as its human competitors.

Watson's performance was especially impressive because — as any fan of “Jeopardy” knows — the show's questions involve double entendres and other tricks in phrasing.65

Algorithms and AI Spread

In the mid-2010s, artificial intelligence and the algorithms that underlie it began to show up in public and in workplaces.

Although Google had been testing its self-driving cars on California roads since 2009, the internet giant's cars did not venture onto public streets without human drivers until 2015, in Austin, Texas.66

Lionbridge, a translation technology company, achieved a major step forward in computers' ability to process human speech with its introduction in April 2011 of GeoFluent, a cloud-based service that instantly translates speech or written text for users, and that can translate text into multiple languages for workgroups that do not share a common language.67

Siri, Apple's speech recognition and language processing software, was introduced in 2011, initiating a flood of AI-powered, voice-interactive consumer products. Google's Google Now debuted in 2012, and Microsoft's Cortana and Amazon's Alexa followed in 2014.68

AI also made striking inroads into white-collar workplaces in 2012 when WorkFusion, a platform developed at MIT's Computer Science and Artificial Intelligence Lab, went on the market. WorkFusion is essentially an automated project manager that selectively assigns work to humans — even posting the assignments on sites such as Craigslist — and then evaluates the research and writing the humans produce, reassigning it if it is not up to standards.

“As the workers complete their assigned tasks, WorkFusion's machine-learning algorithms continuously look for opportunities to further automate the process,” wrote software developer and futurist Ford. “In other words, even as the freelancers work under the direction of the system, they are simultaneously generating the training data that will gradually lead to their replacement with full automation.”69

As robots' motor skills improve, more prosaic jobs are increasingly being turned over to machines.

Momentum Machines, for example, in 2014 introduced a robot that can grill a hamburger, place it on a bun, layer it with lettuce, tomatoes, pickles and onions and wrap it in paper. The company says the machine can take the place of two to three full-time workers, turning out 360 burgers per hour. “Our device isn't meant to make employees more efficient,” said co-founder Alexandros Vardakostas. “It's meant to completely obviate them.”70

Sales of industrial robots more than doubled between 1995 and 2013, according to the International Federation of Robotics, an industry trade group, with more than 178,000 sold in 2013.71 In 2014, sales climbed 27 percent — to about 225,000 units — with automotive and electronics industries, especially those in China and South Korea, accounting for the lion's share of the increase.72

The Robotic Industries Association, another industry trade group, reported that North American companies ordered 27,685 robots valued at $1.6 billion in 2014, an increase of 28 percent in units and 19 percent in value over the previous year.73

Also in 2014, Facebook announced that its AI-powered DeepFace facial-recognition program had achieved an accuracy rate of 97.25 percent; humans, by comparison, can recognize faces 97.5 percent of the time.74

And in July 2014, Google released details about Sibyl, the company's machine-learning platform that tracks massive amounts of data to allow Google to make predictions about user behavior. Sibyl, for example, enables YouTube to guess which videos a website visitor would want to see.75

As AI and its algorithms grow increasingly powerful and inscrutable, researchers want the machines to be able to explain their thinking so humans will know whether to trust AI's findings. DARPA in March 2017 announced that it had chosen 13 projects from academia and industry to participate in its new Explainable Artificial Intelligence program, which seeks to pull back the curtain on AI's decision-making.

“It's often the nature of these machine-learning systems that they produce a lot of false alarms,” said David Gunning, the program's manager at DARPA. “So an intel analyst really needs extra help to understand why a recommendation was made.”76

Go to top

Current Situation

Russian Hacking

Russian hackers and others use AI to manipulate the views of Americans by mining data and disseminating fake news on social media sites, according to computer experts.

“Where there were once a couple dozen human operators stitching together a few divisive messages during working hours in Moscow to pick at the digital halls of our democracy,” wrote Dipayan Ghosh, a fellow at New America and a former technology adviser to the Obama White House, “there will soon be countless AI systems testing and probing a plethora of content on a vast field of social media user audiences that are highly segmented by race, ethnicity, gender, location, socioeconomic class and political leaning.”77

To counter AI-generated misinformation campaigns, Ghosh urges improving algorithms.

“Given the scale of platforms like Google and Facebook — with billions of people using the platforms — you cannot have just humans checking the veracity of certain content,” he told CQ Researcher. “You need AI to be checking content and essentially checking its veracity continuously and flagging possible fake information for human review.”

Although current technology is not up to the task, he says, “as the AI gets better and better and as these companies hire more and more people, I think they have a good chance of [making] sure this does not become a broader problem going forward. It will be a big challenge.”

The controversy over Cambridge Analytica, a consulting firm that worked with the Trump campaign during the 2016 presidential election, shows how big that challenge could be. University of Cambridge psychology professor Aleksandr Kogan developed an app that gathered data on up to 87 million Facebook users. The data were then analyzed for user personality traits, and that analysis was sold to Cambridge Analytica.78

The company, in turn, used the data to identify potential Trump supporters and tailor pitches to them, a strategy known in politics as microtargeting. Kogan said the software he used to generate psychological profiles was ineffective. “In fact, from our subsequent research on the topic,” he wrote, “we found out that the predictions we gave [Cambridge Analytica] were 6 times more likely to get all 5 of a person's personality traits wrong as it was to get them all correct. In short, even if the data was used by a campaign for microtargeting, it could realistically only hurt their efforts.”79

Facebook Chairman and CEO Mark Zuckerberg arrives to testify before the House Energy and Commerce Committee in Washington on April 11, 2018. Zuckerberg responded to reports that Cambridge Analytica, a British political consulting firm linked to the Trump campaign, used Facebook data during the 2016 presidential campaign to identify potential Trump supporters. (Getty Images/Chip Somodevilla)

Still, the situation sufficiently alarmed Congress that Facebook founder and CEO Mark Zuckerberg was called in to testify in early April.

Zuckerberg told senators that Facebook would be “investigating many apps, tens of thousands of apps, and if we find any suspicious activity, we're going to conduct a full audit of those apps to understand how they're using their data and if they're doing anything improper. If we find that they're doing anything improper, we'll ban them from Facebook, and we will tell everyone affected.”80

Zuckerberg also said his company had recently deployed AI tools that could detect malicious activity aimed at influencing elections.

Sen. Richard Blumenthal, D-Conn., said Facebook could not regulate itself and that Congress must act. “The old saying: ‘There ought to be a law.’ There has to be a law,” he said. “Unless there's a law, their business model is going to continue to maximize profit over privacy.”

Sen. John Thune, R-S.D., echoed Blumenthal when he said that “in the past, many of my colleagues on both sides of the aisle have been willing to defer to tech companies' efforts to regulate themselves. But this may be changing.”81

Congress Stirs

In addition to the Senate hearing on the misuse of Facebook data, the House Subcommittee on Information Technology held a hearing in February on artificial intelligence, looking at ways public policy can help the United States remain a world leader in the technology.

Intel's Khosrowshahi told the panel that the federal government “can play an important role in enabling the further development of AI technology. Since data is fuel for AI, the U.S. government should embrace open data policies.” He also urged the government to increase funding for scientific research and for programs to teach workers the skills needed to develop AI.82

Two bills under consideration also would have a bearing on the safety of artificial intelligence.

The SELF DRIVE Act, which passed the House last year and is now being considered by the Senate Committee on Commerce, Science and Transportation, encourages the testing and deployment of autonomous vehicles; requires developers to certify that the technology is safe; and requires manufacturers to develop written cybersecurity and privacy plans for self-driving vehicles.

The bill also pre-empts states from enacting their own laws regarding the design, construction or performance of autonomous vehicles, a controversial provision.

In a letter to Congress, the National Governors Association complained that the bill encroaches on safety regulations that remain under the states' purview. The governors also expressed concern about a lack of state representation on the councils and advisory groups proposed in the legislation.

“Especially with respect to the cybersecurity advisory council, the sharing of threat information with state government will be a critical component of preventing and mitigating security threats in autonomous vehicles,” the association wrote.83

The other major legislation — the Future of Artificial Intelligence Act of 2017 — is a bipartisan bill that would create a committee of experts from within and outside government to advise the secretary of Commerce on artificial intelligence, its economic effects and the legal and ethical issues it raises, such as algorithmic bias.84

The bill has not cleared committee and its future is uncertain, although Rep. Ted Lieu, D-Calif., one of the bill's co-sponsors, says he hopes the legislation's bipartisan sponsorship will help it gain passage.

“Nothing is going to have a greater impact on American competitiveness and innovation than developments in machine learning and AI,” says Lieu. “As members of Congress, we need to make sure these powerful tools are deployed safely and responsibly — and that the right people are talking to each other about how to prepare for what's coming. My bill will help bring together policymakers and industry experts to sketch out roles and relationships that will become crucial as this technology proliferates.”

The Trump administration is calling for light and perhaps even reduced regulation of AI. The day after a May 10 meeting with industry representatives, a White House statement said, “Overly burdensome regulations do not stop innovation — they just move it overseas.”85

Pressure to Act

Despite the reluctance of Congress and the White House to push for comprehensive regulation of algorithms and AI, pressure from outside the federal government is increasing.

Europe's new privacy law — the General Data Protection Regulation, or GDPR — which went into effect in May, requires U.S. companies doing business in Europe to provide to EU citizens “meaningful information about the logic” of automated decision-making processes. “In many cases, global companies that do business in Europe will need to disclose what factors go into the algorithms they use,” said Mark MacCarthy, senior vice president of public policy for the Software and Information Industry Association, a trade industry group in Washington.86

Similarly, the New York City Council unanimously passed legislation in December to establish a task force to examine the city's automated decision systems — which are used to allocate city resources on everything from firehouses to food stamps — to reduce bias and make them more open to examination.87

“I don't know what [an algorithm] is. I don't know how it works. I don't know what factors go into it,” said City Councilman James Vacca of the Bronx, sponsor of the legislation. “As we advance into the 21st century, we must ensure our government is not ‘black-boxed.’ I have proposed this legislation not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public.”88

Even as other jurisdictions try to fill in the regulatory gaps, critics say the federal government still needs to act.

“The industry has done an incredibly good job of making sure that it stays in an environment that is essentially unregulated,” says Ghosh. “What we need is to defeat political gridlock to offer more transparency into how these technologies work, because I don't see any other way of holding industry accountable.”

Lieu agrees, saying greater congressional oversight is necessary. “I think there's a happy medium between overbearing regulation that stifles innovation and a completely hands-off approach to such a powerful set of tools,” he says. “We've tried that, and it hasn't worked.”

Go to top


Outperforming Humans

As algorithms improve, so will machine learning and AI. Futurist and inventor Ray Kurzweil famously predicts that by 2029, an AI program will pass the Turing test, devised to measure whether a computer has reached human levels of intelligence. Kurzweil, a director of engineering at Google since 2012, also predicts that by 2045 AI will attain “the singularity” — the point at which artificial intelligence will be so advanced, “we will multiply our effective intelligence a billionfold.”89

While some AI programs already can outperform humans in certain tasks, such as performing rapid mathematical operations, some experts believe Kurzweil's prediction is off by decades or more.

“I think the singularity could happen, but my sense is it could be a lot further off than what Kurzweil is suggesting,” says futurist Ford, who also believes Kurzweil is overly optimistic about how soon AI can pass the Turing test. “AI will eventually build a machine that can think at a human level, but I don't think it's going to happen 11 years from now,” he says. “It's a lot further in the future. I would say at least 50 years and maybe 100 years. But these are just wild guesses.”

Last summer, the Future of Humanity Institute at the University of Oxford asked 1,634 experts in algorithms and artificial intelligence when they expected machines to be outperforming humans at a variety of tasks. The institute reported the following predictions by the 352 who responded:

  • By 2024 AI will outperform translators of foreign languages.

  • By 2031 AI will outperform retail salespersons.

  • By 2049 AI will write a best-selling book.

  • By 2053 AI will be capable of working as a surgeon.90

Some experts say AI's very nature points to the likelihood that once the next major advance comes, it will advance at an exponential rate.

“The day that the first self-driving car company goes commercial, I would imagine that one year from that day, there's going to be four other companies with self-driving cars on the road,” says former Justice Department attorney Tutt. What's more, advances in one AI sector are expected to bring rapid advances in other areas, he says.

“When you have software that is able to do something as sophisticated as drive a car, its adaptability to other solution spaces is going to be quite high,” Tutt says.

At the same time, AI's opaque nature — and especially the ability of machine learning to change its own code without human help — means regulating AI will remain difficult.

“It's a problem that is going to require a lot of trial and error,” Tutt says.

According to other experts, however, advocates of comprehensive regulation of AI are responding to baseless fears. “Their thinking has been conditioned by years upon years of popular culture narrative about AI and robotics that are dripping with dystopian dread,” says Thierer of the Mercatus Center. “There are no plots about AI and robotics that end with a good-news story.”

If those fears win out, says Thierer, humans will inevitably forgo a brighter future. “The problem with that thinking is that it means we need to foreclose almost all AI-based innovation,” he says. “Only by allowing a certain amount of experimentation with new technologies can we find new life-enriching or even lifesaving applications.”

Go to top


Does AI require a new federal regulatory structure?


Matthew U. Scherer
Attorney in Employment Law and AI issues, Littler Mendelson. Written for CQ Researcher, July 2018

Artificial intelligence (AI) represents a departure from previous generations of human technology in many ways. In law and policy, the most radical feature of AI is that for the first time in human history, consequential decisions are being made by something other than a human being. From a legal perspective, this is problematic for many reasons, but two in particular stick out.

First, our system of laws assumes that humans will make all legally significant decisions. True, corporations and other legal “persons” are not themselves human beings. But dig down even a little, and it quickly becomes obvious that the law always assumes that humans will retain ultimate control. This assumption is so obvious and fundamental that it rarely is spelled out in legal codes. As a result, as AI systems start making more decisions that carry legal consequences, many of their decisions and acts may fall into a legal gray area.

The second major reason for concern is that people have a worrying tendency to blindly trust machines that are marketed as being designed to perform a particular task. People will often follow the directions provided by a GPS even if they know exactly where they are going, and even if they know that the GPS’ suggested route is incorrect. This means that even decisions that are not supposed to be delegated to machines under the law may nevertheless be delegated to machines in practice — particularly if the machine is marketed as having the necessary capabilities.

These two factors point to the need for legal reform and, where necessary, public oversight of how AI systems are marketed and operated. Unfortunately, traditional centralized forms of government regulation may prove particularly ill-suited for managing the risks associated with AI. Those risks — unlike earlier large-scale, human-made risks such as environmental threats, nuclear technology and mass-produced consumer products — do not require a large physical footprint or a centralized production facility.

Indeed, AI researchers may work together on projects at different times and from different places (or even different countries) without any conscious coordination. That too represents a unique challenge for regulators, who are used to having large, highly visible targets for regulation. AI thus may not require “regulation” in the traditional sense, but it will require fundamentally rethinking how we manage the risks associated with new technologies.


Adam Thierer
Senior Research Fellow, Mercatus Center, George Mason University. Written for CQ Researcher, July 2018

Artificial intelligence (AI) is already all around us and is helping make our lives better. It holds the promise of further helping improve our economy and even saving lives. But some worry about the dangers of autonomous systems and machine learning and wonder whether more regulation is needed.

The question of whether we need a new federal regulatory structure for AI and robotics implies an absence of any law or oversight for them. In reality, these technologies already are governed by a wide variety of policies and agencies.

The Federal Trade Commission, National Highway Traffic Safety Administration, Federal Aviation Administration, National Telecommunications and Information Administration, Department of Homeland Security and the White House itself already have looked into various facets of AI policy and issued reports on it. Moreover, plenty of policies and procedures already govern AI: civil rights law, product defects law, the law of torts, contract law, property law, class-action lawsuits and a wide variety of consumer protection policies aimed at addressing “unfair and deceptive practices.”

Thus, when people ask whether we need a new federal regulatory structure for AI, what they mean to suggest is that we need a new, technocratic approach to regulating autonomous systems, machine learning and robotics. This means a dedicated law (or set of laws) and likely a new federal bureaucracy to preemptively regulate this rapidly evolving set of technologies.

That would be a mistake. While it's always worth thinking about the dangers new technologies might pose and whether new policies are needed, we shouldn't let fears about worst-case scenarios lead us to create another huge federal bureaucracy until we have exhausted other policy solutions.

Top-down, technocratic laws and bureaucracies tend to focus on preemptive remedies that aim to predict the future and on hypothetical problems that may never come about. Although such laws and agencies are well-intentioned, they pose many trade-offs. Heavy-handed preemptive restraints on innovation can discourage new entry into emerging technological fields, increase compliance costs and create more risk and uncertainty for entrepreneurs and investors. For a nation, this can seriously threaten economic growth, competitive advantage and long-run prosperity.

To the maximum extent possible, then, policymakers should work to make “permissionless innovation” the lodestar of AI policy and avoid letting Chicken Little thinking lead to the creation of a new, innovation-limiting federal regulatory regime.

Go to top


Chronology

1930s–1950s: Early computers lay the groundwork for artificial intelligence.
1936: British mathematician Alan Turing submits an article describing the “Turing machine,” a thought experiment that served as the blueprint for the first electronic digital computers.
1946: ENIAC, the first general-purpose electronic computer, is completed and put to work on calculations for development of the hydrogen bomb.
1950: In a seminal article, Turing raises the question of whether machines can think; his question is credited with giving birth to the field of artificial intelligence.
1951: Princeton University graduate students Marvin Minsky and Dean Edmonds construct the first simple neural network machine, which simulates a rat finding its way through a maze.
1955: John McCarthy, an assistant professor of mathematics at Dartmouth College, coins the term “artificial intelligence.”
1956: Dartmouth College hosts a conference that lays out the primary directions for research on artificial intelligence.
1960s–1970s: Researchers seek to advance artificial intelligence.
1967: MIT programmer Richard Greenblatt develops MacHack, a chess program that plays at the level of a good high school player.
1969: SRI International demonstrates Shakey, the first robot to employ artificial intelligence.
1979: The Stanford Cart — the first computer-controlled, autonomous vehicle — successfully circumnavigates the Stanford University AI laboratory.
1980s–1990s: AI begins to have commercial applications.
1980: The first expert systems — programs that emulate human decision-makers — are introduced; a variety of data-analysis programs appear through the decade.
1985: Inventor Danny Hillis designs the Connection Machine, which uses parallel computing — in which several processors execute multiple applications or computations simultaneously — to bring new power to AI.
1997: IBM's Deep Blue supercomputer defeats chess champion Garry Kasparov in a match.
2000–Present: AI appears in consumer products.
2002: iRobot introduces the Roomba, a robotic vacuum that autonomously maneuvers through rooms.
2009: Google begins testing self-driving cars.
2011: IBM's Watson supercomputer beats two former “Jeopardy!” champions in a three-day televised tournament…. Apple launches speech-recognition and language-processing software called Siri; Google follows in 2012 with Google Now and Microsoft in 2013 with Cortana.
2012: WorkFusion, a platform that assigns work to humans and evaluates their performance, hits the market.
2014: Facebook announces that its AI-powered DeepFace facial-recognition program has achieved 97.25 percent accuracy, virtually matching that of humans.
2017: The Defense Advanced Research Projects Agency launches a program that seeks to track and explain how AI arrives at decisions.
2018: Facebook CEO Mark Zuckerberg testifies before Congress about the misuse of users' personal data (April)…. Europe's new privacy law — the General Data Protection Regulation — takes effect (May); it requires companies doing business in Europe to provide European Union citizens with “meaningful information about the logic” of automated decision-making processes…. An IBM computer debates humans in a San Francisco competition; it wins on knowledge, but the humans have better delivery (June).

Go to top

Short Features

AI will raise “incredibly complicated questions.”

On a March night in Tempe, Ariz., 49-year-old Elaine Herzberg stepped out of the shadows to walk her bicycle across the street. An autonomous Uber vehicle with a backup human driver struck and killed her. Her death, legal analysts say, raises difficult questions as to who should be legally liable for the tragedy: the maker of the software that controlled the car? Uber? The human who was supposed to intervene if trouble arose?

A Hino Motors autonomous bus maneuvers on a test course in Tokyo in May 2018. The spread of self-driving vehicles is raising questions about who will be liable when accidents occur. (Getty Images/Bloomberg/Kiyoshi Ota)

Experts say the law on artificial intelligence (AI) liability remains unclear because no major cases have come before the courts. And the Tempe case will provide no opportunity for judges to weigh in: Uber quickly paid an undisclosed amount to Herzberg's family, apparently before a lawsuit was filed.1

When the courts do take up the liability question, analysts say, judges will likely have their hands full trying to determine culpability because humans find it difficult to understand how AI operates.

“AI decision-making rules … are completely different from how the human brain processes information,” wrote Finale Doshi-Velez, an assistant professor of computer science at Harvard University, and Mason Kortz, a clinical instructional fellow at the Harvard Law School cyberlaw clinic. “If a self-driving car could spit out a raw record of what it was ‘thinking’ at the time of a crash, it would probably be meaningless to a human.”2

And because multiple programmers in different locations or companies often are responsible for writing complex AI programs, authorities will find it hard to assign blame for a specific action. More challenging still, if an AI program includes machine learning — the ability of algorithms to “learn” from data they gather — it may change its behavior in ways its creators never foresaw, further muddying the legal waters.

“It is not clear to me who should be held responsible,” says lawyer Matthew U. Scherer, who specializes in artificial intelligence-related issues. “The people who designed [AI] to a certain degree have an obligation to build in safeguards to ensure that their products are not misused. But it is difficult to know how feasible that will be as a form of control for machine learning.”

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research group seeking to build advanced AI capabilities, says a company or individual that puts an autonomous vehicle on the market “absolutely” is liable for any damage that vehicle causes, even when machine learning played the key role in the accident. “We are responsible for our AI … even if the car is on cruise control,” he says.

Other experts argue that holding creators of AI directly responsible for damages is unrealistic. “Is it really my fault if I have a product that I deploy into an environment and the environment has a couple of traits that the program processes and that causes the program to do something bad?” says Jack Clark, strategy and communications director at OpenAI, an industry-funded nonprofit focused on developing safe AI. “It's very hard to think of everything that can go wrong and protect against everything that can go wrong.”

While AI presents clear challenges in civil litigation, the situation in criminal cases is even more daunting.

Criminal law in the United States “attaches great importance to the concept of mens rea — the intending mind,” wrote the standing committee of Stanford University's One Hundred Year Study on Artificial Intelligence project. Can an artificial intelligence “intend” to commit a crime? “As AI applications engage in behavior that, were it done by a human, would constitute a crime, courts and other legal actors will have to puzzle through whom to hold accountable and on what theory,” the committee said.3

The uncertainties of AI liability have led some experts to suggest adopting a legal framework similar to the one on vaccines. Because vaccines offer obvious benefits to public health but can cause bad reactions in some individuals, Congress established a federally funded no-fault system to compensate those injured by mandated vaccines. Rebecca J. Krystosek, managing editor of the Minnesota Law Review, supports the strategy but only for medical algorithms and driverless cars, both of which offer what she called “a clear and compelling public safety benefit.” In other cases, she said, “the law should not afford any special ‘out’ for algorithmic harms.”4

Whatever approach is taken, AI is going to raise “incredibly complicated questions,” says lawyer Andrew Tutt, who studies artificial intelligence issues. The courts, he says, were barely able to understand cars when they replaced horse-drawn carriages at the beginning of the 20th century. “Imagine them trying to figure out whether the data [used to teach AI] was really what was responsible for the vehicle that crashed.”

— Patrick Marshall

[1] “Uber appears to have reached settlement with family of woman who died in Arizona accident,” The Associated Press, Los Angeles Times, March 29, 2018,

[2] Finale Doshi-Velez and Mason Kortz, “A.I. is more powerful than ever. How do we hold it accountable?” The Washington Post, March 20, 2018,

[3] “Artificial Intelligence and Life in 2030,” Report of the 2015 Study Panel, Stanford University, September 2016, p. 49,

[4] Rebecca J. Krystosek, “The Algorithm Made Me Do It and Other Bad Excuses,” Minnesota Law Review, May 17, 2017,

Go to top

Some argue AI poses terrible risks, but others aren't worried.

They are fodder for countless dystopian sci-fi books and films: rogue robots that turn against humankind.

Actor Arnold Schwarzenegger attends the premiere of Terminator Genisys on July 2, 2015, in Seoul, South Korea. Experts are debating whether rampaging robots as depicted by Hollywood will be possible someday. (Getty Images/Paramount Pictures International/Chung Sung-Jun)

Some experts say the threat from artificial intelligence is real.

Interviewed in a 2018 documentary about AI's dangers, Tesla founder Elon Musk warned: “We are rapidly headed towards digital super intelligence that far exceeds any human,” adding that these machines have the potential to become “an immortal dictator from which we would never escape.”5

Stuart Armstrong, a fellow at the Future of Humanity Institute, a research group at Oxford University that studies risk, agrees the threat to humanity posed by autonomous machines is great. “Humans steer the future not because we're the strongest or the fastest, but because we're the smartest,” he told a reporter in 2015. “When machines become smarter than humans, we'll be handing them the steering wheel.”6

But others say talk of AI's threat to the human race is greatly overblown.

“I am not terribly worried,” says Mark MacCarthy, senior vice president of public policy for the Software and Information Industry Association, an industry group.

No research is yet aimed at developing “general AI” — AI that is smart enough to adapt to different environments and activities, he says. “Almost all real computer science research at the university level and [in] research operations of businesses is focused on narrow AI — it's just a tool trying to accomplish a particular purpose.”

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, an AI research group, also says he is not worried — at least not yet. “I believe that we have canaries in the coal mine — that is, identifiable points that, if reached, could lead us to be more alarmed,” he says. “One example is a physical robot capable of replicating itself. We don't have any such robot or machine today.”

Even experts who consider talk of an AI apocalypse overblown say humanity should develop ways to ensure AI's safety before the technology becomes capable of modifying or replicating itself.

The core problem, says Jack Clark, strategy and communications director at OpenAI, an industry-funded group that conducts research on developing advanced AI, is: “How do you ensure that increasingly autonomous systems that have some self-modification behavior cannot go haywire? It's an extremely unpleasant problem.”

“I think it is fair to say we have barely scratched the surface of the important safety and basic security research that can be done in AI,” Ben Buchanan, a fellow at the Belfer Center's Cybersecurity Project at Harvard University, told Congress in April.7

One strategy for ensuring safe artificial intelligence is to create an abort system that would allow a human to interrupt any algorithm that is misbehaving or becoming threatening.8 At the same time, researchers are aware that AI might eventually develop the ability to take defensive action. AI experts at DeepMind, a London-based research firm that Google acquired in 2014, and at the University of Oxford are seeking ways to prevent that from happening.9
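The interruption idea can be sketched in miniature. The toy agent below is an illustrative assumption, not code from the DeepMind or Oxford research: a human "big red button" overrides the agent's chosen action, and overridden steps are excluded from the learning update, so the agent never learns that interruptions cost it reward.

```python
class InterruptibleAgent:
    """Toy sketch of a safely interruptible agent.

    A human override forces a safe action, and overridden steps are
    excluded from learning, so the agent gains no incentive to resist
    future interruptions. All names here are illustrative.
    """

    def __init__(self):
        # toy value estimates for two possible actions
        self.action_values = {"go": 0.0, "stop": 0.0}

    def choose(self):
        # greedy policy over the toy value table
        return max(self.action_values, key=self.action_values.get)

    def step(self, interrupted, reward):
        intended = self.choose()
        # the "big red button" forces a safe action regardless of policy
        executed = "stop" if interrupted else intended
        if not interrupted:
            # learn only from uninterrupted steps (running-average update)
            v = self.action_values[intended]
            self.action_values[intended] = v + 0.1 * (reward - v)
        return executed


agent = InterruptibleAgent()
print(agent.step(interrupted=True, reward=1.0))   # human override
print(agent.step(interrupted=False, reward=1.0))  # normal learning step
```

The key design choice is that interrupted steps never update the value table: if the agent could learn from them, it would eventually associate the button with lost reward and seek ways to avoid being interrupted.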

Bas Steunebrink, a researcher at IDSIA, an AI laboratory in Switzerland, is working on a different strategy — teaching AI to be safe and monitoring it until humans are convinced it poses no danger. With Steunebrink's approach — called EXPAI (experience-based artificial intelligence) — the emphasis shifts from searching for ways to control AI to developing ways to “grow” AI that has human-like ethical values.

Still, some experts are concerned that even if developers can ensure the safety of artificial intelligence, problems can result if humans misuse AI tools.

One such area is nuclear warfare. AI's advances “could spur arms races or increase the likelihood of states escalating to nuclear use” as their military capabilities improve, according to a recent paper by RAND, a California think tank that conducts research under government contracts. For example, improved sensor technologies could tempt nations to launch a nuclear strike because AI increases their chances of destroying an enemy's nuclear missiles aboard submarines or on mobile launchers.10

In a fast-moving crisis, militaries also would be relying on AI to make split-second decisions.

“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” said Andrew Lohn, co-author of the paper and an associate engineer at RAND. “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”11

— Patrick Marshall

[5] Peter Holley, “Elon Musk's nightmarish warning: AI could become ‘an immortal dictator from which we would never escape,’” The Washington Post, April 6, 2018,

[6] Patrick Sawer, “Threat from Artificial Intelligence not just Hollywood fantasy,” The Telegraph, June 27, 2015,

[7] Testimony of Ben Buchanan, House Oversight Committee Subcommittee on IT, April 18, 2018,

[8] Sam Shead, “Google has developed a ‘big red button’ that can be used to interrupt artificial intelligence and stop it from causing harm,” Business Insider, June 3, 2016,

[9] Laurent Orseau and Stuart Armstrong, “Safely Interruptible Agents,” Association for Uncertainty in Artificial Intelligence, 2016,

[10] Edward Geist and Andrew J. Lohn, “How Might Artificial Intelligence Affect the Risk of Nuclear War?” RAND Corp., 2018, pp. 1, 11,

[11] “By 2040, artificial intelligence could upend nuclear stability,” press release, RAND, April 24, 2018,

Go to top



Books

Agrawal, Ajay, Joshua Gans and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence, Harvard Business Review Press, 2018. Three professors of management and marketing explain what artificial intelligence (AI) will mean for jobs, business and the economy.

Noble, Safiya Umoja, Algorithms of Oppression: How Search Engines Reinforce Racism, New York University Press, 2018. An assistant professor of information studies at the University of California, Los Angeles, argues that the algorithms powering search engines promote bias against women and people of color.

O'Neil, Cathy, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books, 2016. A mathematician and data scientist explains how algorithms used by banks, police departments and companies discriminate against groups of people, primarily because of bias in the underlying data the algorithms analyze.


Articles

Scherer, Matthew U., “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies,” Harvard Journal of Law & Technology, Spring 2016. A lawyer who specializes in artificial intelligence-related issues proposes creating a federal agency to oversee AI.

Townsend, Tess, “The Right and Wrong Way to Regulate Artificial Intelligence,” Inc., May 3, 2016. The founder of Singularity University, a Silicon Valley think tank, does not want advances in artificial intelligence stifled by government rules, a journalist reports.

Tutt, Andrew, “An FDA for Algorithms,” Administrative Law Review, 2017. As with pharmaceuticals, harms traceable to algorithms may be difficult to detect, argues a lawyer who focuses on AI issues. He calls for a federal agency with powers similar to those of the Food and Drug Administration to regulate AI.

Wang, Yilun, and Michal Kosinski, “Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images,” Journal of Personality and Social Psychology, 2018. Two Stanford University researchers found that neural networks — programs designed to process data in a manner similar to how the human brain works — accurately detected the sexual orientation of men from photographs 81 percent of the time. Such an ability, they say, could raise privacy concerns.

Vogt, Heidi, “Should the Government Regulate Artificial Intelligence?” The Wall Street Journal, April 30, 2018. Three experts in technology and public policy debate the pros and cons of regulating AI.

Reports and Studies

“Artificial Intelligence, Automation, and the Economy,” The White House, Dec. 20, 2016. The Obama administration reported that while AI-driven automation would continue to boost the U.S. economy, workers would need to adapt their skills to automation, and policy changes would be required to help workers deal with structural changes in the economy.

“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” Future of Humanity Institute, Oxford University, February 2018. This report — produced with the help of the Centre for the Study of Existential Risk, the University of Cambridge, the Center for a New American Security, the Electronic Frontier Foundation and OpenAI — surveys potential security threats from malicious uses of artificial intelligence and proposes ways to better forecast, prevent or mitigate the harm.

Brynjolfsson, Erik, Daniel Rock and Chad Syverson, “Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics,” National Bureau of Economic Research, November 2017. Three management professors explore why artificial intelligence has not yet increased worker productivity as much as many expected, concluding that the technologies are simply taking longer than anticipated to implement.

Osoba, Osonde, and William Welser IV, “An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence,” RAND Corp., 2017. Two Pardee RAND Graduate School professors analyze the potential effects of unintended flaws in algorithms in such areas as criminal justice, public works and welfare administration.

Go to top

The Next Step

AI Ethics

Arnold, Andrew, “Can AI Help Us Predict And Prevent Crimes In The Future?” Forbes, May 29, 2018. Artificial intelligence has great potential to help law enforcement fight crime, but the technology often relies on racial and ethnic profiling, says a Forbes contributor.

Cano, Vincent, “A Spanish tech company wants to programme AI machines with ethics,” Business Insider, June 1, 2018. A technology company seeks to teach morals to machines.

Lomas, Natasha, “Accenture wants to beat unfair AI with a professional toolkit,” TechCrunch, June 9, 2018. A professional-services firm is developing a tool for algorithm designers that detects bias and highlights the influence of sensitive variables such as age, gender and race.

Facial Recognition

Harwell, Drew, “Unproven facial-recognition companies target schools, promising an end to shootings,” The Washington Post, June 7, 2018. Security contractors are marketing facial-recognition systems to schools as a way to prevent shootings, despite little transparency about the technology and little proof of its effectiveness.

Levin, Sam, “US government to use facial recognition technology at Mexico border crossing,” The Guardian, June 5, 2018. In August, U.S. Customs and Border Protection will begin using a facial-recognition surveillance system at the Texas-Mexico border to track individuals entering and leaving the United States.

Robitzski, Dan, “This Filter Makes Your Photos Indecipherable to Facial Recognition Software,” Futurism, June 1, 2018. To protect people's privacy, engineers at the University of Toronto have created a camera filter that distorts photo pixels in ways imperceptible to humans but renders faces unrecognizable to facial-recognition systems.

Health Care Industry

Coleman, Lauren deLisa, “Inside Trends And Forecast For The $3.9T AI Industry,” Forbes, May 31, 2018. Health care will be one of the biggest beneficiaries of the AI revolution, according to a former IBM executive, who predicts robots will increasingly help care for the elderly.

Kite-Powell, Jennifer, “See How This Hospital Uses Artificial Intelligence To Find Kidney Disease,” Forbes, June 8, 2018. A partnership between Mount Sinai Hospital and a health care startup aims to harness AI's potential to improve disease detection.

Shieber, Jonathan, “Bessemer launches a seed fund for startups applying machine learning to health,” TechCrunch, June 1, 2018. The oldest venture capital firm in the United States has begun a $10 million investment program focused on startups researching machine learning in health care.

Machine Learning

“Machine learning predicts World Cup winner,” MIT Technology Review, June 12, 2018. Combining machine learning, conventional statistics and a lot of clever thinking, researchers predicted Germany would win the World Cup — if it made the quarterfinals.

Anthony, Aubra, “Navigating the risks of artificial intelligence and machine learning in low-income countries,” TechCrunch, May 24, 2018. Startups and nongovernmental organizations using AI to improve international aid should be aware of the technology's limitations, says a researcher with the U.S. Agency for International Development.

Vincent, James, “Machine learning is helping computers spot arguments online before they happen,” The Verge, May 23, 2018. New software uses machine learning to detect potentially hostile online interactions and someday could help prevent conflicts on digital media platforms.

Go to top


American Civil Liberties Union
125 Broad St., New York, NY 10004
Civil rights group that is studying algorithms' impact on criminal justice, surveillance and credit and lending.

Association for the Advancement of Artificial Intelligence
445 Burgess Drive, Suite 100, Menlo Park, CA 94025
Organization that focuses on advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.

Carnegie Mellon University Robotics Institute
5000 Forbes Ave., Pittsburgh, PA 15213
Institute established in 1979 to conduct basic and applied research on robotics technologies relevant to industrial and societal tasks.

Consumer Technology Association
1919 S. Eads St., Arlington, VA 22202
Advocacy group for entrepreneurs and technology developers.

Defense Advanced Research Projects Agency
3701 N. Fairfax Drive, Arlington, VA 22203
Defense Department agency whose mission is to maintain the technological superiority of the U.S. military by sponsoring research that bridges fundamental discoveries and their military use.

Information Technology Industry Council
1101 K St., N.W., Suite 610, Washington, DC 20005
Advocacy group for technology companies.

Machine Intelligence Research Institute
2030 Addison St., #300, Berkeley, CA 94704
Organization that seeks to ensure that artificial intelligence has a positive impact on humankind.

OpenAI
San Francisco, CA
Research company focused on developing safe AI; sponsored by Microsoft, Amazon, Tesla founder Elon Musk and venture capitalist Peter Thiel, among others.

Software & Information Industry Association
1090 Vermont Ave., N.W., Sixth Floor, Washington, DC 20005-4905
Trade association for software companies, including those developing AI products.

Go to top


[1] Ted Roelofs, “Broken: The human toll of Michigan's unemployment fraud saga,” Bridge, Feb. 7, 2017,

[2] Robert N. Charette, “Michigan's MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold,” IEEE Spectrum, Jan. 24, 2018; David Eggert, “State of Michigan apologizes for unemployment fiasco, wants to reduce penalties,” The Associated Press, Jan. 30, 2017,

[3] Roelofs, op. cit.; Jack Lessenberry, “State unemployment computer had anything but the golden touch,” Traverse City Record Eagle, Dec. 31, 2017,

[4] Lili Cheng, “Why You Shouldn't Be Afraid of Artificial Intelligence,” Time, Jan. 4, 2018,

[5] “Artificial Intelligence Poised to Double Annual Economic Growth Rate in 12 Developed Economies and Boost Labor Productivity by up to 40 Percent by 2035, According to New Research by Accenture,” press release, Accenture, Sept. 28, 2016,

[6] Catherine Clifford, “Bill Gates: ‘A.I. can be our friend,’” CNBC, Feb. 16, 2018,

[7] “Cathy O'Neil: Do Algorithms Perpetuate Human Bias?” TED Radio Hour, NPR, Jan. 26, 2018,

[8] Andrew Tutt, “An FDA for Algorithms,” Administrative Law Review, 2017, p. 117,

[9] “Joy Buolamwini, How Does Facial Recognition Software See Skin Color?” TED Talks, NPR, Jan. 26, 2018,

[10] Steve Lohr, “Facial Recognition Is Accurate, if You're a White Guy,” The New York Times, Feb. 9, 2018,

[11] Aaron Glantz and Emmanuel Martinez, “Detroit-area blacks twice as likely to be denied home loans,” Detroit News, Feb. 15, 2018; Virginia Eubanks, “The dangers of letting algorithms make decisions in law enforcement, welfare, and child protection,” Slate, April 30, 2015,

[12] “Court software may be no more accurate than web survey takers in predicting criminal risk,” press release, Eurekalert, Jan. 17, 2018,

[13] Aaron Smith and Monica Anderson, “Automation in Everyday Life,” Pew Research Center, Oct. 4, 2017,

[14] “Artificial Intelligence, Automation, and the Economy,” The White House, Dec. 20, 2016,

[15] Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies,” Harvard Journal of Law & Technology, Spring 2016, p. 365,

[16] “Speech analysis software predicted psychosis in at-risk patients with up to 83 percent accuracy,” press release, Mount Sinai Hospital, Jan. 22, 2018,

[17] “Algorithm tool works to silence online chatroom sex predators,” press release, Purdue University, April 17, 2018,

[18] Alex Hern, “Stephen Hawking: AI will be ‘either best thing or worst thing’ for humanity,” The Guardian, Oct. 19, 2016,

[19] Diana Budds, “Biased AI Is A Threat To Civil Liberties,” Co.Design, July 25, 2017,

[20] Adam Liptak, “Sent to Prison by a Software Program's Secret Algorithms,” The New York Times, May 1, 2017,

[21] Budds, op. cit.

[22] Cameron Langford, “Houston Schools Must Face Teacher Evaluation Lawsuit,” Courthouse News Service, May 8, 2017,

[23] Natasha Singer, “How Companies Scour Our Digital Lives for Clues to Our Health,” The New York Times, Feb. 25, 2018,

[24] Ibid.

[25] Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016).

[26] Kumba Sennaar, “How America's Top Four Insurance Companies are Using Machine Learning,” Tech Emergence, June 18, 2018,

[27] Michal Kosinski and Yilun Wang, “Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images,” Journal of Personality and Social Psychology, American Psychological Association, 2018,

[28] Samuel Gibbs, “Elon Musk: regulate AI to combat ‘existential threat’ before it's too late,” The Guardian, July 17, 2017; Kurt Wagner, “Elon Musk just told a group of America's governors that we need to regulate AI before it's too late,” Recode, July 15, 2017,

[29] “ITI AI Policy Principles,” Information Technology Industry Council, Oct. 24, 2017,

[30] Testimony of Amir Khosrowshahi before the House Committee on Oversight and Government Reform, Subcommittee on Information Technology, Feb. 14, 2018,

[31] Heidi Vogt, “Should the Government Regulate Artificial Intelligence?” The Wall Street Journal, April 30, 2018,

[32] Scherer, op. cit., p. 356.

[33] Jacques Bughin et al., “Artificial Intelligence: The Next Digital Frontier?” McKinsey & Company, June 2017, pp. 9–10; “The 2016 AI Recap: Startups See Record High In Deals And Funding,” CB Insights, Jan. 19, 2017,

[34] Ben Casselman, “Robots? Training? Factories Tackle the Productivity Puzzle,” San Francisco Chronicle, June 28, 2018,

[35] “Experts Predict When Artificial Intelligence Will Exceed Human Performance,” Emerging Tech from the arXiv, MIT Technology Review, May 31, 2017,

[36] Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible are Jobs to Computerization?” Oxford Martin, Sept. 17, 2013,

[37] Melanie Arntz, Terry Gregory and Ulrich Zierahn, “The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis,” Organisation for Economic Co-operation and Development, June 16, 2016,

[38] Testimony of Charles Isbell before the House Committee on Oversight and Government Reform, Subcommittee on Information Technology, Feb. 14, 2018,

[39] Testimony of Gary Shapiro before the House Committee on Oversight and Government Reform Subcommittee on Information Technology, Feb. 14, 2018,

[40] Lisa Beilfuss, “The Future Robo Adviser: Smart and Ethical?” The Wall Street Journal, June 19, 2018,

[41] “Artificial Intelligence and Life in 2030,” One Hundred Year Study on Artificial Intelligence, Stanford University, September 2016,

[42] Ibid. For background, see Sarah Glazer, “Universal Basic Income,” CQ Researcher, Sept. 8, 2017, pp. 725–48.

[43] “Artificial Intelligence, Automation and the Economy,” op. cit., p. 2.

[44] Souvik Das, “The Origin and Evolution of Algorithms,” Digit, May 3, 2016,

[45] Ibid.; A.M. Turing, “Computing Machinery and Intelligence,” Mind, October 1950, pp. 433–460,

[46] “Turing Machine,” Encyclopedia Britannica,

[47] “Atanasoff-Berry Computer,” Computer History Museum,

[48] “Konrad Zuse,” Computer History Museum,

[49] Michael R. Swaine and Paul A. Freiberger, “Eniac,” Encyclopedia Britannica,

[50] “1951 — SNARC Maze Solver — Minsky/Edmonds (American),”,

[51] J. McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” Aug. 31, 1955,

[52] Joseph Weizenbaum, “ELIZA — A Computer Program For the Study of Natural Language Communication Between Man and Machine,” Communications of the ACM, January 1966,

[53] “Joseph Weizenbaum, professor emeritus of computer science, 85,” MIT News, March 10, 2008,

[54] “Shakey,” Artificial Intelligence Center,

[55] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014), p. 7.

[56] Avron Barr and Shirley Tessler, “Expert Systems: A Technology Before Its Time,” Stanford University, undated,

[57] Alex Roland and Philip Shiman, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993 (2002).

[58] P.W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (2010), p. 58.

[59] Bruce Weber, “Swift and Slashing, Computer Topples Kasparov,” The New York Times, May 12, 1997,

[60] “Aibos History,” Aibos,

[61] “Tired of Chasing Dustballs? Let a Robot Do the Job,” The New York Times, Sept. 26, 2002,

[62] “Spirit and Opportunity,” Program & Missions, NASA,

[63] Alex Davies, “An Oral History of the Darpa Grand Challenge, the Grueling Robot Race That Launched the Self-Driving Car,” Wired, Aug. 3, 2017,

[64] “Urban Challenge,” Defense Advanced Research Projects Agency,

[65] Anahad O'Connor, “Watson Dominates ‘Jeopardy’ but Stumbles Over Geography,” The New York Times, Feb. 15, 2011,

[66] Kirsten Korosec, “Google self-driving cars arrive in Austin,” Fortune, July 7, 2015,

[67] Donald A. DePalma, “Lionbridge Announces Availability of GeoFluent Machine Translation,” Common Sense Advisory, April 12, 2011,

[68] Ava Mutchler, “Voice Assistant Timeline: A Short History of the Voice Revolution,” July 14, 2017,

[69] Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (2015), p. 95.

[70] Dylan Love, “Here's the burger-flipping robot that could put fast-food workers out of a job,” Business Insider, Aug. 11, 2014,

[71] “Industrial robot sales increase worldwide by 29 percent,” International Federation of Robotics,

[72] “Global industrial robot sales rose 27 pct in 2014,” Reuters, March 22, 2015,

[73] “North American Robotics Market has Strongest Year Ever in 2014,” Robotic Industries Association, Feb. 4, 2015,

[74] Amit Chowdhry, “Facebook's DeepFace Software Can Match Faces With 97.25% Accuracy,” Forbes, March 18, 2014,

[75] Alex Woodie, “Inside Sibyl, Google's Massively Parallel Machine Learning Platform,” Datanami, July 17, 2014,

[76] Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11, 2017; David Gunning, “Explainable Artificial Intelligence (XAI),” Defense Advanced Research Projects Agency,

[77] Dipayan Ghosh, “Beware of A.I. in Social Media Advertising,” The New York Times, March 26, 2018,

[78] Sheera Frenkel, “Scholars Have Data on Millions of Facebook Users. Who's Guarding It?” The New York Times, May 6, 2018,

[79] Lauren Etter and Sarah Frier, “Facebook App Developer Kogan Defends His Actions With User Data,” Bloomberg, March 21, 2018,

[80] “Mark Zuckerberg Testimony: Senators Question Facebook's Commitment to Privacy,” The New York Times, April 10, 2018,

[81] Ibid.

[82] “Game Changers: Artificial Intelligence Part I, Artificial Intelligence and Public Policy,” Subcommittee on Information Technology, Feb. 14, 2018,

[83] “Self Drive Act,” National Governors Association, July 27, 2017,

[84] “US Politicians Call for ‘Future of AI Act’, May Shape Legal Factors,” Artificial Lawyer, Dec. 18, 2017,

[85] “White House Creates AI Committee, Favors Light Regs, Education,” MeriTalk, May 11, 2018,

[86] Mark MacCarthy, “EU privacy law says companies need to explain the algorithms they use,” CIO, Oct. 19, 2017,

[87] Julia Powles, “New York City's Bold, Flawed Attempt to Make Algorithms Accountable,” The New Yorker, Dec. 20, 2017,

[88] Elizabeth Zima, “Could New York City's AI Transparency Bill Be a Model for the Country?” Government Technology, Jan. 4, 2018,

[89] Dom Galeon and Christianna Reedy, “Kurzweil Claims That the Singularity Will Happen by 2045,” Futurism, Oct. 5, 2017,

[90] “Experts Predict When Artificial Intelligence Will Exceed Human Performance,” op. cit.


About the Author

Patrick Marshall, author of this week's edition of CQ Researcher  

Patrick Marshall, a freelance policy and technology writer in Seattle, is a technology columnist for The Seattle Times and Government Computer News. He has a bachelor's degree in anthropology from the University of California, Santa Cruz, and a master's degree in international studies from the Fletcher School of Law and Diplomacy at Tufts University.


Document APA Citation
Marshall, P. (2018, July 6). Algorithms and artificial intelligence. CQ Researcher, 28, 561–584. Retrieved from
Document ID: cqresrre2018070600
Document URL:
ISSUE TRACKER for Related Reports
Artificial Intelligence
Jul. 06, 2018  Algorithms and Artificial Intelligence
Sep. 25, 2015  Robotics and the Economy
Jan. 23, 2015  Robotic Warfare
Apr. 22, 2011  Artificial Intelligence
Nov. 14, 1997  Artificial Intelligence
Aug. 16, 1985  Artificial Intelligence
May 14, 1982  The Robot Revolution