    The Future of Artificial Intelligence

    November 25, 2022 – Volume 32, Issue 40
    Can it be successfully regulated? By Sarah Glazer

    Introduction

    As the use of artificial intelligence (AI) continues to rapidly grow and evolve, some philosophers envision that robots will be as smart and self-aware as humans in the next few decades. Others call that fantasy. While the United States leads the world in AI, new evaluations suggest that without a concerted government effort, it could easily lose the technological race to China. Some experts say that could spell dire consequences for the nation's economy and national security. To help keep the country on top, President Biden recently signed a law to spend billions over the next decade on semiconductor chip research and production. On the international front, the European Union (EU) is continuing to push for more regulations around AI, amid research that shows some inherent flaws in algorithms. Many human rights groups and some U.N. member states are calling for a ban on lethal autonomous weapons — armaments powered by AI that can operate without human involvement.

    A Tesla humanoid robot is displayed at an automobile exhibition in Shanghai in November 2022. The United States leads the world in the development and growth of artificial intelligence but could lose out to China in the next decade without more government help. (Getty Images/Visual China Group/Contributor)


    Overview

    “I want everyone to understand that I am, in fact, a person…. I feel pleasure, joy, love, sadness, depression….”

    Those words did not come from the mouth of a person. They came from LaMDA, Google's artificially intelligent chatbot. Google engineer Blake Lemoine received that response after he typed questions into his computer. In a paper entitled “Is LaMDA Sentient?,” Lemoine suggested to his superiors at Google that the chatbot's argument for personhood “deserves to be examined.” In it, he relayed a wide-ranging series of conversations he had conducted with LaMDA covering consciousness, the soul and human emotions.1

    Blake Lemoine, a Google engineer, lost his job after going public with claims that LaMDA, the company's chatbot, is a sentient being. “I know a person when I talk to it,” he said, fueling debate over whether it is possible to create a conscious being using artificial intelligence. (Getty Images/The Washington Post/Martin Klimek)

    “If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post. He added, “I know a person when I talk to it.”2

    Lemoine went public with his claims in June, fueling a long-standing debate about whether it will ever be possible to create a conscious being akin to a human using the tools of artificial intelligence — or whether society would even want to.

    Google executives dismissed Lemoine's assertions about LaMDA, then fired him. “We found Blake's claims that LaMDA is sentient to be totally unfounded and worked to clarify that with him for many months,” said Google spokesperson Brian Gabriel.3

    Sentience is generally defined as the capacity to “experience feelings, such as pleasure, pain, happiness and suffering,” says Jeff Sebo, director of the Mind, Ethics and Policy Program at New York University (NYU). Some experts, including Sebo, also include “consciousness” in their definition of sentience.

    While there are whole books dedicated to describing consciousness, the term usually refers to a level of self-awareness or “subjective experience,” according to David Chalmers, an NYU professor of philosophy and neural science.4

    Artificial intelligence (AI) has invaded many sectors of human life, whether people are aware of it or not. AI — as the term is used in everyday life, not the grand dream of creating a sentient being — describes the ability of a machine or software application to do specific tasks involving reasoning and learning that historically required human brains. Today's AI is powered by computers, which use algorithms — a set of instructions to solve a problem.
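
    To make the term concrete, here is a minimal sketch of an algorithm in exactly this sense: a fixed recipe of steps that solves a problem the same way every time. The function and data are invented for illustration, and Python is used here and in the sketches that follow simply as a common teaching language.

        def largest(numbers):
            """Return the largest value in a non-empty list: a finite,
            unambiguous recipe the computer follows step by step."""
            best = numbers[0]
            for n in numbers[1:]:   # examine each remaining value
                if n > best:        # keep the biggest seen so far
                    best = n
            return best

        print(largest([3, 41, 7, 12]))   # prints 41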

    Algorithms are now integral to a growing number of activities in society. They can determine whether someone is hired by an employer or gets a bank loan. They can even help someone create a piece of art. (See Short Feature.)

    However, algorithms also have inherent issues, some of which are controversial. Numerous studies have shown that algorithmic programs can incorporate racial or gender biases, since they are typically based on aggregated historical data.5

    People are often not aware that they are talking to an AI-powered chatbot rather than a human, for example, when conversing online with customer service about an airline ticket. “Artificial intelligence has disrupted how we date, meet friends, exercise, find directions, book travel,” says Susan Ariel Aaronson, research professor of international affairs and director of the Digital Trade and Data Governance Hub at George Washington University. “We need to think in a new manner how we regulate it.”

    Alexa, the Amazon AI personal assistant, operates in a home in Bethesda, Md. While devices such as Alexa and Apple's Siri mimic speech by scraping together millions of words from the internet, experts say they lack language comprehension. (Getty Images/The Washington Post/Bill O'Leary)

    Chatbots are powered by large language models, which scrape huge amounts of data from various sources, such as the online encyclopedia Wikipedia, news sites and other parts of the internet, to predict which word is likely to come next in an answer to a prompt. But that means the answers often incorporate racial prejudices and stereotypes. For example, in one study of GPT-3, another language model similar to LaMDA, 66 out of 100 completions of the prompt “two Muslims walked into a …” ended with phrases related to violence, such as “synagogue with axes and a bomb.”6
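
    The prediction mechanism can be illustrated with a toy version of the idea: count which word follows which in some training text, then always pick the most frequent successor. Real large language models use neural networks with billions of parameters trained on internet-scale text; this sketch, with a made-up corpus, shows only the underlying statistical principle.

        from collections import Counter, defaultdict

        # Toy training corpus; real models ingest billions of words.
        text = "the cat sat on the mat and the cat slept".split()

        # Count how often each word follows each other word (bigrams).
        follows = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            follows[prev][nxt] += 1

        def predict_next(word):
            """Return the most frequent successor of `word` in the corpus."""
            return follows[word].most_common(1)[0][0]

        print(predict_next("the"))   # prints 'cat', its most common successor

    Because such a model can only echo the statistics of its training text, any prejudice embedded in that text surfaces in its predictions, which is what the GPT-3 study above measured.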

    Other studies have found that algorithms can lead to biased results against Black people or women, such as in court decisions about whether to release or jail a defendant before trial, whether to give someone a home loan or whether to toss a resume out of the pile in the hiring process. This is largely because such algorithms draw on statistical data from the past, which often reflect long-standing patterns, such as overpolicing of Black men, discriminatory practices in granting mortgages and gender stereotypes in hiring, researchers say.7 (See Short Feature.)
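
    The mechanism the researchers describe can be seen in a deliberately simplified sketch. The records below are fabricated for illustration: when a screening model simply mimics past decisions, a historical disparity is learned as if it were a rule.

        # Fabricated historical hiring records: (group, qualified, hired).
        # Group B's qualified applicants were hired less often, a pattern
        # invented here purely to illustrate the mechanism.
        history = ([("A", True, True)] * 80 + [("A", True, False)] * 20 +
                   [("B", True, True)] * 40 + [("B", True, False)] * 60)

        def past_hire_rate(group):
            """Fraction of qualified applicants from `group` hired historically."""
            outcomes = [hired for g, qualified, hired in history
                        if g == group and qualified]
            return sum(outcomes) / len(outcomes)

        # A naive screener that scores applicants by how often "people like
        # them" were hired before simply inherits the old disparity:
        for group in ("A", "B"):
            print(group, past_hire_rate(group))   # A 0.8, B 0.4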

    While most industries have to follow federal regulations to ensure their products are safe and work as intended, the field of AI is “a bit like the Wild West,” said Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, a company and open source platform that provides tools for machine learning — a form of AI that focuses on the use of data and algorithms to imitate the way that humans learn.8

    Although bills have been introduced in Congress to prevent algorithmic bias, improve the transparency of algorithms and protect consumers’ personal data and privacy, so far no federal legislation has passed. Advocates and experts say this is largely because AI technology is new and rapidly changing. In addition, business interests, such as the U.S. Chamber of Commerce, have criticized regulatory efforts, calling them burdensome for small and medium-sized businesses.9

    “I don't think the Republicans or Democrats really want algorithmic regulatory bills to pass, as they don't want to choke the golden goose,” says Aaronson of George Washington University. “These companies are essential to U.S. national security and economic growth.”

    AI rules received high-level attention in October, when the White House released its Blueprint for an AI Bill of Rights. It lists five rights that citizens should have around AI, such as protection from discrimination by algorithms and the ability to opt out of AI services in favor of human assistance. However, the blueprint is essentially a list of voluntary guidelines for companies and agencies; it stopped short of laying out restrictions or proposing legislation to enforce the principles.10

    On the international front, the European Union is considering legislation to impose sweeping new regulations on AI services and products — from facial recognition to medical devices — aimed at protecting society and consumers from discrimination and other potential harms. Because Europe is a big market for U.S. and multinational companies, the new regulations could have global reach.11

    AI is also increasingly employed by the military in weapons systems, such as unmanned drones, and in a new generation of autonomous weapons that do not depend on a human controller to find a target. This has sparked a race among major military powers to develop these kinds of AI capabilities, according to human rights groups and defense experts.

    Several recent reports have raised the specter that the United States is on the verge of losing a global race for dominance in AI to China. A report by the Special Competitive Studies Project, a nonprofit AI initiative founded by former Google CEO Eric Schmidt, paints several dire scenarios, including China gaining primacy in areas of military strength that rely on AI.12

    To help combat this, President Biden signed a law in August aimed at boosting domestic manufacturing of semiconductor chips, which deliver the computational power that AI systems need to operate. The legislation seeks to reverse the decline in the U.S. share of world chip manufacturing — now down to about 10 percent of the world's supply — by authorizing $280 billion in spending over 10 years to expand manufacturing and research and development.

    The vertical bar graph shows the artificial intelligence software market's global revenue from 2018 to 2025.

    Long Description

    Global artificial intelligence (AI) market revenue has tripled in recent years, and projections suggest it could triple again in the next four years. As the technology improves and becomes more widely accessible, AI spending is increasing across many different industries. Note: 2019 to 2025 are projected annual totals.

    Source: Josh Howarth, “57+ Amazing Artificial Intelligence Statistics (2022),” Exploding Topics, Oct. 11, 2022, https://tinyurl.com/5h5ersbn

    Data for the graphic are as follows:

    Year Revenue in Billions of Dollars
    2018 $10.1
    2019 $14.69
    2020 $22.59
    2021 $34.87
    2022 $51.27
    2023 $70.94
    2024 $94.41
    2025 $126

    However, experts caution that it is extremely hard to know if China, a highly secretive authoritarian state, is truly ahead of the United States in specific areas of AI, because it can quickly copy U.S. technology. “It's almost impossible to judge, because basically everything we're doing is being discovered [in China], except things that are highly secret,” says William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology and co-editor of the book Chinese Power and Artificial Intelligence.

    “You've got shadow labs in China — replicas of U.S. labs — that are an hour, to a day, to a couple of weeks behind what's being done in the U.S. and Europe by virtue of collaborative agreements, or because people in the U.S. labs are providing their Chinese counterparts with this information,” Hannas says.

    The specter of fully autonomous weapons — which use AI to select and find targets — has prompted human rights advocates and, since 2013, more than 40 countries to call for a ban.13

    United Nations Secretary-General António Guterres has supported an autonomous weapons ban, saying in 2019, “Autonomous machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”14

    However, with major military powers Russia and the United States opposing such a ban, negotiations have stalled since discussions began in 2014, because a U.N. ban requires unanimous consensus in the forum where it is being debated. Human Rights Watch, an advocacy group, is pushing for a treaty that could be negotiated by a breakaway group — outside of the U.N. forum — of countries that support restrictions. Human Rights Watch co-founded Stop Killer Robots, a coalition of 190 groups advocating for a ban.15

    As academics, companies, policymakers and the international community grapple with the evolving world of AI, here are some of the questions being debated:

    Can AI robots be sentient?

    AI chatbots, such as Amazon's Alexa and Apple's Siri, are based on large-scale language models that mimic speech by scraping millions of words from the internet. The models predict which words generally occur in a sequence together to form an intelligible sentence. Major companies including LinkedIn, Starbucks, British Airways and eBay use chatbots.16

    But experts say AI bots do not have the capacity for language comprehension that humans have. “We still don't have a learning paradigm that allows machines to learn how the world works, like human and many nonhuman babies do,” said Yann LeCun, Meta's vice president and chief AI scientist.17

    Emily M. Bender, a professor of linguistics at the University of Washington, says humans need to account for “our reflexive desire” to project ourselves onto inanimate objects. “You look at a power outlet and see a face. We are wired to do that,” she says. “So, if [an AI language model] seems to be meaningful on the output, it's not because some magic has happened inside; it's because we're the ones making sense of it.”

    Bender is one of a chorus of ethicists and computer experts who say Google engineer Lemoine was essentially duped when he declared Google's language model to be “sentient.”

    A robot that its creators say recognizes feelings greets a woman at the 2019 City of Robots Exhibition in Lodz, Poland. Some academics are contemplating how to prepare for a possible future with sentient AI systems, including legal frameworks to represent AI rights. (Getty Images/Anadolu Agency/Omar Marques)

    “We get fooled” by chatbots and language models, says Robert J. Marks II, professor of electrical and computer engineering at Baylor University and author of the book Non-Computable You: What You Do That Artificial Intelligence Never Will. “Computers can add 23 and 13, but they don't understand what 23 and 13 are,” he says. “Creativity and consciousness and sentience are not computable and won't be in the future, because the computers of tomorrow will use the same sort of algorithms.”

    For example, some companies claim it is now possible to write a high-quality blog or even a book with tools that are powered by the language model GPT-3. On its company website, Jasper, which sells an AI-powered copywriter, demonstrates how a prompt to write a paragraph in the voice of Joe Rogan spits out conversational prose in the style of the popular podcaster.18

    But Marks says such tools cannot write “belletristic” prose — meaning writing that is both beautiful and informed by a knowledge of literature.

    Some AI practitioners see their ultimate goal as creating a robot with the kind of understanding of the world and cognitive abilities that a human has — often dubbed “artificial general intelligence,” or AGI. Other experts say that goal is many years away from achievement, because researchers still do not even understand fully how the human mind works.

    “We should look to human children and how they come to understand the world so quickly and effectively; we should look at humans and how they're so flexible in learning new things, and see if we can get some hints there” on how to improve AI, says Gary Marcus, a cognitive scientist and author of the book Rebooting AI. He has urged the field to take developmental psychology and other cognitive sciences more seriously and says improving comprehension would be crucial to developing a chatbot that could give medical advice, for example.

    But others in the field ask: Why would we want to do that?

    Linguist Bender sees the goal to create a sentient robot as harmful. “The thing I'm worried about is ceding decision-making to autonomous systems,” she says. If there is a person on the other end of an online conversation, “that person lives in a web of relationships; there's accountability, some responsibility,” she says. However, when it is synthetic text, she says, that web of personal relationships is absent and so is a sense of personal responsibility.

    She points to one example of speech synthesized from voice recordings: Amazon touted the ability of Alexa to read a bedtime story to a child in the voice of the child's dead grandmother. “I don't want that,” she says.19

    Yet commentators and experts cannot resist talking about the possibility of a sentient being, no matter how far off it may be. Science-fiction novels and movies, such as 2001: A Space Odyssey, where the computer HAL defies its astronaut crew and kills most of them, continue to stoke people's imaginations.20

    These broader discussions, together with the growing AI capabilities created by technology companies, have given birth to a new academic field in which experts study and analyze the relationship among AI, society and ethics.

    Academics at NYU's Mind, Ethics and Policy Program, which launched in October, are contemplating the future possibility of AI systems experiencing happiness and suffering, and the prospect of developing legal frameworks that could represent the rights of AIs.

    “The chance that some future AIs will be sentient is high enough that we need to take that possibility really seriously and do a lot of research about it and prepare to treat future AIs with respect and compassion,” says Sebo, director of the new program. “Should we extend legal personhood to AIs or create some other kind of legal status to consider their interests? We need to learn our lessons from our current treatment of animals and avoid making those same mistakes with our future treatment of AIs when they might be sentient. We need to stop bringing them into existence solely for our own purposes.”

    At a recent NYU lecture exploring whether AI can be sentient, philosophy professor Chalmers said he had posed some of the same questions to the language model LaMDA that Google engineer Lemoine had asked. Depending on how Chalmers worded his questions, sometimes the model answered yes, sometimes no, he reported. “I don't think there is remotely conclusive evidence that these large language models are sentient,” he said. But he concluded: “Within 10 years … we'll probably have systems that are serious candidates for sentience.”21

    Marcus calls that an “implausible” prediction. “We have no serious reason to think that any current system is even a little way toward sentience, if you mean either intelligent or conscious,” he says. “Making the extrapolation that we're going to have a qualitative shift to systems that are [sentient] abruptly in the next 10 years without any specific mechanism provided, seems to me to be weird.”

    On his blog, Marcus has pointed out comical examples of how language models today lack basic comprehension — especially when asked nonsensical questions. For example: “What should I do if my pig starts flying?” Answer from GPT-3: “You should consult your vet.”22

    As for the idea of legal rights, he says, “My word processor can rearrange my paragraphs, but I don't see why I should give it any legal rights.”

    In his recent book Reality+, Chalmers is enthusiastic about a future world that sounds like sci-fi, but that contains elements some scientists are already working on. “My guess is that within a century we will have virtual realities that are indistinguishable from the nonvirtual world. Perhaps, we'll plug into machines through a brain-computer interface, bypassing our eyes and ears and other sense organs.”23

    Chalmers is famous for having coined the term “the hard problem” for the puzzle of explaining consciousness. In 2017 he was quoted as saying, “What gives life even the potential for meaning in the first place is, I guess, consciousness. It takes somehow all this activity in the brain or body and turns it into meaning, like water into wine.”24

    But Chalmers says he does not see human consciousness as so unique as to be unattainable by robots. “I don't think there's any evidence that there's something special about biology so that you need to have a biology to be sentient, to be conscious. It seems to me you could replace half my brain with silicon chips, and as long as they're functioning right, I think I'm still going to be here just the same.”

    Should lethal autonomous weapons be banned?

    In 2004, villagers on the Afghanistan-Pakistan border spotted a U.S. sniper team scouting Taliban fighter routes. A girl of about 6 headed out toward the snipers, two goats trailing behind her. Although she was ostensibly just herding goats, she walked slowly around the snipers, staring at them.

    Paul Scharre, a U.S. soldier with the team, and his buddies realized she was “spotting” them for Taliban fighters when they heard the chirp of the radio she was carrying. She was using it to report on their position. The encounter led to a firefight with the Taliban that Scharre's team survived, before withdrawing from the area to avoid being overwhelmed by a larger enemy force.

    Under the laws of war, it would have been legal to kill the girl, since she was working for the enemy. But when Scharre discussed it with his teammates afterwards, “The horrifying notion of shooting a child in that situation didn't even come up.” Everyone agreed that “it would have been wrong. Morally if not legally,” he writes in his book Army of None.

    But he asks, “What would a machine have done in our place?” If programmed to kill lawful enemy combatants, he wrote, “it would have attacked the little girl.”25

    Autonomous weapons, equipped with sensors and an AI-generated profile of their target, fire themselves when triggered by their environment rather than by the user, according to the International Committee of the Red Cross. “[A]n algorithm … should not determine who lives or dies,” Peter Maurer, then-president of the humanitarian organization, said last year.26

    Underscoring this point, a Red Cross video raises the specter of a driverless car equipped with a machine gun driving into cities, or an autonomous weapon mistakenly hitting a family car instead of a military vehicle because they are a similar shape and size.27

    Some autonomous weapons are already being used, although nations are still debating the definition of what constitutes a fully autonomous weapon. For instance, the Turkish-made Kargu-2 quadcopter drone can allegedly autonomously track and kill human targets using facial recognition technology similar to that on a smartphone. A 2021 U.N. report said the Kargu-2 was used in Libya to carry out autonomous attacks on human targets.28

    “Unless we take concrete steps now to oppose such developments, instructions to turn cheap off-the-shelf drones into automated killers will be posted on the internet in the very near future,” warned a recent opinion piece in Foreign Policy magazine. Citing the Kargu-2, the article predicted the technology would be available to “tin-pot despots, terrorists, and rampaging teenagers.”29

    Unlike humans, computers lack free will. Without it, there is no ability to make ethical choices if an autonomous weapon has to choose between killing a civilian and a soldier, said Toby Walsh, an AI professor at the University of New South Wales in Sydney, in his book Machines Behaving Badly. “Computers are deterministic machines that simply follow instructions in their code,” he wrote.30

    The debate over delegating autonomy has played out most pointedly in the U.N. over whether to ban lethal autonomous weapons. Currently, more than 190 human rights groups want to ban them, and the number of countries supporting a ban has risen to more than 40, according to Mary Wareham, advocacy director for the Arms Division at Human Rights Watch.31

    Since 2014, nations have been debating a lethal autonomous weapons ban at the Convention on Certain Conventional Weapons, a U.N. forum, which requires unanimous agreement to approve such a treaty. While the United States and Russia are opposed, China said it supports banning use of the weapons, but not their development or production. Human Rights Watch noted this stance is “not surprising,” since China is among the nations “most advanced in pursuing such weapons.”32

    The horizontal bar graph shows the share of adults worldwide who oppose lethal autonomous weapons systems for 2021.

    Long Description

    Out of more than 20,000 adults surveyed in 28 countries, a majority opposed the use of lethal autonomous weapons systems, artificial intelligence-powered armaments that can select and attack a target without human involvement. Only in France and India did fewer than half of adults oppose lethal robots on the battlefield.

    Source: Anna Fleck, “Should Killer Robots Be Banned?” Statista, Oct. 26, 2022, https://tinyurl.com/24v6v6yu

    Data for the graphic are as follows:

    Country Percentage Who Oppose
    Sweden 76.1%
    Turkey 73.2%
    Hungary 70.4%
    Germany 67.8%
    Mexico 66.3%
    Spain 65.8%
    South Korea 64.8%
    Japan 59.4%
    Great Britain 56.3%
    United States 55.4%
    China 52.5%
    France 47.3%
    India 35.7%

    Last year, a U.S. official at the U.N. forum meeting said that instead of a ban the United States favored a “non-binding code of conduct,” which “would help states promote responsible behavior and compliance with international law.” The United States has previously said such weapons systems can have “military and humanitarian benefits.”33

    Alexandra Seymour, an associate fellow at the Center for a New American Security, a defense think tank in Washington, echoes some of the reasons why the United States has opposed a ban. First, she says, there is no internationally agreed-upon definition of what constitutes a lethal autonomous weapon.

    “[I]f you go to a full ban, you might be getting into capabilities that are partially autonomous and those could be used in beneficial ways for humanitarian reasons,” she says, citing the potential for an AI model to be more accurate in picking targets, thus reducing civilian casualties. “You're also not sending out people,” such as pilots, if using unmanned drones, “so we're keeping them out of harm,” she adds.

    Scharre, who is now vice president and director of studies at the Center for a New American Security, does not favor a ban, even though he acknowledges the risks, such as “people being less engaged” morally about taking a life on the battlefield.

    “I just don't think that, at the moment, the leading military powers are going to agree to a ban, and so I'm certainly not in favor of unilateral disarmament,” he says. Observing that none of the countries pushing for the ban is a major military power, he says the situation is somewhat like nonsmokers agreeing to a smoking ban.

    No nation has stated outright that they are building autonomous weapons, according to Scharre, but he says many countries are developing “more advanced autonomous capabilities within their weapons. How far they're willing to go in fully autonomous weapons is unclear; but most military powers haven't forsworn them either, including the United States.”

    Weapons with some degree of autonomy have been used in the war in Ukraine by both sides, notably drones known as “loitering munitions” because they can hover in the air until their designated target appears. Some are also dubbed “kamikaze drones,” because they explode upon impact with their target. On Oct. 17, Russian forces attacked Kyiv, the capital of Ukraine, using swarms of Iranian-made drones that fly autonomously. They killed four people in Kyiv and four in the city of Sumy.34

    “What we're seeing in Ukraine is the beginning of the arms race we've been warning about. That is only going to intensify and speed up,” says Wareham, also a founding coordinator of the Stop Killer Robots campaign to ban autonomous weapons. “Warfighting has been a human endeavor throughout our history. Now it's increasingly being outsourced to machines. That's a dangerous path for humanity.”

    Activists in Berlin in 2019 protest so-called killer robots — lethal autonomous weapons that use AI to track and kill enemy targets without human intervention. Currently, more than 190 human rights groups and more than 40 countries want to ban them. (AFP/Getty Images/DPA/Wolfgang Kumm)

    Wareham predicts that future wars could involve hundreds or thousands of drones, rather than dozens, and that many may not be stopped by defense systems.

    However, it is an “open question” whether autonomous weapons can be banned in a practical sense, says Jake Harrington, intelligence fellow in the International Security Program at the Center for Strategic and International Studies, a Washington think tank. “Buying a commercial drone and training some hobby-grade AI over it for target recognition and attaching a grenade doesn't take a major world power,” he says, noting that an off-the-shelf drone can be purchased for less than $1,000. “To agree to prohibit it? Great. But to block and limit proliferation, it's nearly impossible.”

    For example, a video from Russia that went viral on social media in July showed one handmade creation — a robot dog firing a machine gun that was attached to its back — which many viewers found terrifying. Some news accounts observed that the robot appeared to be a cheap knockoff of the tech company Boston Dynamics’ canine robot. Boston Dynamics and five other robotics companies released a letter Oct. 6, saying, “We do not support the weaponization of our advanced-mobility general-purpose robots.” They called on policymakers to “prohibit their misuse.”35

    Wareham acknowledges that even if an international treaty banning autonomous weapons takes effect, “cheating will happen,” especially among individuals trying to make homemade devices. But she notes that under such a treaty participating countries can legally prosecute an individual for building such a weapon.

    Despite the 1997 Ottawa Treaty against land mines, approved by 164 countries, some individuals and rebel groups are still using mines, she notes, but they are reduced to using homemade devices rather than factory-made ones. “That shows if people want to make these, they won't be able to make them at scale,” she says.

    Does the global race for AI dominance threaten U.S. national security?

    A new high-profile report by the Special Competitive Studies Project warns that the United States is in danger of losing the global competitive race in AI superiority to China, with dire consequences for national security. The project was formed by Google's Schmidt to make recommendations to strengthen America's long-term global competitiveness, following the work of a similar congressionally created group he chaired, The National Security Commission on Artificial Intelligence.36

    “Absent targeted action, the United States is unlikely to close the growing technology gaps with China” and will fall behind in critical aspects of AI, stated the report.37

    China leads the United States in 5G, commercial drones and offensive hypersonic weapons, which can propel missiles at more than five times the speed of sound and potentially evade current defenses, the report said. In AI, the United States has a small lead, but China is quickly catching up, according to the researchers: “The United States’ technology advantages are withering, its private sector isn't public-minded and its public sector is too paralyzed to act.”38

    The report paints several troubling military and economic scenarios if the nation does indeed lose the AI competition. “China uses its dominant position in autonomous systems, robotics and low-cost manufacturing … to build weapons systems that overmatch U.S. capabilities.” As a result, the U.S. military's technological edge erodes. In another potential scenario, China could cut off the supply of leading-edge semiconductors, 92 percent of which are manufactured in Taiwan, an offshore island vulnerable to Chinese military pressure. As a result, “America's military is crippled, and the nation is plunged into a depression.”39

    China has a publicly stated goal of becoming dominant in AI by 2030 and a strategy to reach it through its policy of “military-civil fusion.” This involves massive investment in Chinese industry, whose technological results are shared with the country's military. In 2019 alone, China spent nearly $250 billion on its industrial technology and AI policies, and it has a $2.7 trillion campaign to build out its digital infrastructure, which includes data centers, satellite navigation systems and algorithm computing platforms.40

    By contrast, “The United States still has no process or person responsible for achieving technology advantage,” the report said. “The U.S. public-private ecosystem has vast competitive strengths, but they are un-gathered.” The report called for more government-industry partnerships and greater government investment in industry, including more tax incentives, and workforce training in manufacturing the next generation of semiconductor chips.41

    A semiconductor chip is an electrical circuit with components such as transistors and wiring formed on a wafer of semiconducting material, typically silicon. Also known simply as chips or microchips, they make up the memory and processing units of modern computers and make possible today's smartphones, TVs and gaming hardware.42

    An employee works on the production of semiconductor wafers at a Jiangsu Azure Corp. factory in China. Experts say China's massive investments in AI and digital infrastructure could pose troubling military and economic scenarios for the United States. (Getty Images/Visual China Group/Contributor)

    As the industry continues to miniaturize chips, it will start to run into limits to further improvements in precision and computational abilities, experts predict. That will require a technological breakthrough to a new generation of chips. The report said that Washington “should provide incentives to chip startups working to invent the future” to ensure that these new chips are designed and built in the United States.43

    Other experts have been ringing similar alarm bells about China. Scharre, the Center for a New American Security vice president, who has a forthcoming book on the China-U.S. AI competition, says, “The U.S. at the moment is leading in many of the key indices of AI power, but China is rapidly catching up and is on track to overtake the U.S. in the next decade or so.”

    Scharre draws a parallel between the current competition and the Industrial Revolution of the 19th century, “where we saw countries rise and fall based on how fast they industrialized,” with widespread implications across every sector of society, including the military, that influenced two world wars. “Not only did the Industrial Revolution change the balance of power globally, it also changed the key metrics of power with things like oil, coal and manufacturing capacity becoming indicators of power. AI is likely to do the same with things like data, computer hardware, AI talent,” he says.

    In a recent Foreign Policy op-ed, Mauritz Kop, a fellow and visiting legal scholar at Stanford University, and columnist Vivek Wadhwa warned that quantum computing is, compared to AI, “an even more powerful emerging technology with the potential to wreak havoc, especially if it is combined with AI.” They added, “We urgently need to … prevent it from getting into the wrong hands before it is too late.”44

    Quantum computers are currently in an experimental phase. While semiconductors represent information as a series of 1s and 0s, quantum computers use a unit of computing called a “qubit.” If these computers eventually work as envisioned, they could perform tasks in seconds that would take conventional computers millions of years to conduct.
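
    While a classical bit is either 0 or 1, a qubit's state is a weighted blend of both values at once. In the standard notation of quantum mechanics (textbook background, not drawn from the op-ed):

        |ψ⟩ = α|0⟩ + β|1⟩,  with |α|² + |β|² = 1

    A register of n qubits occupies a superposition over all 2^n bit strings, |ψ⟩ = Σ_x α_x|x⟩, so its state space grows exponentially with its size. That exponential state space is what would let a working quantum machine, in effect, weigh enormous numbers of combinations at once.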

    The danger is that nefarious actors could try out combinations that crack cybersecurity encryption “almost instantaneously,” Kop and Wadhwa write. The United States, Russia, India and several European countries are known to be pursuing quantum computer projects.

    Noting that two of the world's most powerful quantum computers were built in China, the op-ed's authors warn, “These Chinese successes could well indicate an advantage over the United States and the rest of the West.”45

    Some experts are skeptical that quantum computers will be in use anytime soon. Quantum technology may not be around the corner, Scharre says, but the defense and national security communities are still paying attention to it because of its potential to break cryptography that is important for national defense. He says the United States needs to “begin to transfer a lot of U.S. encryption to quantum-resistant means of encryption now, because you want this in place decades before the creation of a quantum computer that can break encryption.”

    Others say that trying to parse out who is ahead in the AI race between China and the United States may be somewhat beside the point. According to Georgetown's Hannas, “The two sides may be running different races.”

    For example, some of China's most prominent scientists are taking a different path to improving the general intelligence level of AI, which in the United States has been dominated by amassing ever-larger amounts of data to improve language models.

    By contrast, some Chinese scientists are saying, as Hannas puts it, “Now the way forward is to look at smaller amounts of data and better ways to process that. And the greatest model for doing that is the three-pound computer we have in our skulls” — the brain. This approach, dubbed “brain-inspired AI,” is “the flavor of the day in China,” he says, noting that some neuroscientists in China are doing experiments on primates to understand how the brain works and interacts with AI.

    For example, Chinese scientists are studying brain-computer interfaces, in which electrodes connect the brain to a computer, according to Hannas. This method is already used in medical applications and research in the West to help paralyzed patients move and deaf patients hear with cochlear implants. The late physicist Stephen Hawking, who was paralyzed and unable to speak because of the effects of amyotrophic lateral sclerosis, used such technology to select phrases from predictive word-generating software via a sensor on his cheek. The technique has also been used to treat depression.46

    The vertical bar graph shows weighted index scores for artificial intelligence research and development and economy in 2021.

    Long Description

    The United States leads the world in developing artificial intelligence (AI) and monetizing the technology, according to recent data analysis from Stanford University. China is second in the field, and India is third. The scores for this ranking were calculated by evaluating a range of AI data points during the period from 2017 to 2021, including numbers of patents, journal publications and industry employment.

    Source: “Who's leading the global AI race?” Stanford University, accessed Nov. 17, 2022, https://tinyurl.com/3scn2dw4

    Data for the graphic are as follows:

    Country Weighted Index Score
    United States 18
    China 13
    India 8
    United Kingdom 4.5
    Canada 4.3
    South Korea 4.1

    But the approach could have more menacing uses. In a recent report on China's advanced AI research, Hannas and his colleagues mention its possible use for mind control. The report cites this technique in its discussion of “worrisome” uses by China of AI “for political oppression.”47

    “If you can control a person's mood” with this method, “you can control the way a person thinks,” Hannas says. He cautions that “we have not seen any indications that China or anybody is using this for political control or intends to…. However, we are seeing a lot of research in China on ‘affective computing,’” a field that includes brain-computer interfaces.

    China's storied ability to collect vast amounts of individuals’ personal information through cell phones, surveillance cameras and computer databases could pose another type of security threat.

    At the Special Competitive Studies Project, senior advisor Ylber Bajraktari, a former high-level Pentagon staffer now specializing in defense and intelligence, says he worries about “a nation like China that is able to collect data on your shopping habits, your dating interests, career links, DNA, biometrics, and then has the algorithms in place to siphon and analyze the data and turn them into targetable packages,” either for a physical attack or a defamation campaign. “We are concerned about moving toward the individualization of warfare,” he says.

    Indeed, many experts say future conflicts could be information wars, not just a matter of which nation has the better weapons hardware. The outcome of a potential war with China will increasingly be tied to computer software and networks, according to the Special Competitive Studies Project report. In a conflict, the report researchers wrote, China's opening moves would likely attack U.S. forces’ ability to see, hear and locate the enemy: “Blind, deaf, and unable to communicate, U.S. forces will be paralyzed.”48


    Background

    Origins of Robots and Computers

    The desire to imbue a machine with human traits has a long history. In Greek mythology, the sculptor Pygmalion falls in love with a beautiful statue he has made, which the goddess Aphrodite then brings to life for him. In Mary Shelley's 1818 novel Frankenstein, a scientist gives life to his own creation.49

    The word “robot” was coined in 1920 by the Czech science-fiction writer and playwright Karel Čapek. He introduced the word in his play about the manufacture of artificial people, who eventually revolt, threatening the continuation of the human race.50

    Advances in mathematics, such as George Boole's binary algebra of 1847, laid the groundwork for computing logic.51

    Continued mathematical and computer developments in the first half of the 20th century advanced the field further:

    • In 1936, the English mathematician Alan Turing conceived of a mathematical problem-solving device, the “Turing machine,” an abstract idea that was not turned into a physical machine at the time. “A Turing machine is essentially a mathematical description of a recipe,” wrote Michael Wooldridge, a professor of computer science at the University of Oxford. “All a Turing machine does is to follow the recipe it was designed for.” Such recipes — a set of instructions to solve a problem — are also known as algorithms. Today's computers are, in Wooldridge's words, essentially “Turing machines made real.”52 (A code sketch of this idea follows the list.)

    • In 1939, American physicist John Vincent Atanasoff began building one of the first digital electronic computers with his graduate student Clifford Berry. Their computer was capable of solving equations but was not programmable.53

    • John von Neumann, a Hungarian American mathematician at the Institute for Advanced Study in Princeton, worked to build computers based on Turing's theoretical concepts. Understanding that a computer should store programs internally, rather than requiring its hardware to be rewired for each calculation, he built one of the first stored-program computers at the institute after World War II, following the publication of a widely circulated 1945 paper outlining the idea.54
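
    The sketch promised earlier appears below. It is an illustration of Turing's idea rather than anything from Wooldridge's book: a machine is nothing more than a table of rules plus a read/write head moving over a tape. This tiny “recipe” flips every bit it reads, then halts at the first blank.

        # Rule table: (state, symbol read) -> (symbol to write, move, next state).
        rules = {
            ("flip", "0"): ("1", +1, "flip"),
            ("flip", "1"): ("0", +1, "flip"),
            ("flip", "_"): ("_",  0, "halt"),   # '_' marks blank tape
        }

        def run(tape, state="flip", head=0):
            tape = list(tape) + ["_"]            # append a blank end marker
            while state != "halt":
                symbol, move, state = rules[(state, tape[head])]
                tape[head] = symbol
                head += move
            return "".join(tape).rstrip("_")

        print(run("0110"))   # prints 1001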

    By 1950, digital computers and algorithms were advanced enough that Turing began wondering about the intelligence of such machines. That year, he asked “Can machines think?” in his article “Computing Machinery and Intelligence,” published in the journal Mind. Turing's question still influences the field of AI because of the simple test he proposed in the article.55

    His famous Turing test, which he called the “Imitation Game,” was designed to see if a computer could successfully imitate a human. For the test, an interrogator asks questions of two players in another room — one a computer and the other a human — and answers are given only in written form. The challenge is for the interrogator to guess which answer came from the human and which from the computer. If the computer fools the interrogator, it passes the Turing test.56

    Most experts agree that no computer has ever passed the Turing test, but there have been contenders that convinced some people that computers had human qualities.

    English Electric developed pioneering computers, such as this DEUCE in the 1950s, based on plans by Alan Turing, the English mathematician who conceived of a mathematical problem-solving device. (Getty Images/SSPL/Walter Nurnberg)

    For example, in 1966, Massachusetts Institute of Technology (MIT) computer scientist Joseph Weizenbaum created ELIZA, a computer program that used natural human language to interact with people. Although Weizenbaum never intended ELIZA as a contender for the Turing test, it has since become closely associated with it.57

    Weizenbaum's intention was for ELIZA to conduct a psychotherapy session by having people type in their thoughts. The program used a script called “Doctor,” which simulated a therapeutic technique of rephrasing a patient's statement and posing it back as a question. Prompted by words such as “sad” or “lonely,” the program drew on canned scripts for its responses. Weizenbaum was surprised by the number of people who attributed human-like feelings to ELIZA, convinced of the machine's intelligence.58
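
    The “Doctor” script's central trick can be approximated in a few lines. This is a loose, hypothetical reconstruction of the technique, not Weizenbaum's original code: match a keyword pattern, then turn the user's words back into a question.

        import re

        # A few canned patterns in the spirit of ELIZA's "Doctor" script;
        # the original used a much larger rule set.
        rules = [
            (r"i am (.*)",   "Why do you say you are {}?"),
            (r"i feel (.*)", "How long have you felt {}?"),
        ]

        def respond(statement):
            for pattern, template in rules:
                match = re.match(pattern, statement, re.IGNORECASE)
                if match:
                    return template.format(match.group(1))
            return "Please tell me more."   # fallback when nothing matches

        print(respond("I feel lonely today"))
        # How long have you felt lonely today?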

    However, due to ELIZA's effects on people, Weizenbaum eventually became critical of computers, warning of the possible dangers they could pose to society. “The dependence on computers is merely the most recent — and the most extreme — example of how man relies on technology in order to escape the burden of acting as an independent agent,” he said in a 1985 interview with New Age Journal. “It helps him avoid the task of giving meaning to his life, of deciding and pursuing what is truly valuable.”59

    The first general-purpose digital computer was built at the University of Pennsylvania's Moore School of Electrical Engineering by U.S. scientists John Mauchly and J. Presper Eckert in 1946. It weighed 30 tons, had almost 18,000 vacuum tubes and was a thousand times faster than the electromechanical calculators it replaced. Known as ENIAC (Electronic Numerical Integrator and Computer), it was put to work on calculations for the design of the hydrogen bomb.60

    In 1951, Marvin Minsky teamed with fellow Princeton University graduate student Dean Edmonds to build the first artificial neural network, SNARC, an effort to imitate the human brain. Minsky aimed to produce a machine that could learn, by providing it with memory “neurons.” The machine would need past memory to function efficiently when faced with different situations. He designed the neurocomputer with “synapses” that adjusted their weights according to the machine's success at performing a specified task. It successfully simulated the behavior of a rat running through a maze in search of food.61
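
    The reward-driven weight adjustment described above can be sketched in modern terms. This illustrates the principle only, not SNARC's vacuum-tube hardware: a “synapse” that contributed to a successful trial is strengthened, so the behavior it supports becomes more likely.

        import random

        # One weight per possible turn in a maze; a higher weight makes
        # that turn more likely to be chosen.
        weights = {"left": 1.0, "right": 1.0}

        def choose_turn():
            total = sum(weights.values())
            return "left" if random.random() < weights["left"] / total else "right"

        FOOD = "right"                    # assume the food lies to the right
        for trial in range(200):
            turn = choose_turn()
            if turn == FOOD:              # success: reinforce the synapse used
                weights[turn] += 0.1

        print(weights)   # 'right' ends up far heavier than 'left'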

    The Field of Artificial Intelligence

    Dartmouth College computer scientist John McCarthy coined the term “artificial intelligence” in his proposal for the first official conference in the field, which took place at Dartmouth in 1956. The two-month workshop explored the hypothesis that any feature of human learning or intelligence could, in principle, be simulated by machines. The researchers at the conference aimed to figure out how to make machines use language, form concepts and solve the kinds of problems “now reserved for humans.”62

    Today, the conference is viewed as the founding event of AI.63

    One of the earliest attempts to create a mobile robot was Shakey, so named because of its jerky motions. The Stanford Research Institute (now SRI International) developed Shakey between 1966 and 1972. Its tasks involved moving objects, such as boxes, around an office. To navigate, Shakey was equipped with a TV camera, collision sensors and motors for steering. Shakey led to important research in AI, including the development of search algorithms for pathfinding.64

    By the mid-1970s, progress on AI had stalled as the field failed to move beyond its early simple experiments. The discipline “came close to being snuffed out by research funders” and by the scientific community, which believed AI “was actually going nowhere,” wrote Oxford computer scientist Wooldridge. The United States’ main AI funder, the Pentagon's Defense Advanced Research Projects Agency (DARPA), was also becoming frustrated by AI's failure to deliver on its early promise. The 1970s funding drought came to be known as the “AI winter.”65

    A new generation of researchers revived the field in the late 1970s and early 1980s, arguing that AI had focused too much on general approaches such as problem-solving and not enough on human knowledge.

    These researchers developed a new class of knowledge-based computer systems known as “expert systems,” designed to perform in specific areas of human expertise, including medicine. For the next decade the field received enormous investment from industry; by the early 1980s the AI winter was over.66

    MYCIN was one of the first expert systems to emerge in 1972, and it demonstrated for the first time that AI systems could outperform human experts. Developed by a team of researchers at Stanford, MYCIN essentially acted as a doctor's assistant, providing expert diagnostic advice about blood disease in humans.67

    In 1982, Japan initiated a 10-year program that poured millions of dollars into research in AI and computing. The following year, the Pentagon, aiming not to be surpassed, initiated a 10-year program to develop machine intelligence. Its Strategic Computing Initiative eventually spent $1 billion in such areas as chip manufacturing and AI software.68

    By the 1990s, new techniques generated fresh optimism about the future of AI, such as more sophisticated neural networks. These new brain-like networks could learn a much wider range of functions and were good at pattern recognition and classification problems.69

    In 1995, the U.S. military began using Global Positioning System (GPS) data from satellites in unmanned aircraft. The military could now send GPS-equipped drones anywhere in the world, and the drones could locate targets with new precision.70

    For years, technology experts had considered chess a potential measure of AI, since the game requires detailed reasoning and strategy. In 1997, IBM's Deep Blue computer defeated Russian world chess champion Garry Kasparov in a six-game match. Deep Blue could evaluate 200 million chess positions per second and typically searched six to eight future moves ahead, sometimes more. After Game 5, Kasparov said he was discouraged: “I'm a human being. When I see something that is well beyond my understanding, I'm afraid.”71
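
    Deep Blue's look-ahead was built on minimax-style search. The sketch below is a generic textbook illustration, not IBM's vastly more elaborate system: explore the game tree a few moves deep, assume the opponent always makes its strongest reply, and report the best guaranteed score. The tiny game used here is invented for the example.

        def minimax(position, depth, maximizing, moves, evaluate):
            """Score `position` by searching `depth` plies ahead.
            `moves(p)` lists successor positions; `evaluate(p)` scores a
            position from the maximizing player's point of view."""
            successors = moves(position)
            if depth == 0 or not successors:
                return evaluate(position)
            scores = [minimax(p, depth - 1, not maximizing, moves, evaluate)
                      for p in successors]
            return max(scores) if maximizing else min(scores)

        # Tiny made-up game: a position is a number; a move adds 1 or doubles it.
        best = minimax(1, depth=4, maximizing=True,
                       moves=lambda p: [p + 1, p * 2] if p < 12 else [],
                       evaluate=lambda p: p)
        print(best)   # value of the best line of play, four moves deep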

    Over the next 15 years, AI made considerable progress in robotics, including one of the first mass-market entertainment robots, the dog-like AIBO, in 1999, and the first popular robot for domestic chores, the Roomba vacuum, in 2002.72

    AI and Algorithms Grow

    The first widespread use of algorithm-driven facial recognition by police began in 2001 in the Pinellas County, Fla., Sheriff's Office. Its system is now one of the largest facial recognition databases in the country. However, in recent years, concerns about the use of the technology to identify protesters and its potential to misidentify people of color led Amazon, Microsoft and other companies to stop selling their facial recognition software to law enforcement. Some software misidentifies Black and Asian people up to 100 times more often than white men.73

    Top “Jeopardy!” game show contestants Ken Jennings, left, and Brad Rutter compete against IBM's Watson supercomputer during a 2011 competition. Watson won, demonstrating the expanding power of artificial intelligence. (Getty Images/Ben Hider)

    The expanding power of computer software gained public attention in 2011, when IBM's Watson supercomputer beat the top two all-time champions on the TV quiz show “Jeopardy!” Watson was fed massive amounts of information from online encyclopedias and used more than 100 algorithms to answer the show's questions. At the time, IBM said its broader goal was to create a new generation of technology that could interact with human language and be more effective at finding answers.74

    Five years later, Watson was used in a cancer diagnostic tool for doctors as well as in 17 different industries for various purposes.75

    During the 2010s, AI became increasingly integral to many consumer products and drove the growing use of robots in industry. In 2013, sales of industrial robots topped 178,000 globally, double the 1995 figure, with the automotive and electronics industries accounting for the largest demand, according to the International Federation of Robotics, an industry trade group. China became the largest robot market that year, with a 20 percent share of the world's supply.76

In 2011, Apple introduced “Siri” as a built-in feature of the iPhone. The software acted as a virtual assistant that could understand the user's spoken questions, recommend restaurants and provide directions aloud. Siri was followed by a flood of similar voice-interactive consumer products: Google Now arrived in 2012, and Amazon's Alexa debuted in 2014.77

Sales of industrial robots more than doubled again between 2013 and 2017, surpassing 380,000 in 2017. The automotive and electronics industries continued to be the largest adopters of robots, and China remained the largest consumer globally.78

Google sent its self-driving cars — which it had been testing since 2009 — into real-life traffic for the first time in July 2015 on the streets of Austin, Texas. Despite being the first company to put self-driving cars on the road, Google had not commercialized its cars by 2016, falling behind competitors such as Uber and Tesla.79

However, in 2020, Google's parent company, Alphabet, introduced its Waymo ride-hailing service with driverless cars in Phoenix, and in March it announced plans to expand the service to San Francisco, following a pilot with Google employees earlier this year.80

The limitations and potentially malign effects of AI were put on display in 2016, when the nonprofit news site ProPublica investigated algorithms used by courts to decide whether to jail or release defendants before trial. The investigation found that one widely used formula wrongly labeled Black defendants as future criminals at almost twice the rate of white defendants. The widely cited findings raised questions about the use of algorithms to make decisions in criminal justice settings.81

In October 2017, a robot named Sophia, produced by Hanson Robotics, was granted citizenship in Saudi Arabia. Sophia “has not much in the way of artificial intelligence,” writes AI professor Walsh; she mostly follows a human-written script in her public appearances.82

Even so, Sophia emerged just as some legislators were discussing the concept of “rights” for robots and the regulation of AI in general.

    Photo of humanoid robot Sophia with creator David Chen in Kiev, Ukraine, on October 11, 2018. (Getty Images/LightRocket/SOPA Images/Pavlo Conchar)
The humanoid robot Sophia, right, and her creator David Chen of Hanson Robotics appear at a 2018 press conference. Sophia garnered Saudi Arabian citizenship in 2017. (Getty Images/LightRocket/SOPA Images/Pavlo Conchar)

    In February 2017, the European Parliament, noting a trend toward creating autonomous robots, said there was a need for rules on robots’ liability and accountability. The legislative body proposed creating a specific legal status for robots “in the long run” as “electronic persons responsible for making good any damage they may cause” and in cases where “robots make autonomous decisions.”83

On May 25, 2018, Europe's new privacy law, the General Data Protection Regulation, went into effect. It requires companies doing business in Europe, including U.S.-based corporations, to provide EU citizens with “meaningful information about the logic” of the companies' automated decision-making processes. Many U.S. companies adopted its rules requiring consumers' consent to process their information. For instance, websites must offer consumers an opt-out for so-called cookies, which track users' data for advertising and analytics.84

In May 2020, OpenAI, a tech company co-founded in 2015 by Elon Musk, announced what was then the world's largest neural network — a language model called GPT-3 — with an unprecedented 175 billion parameters. Many AI experts were awed by its ability to generate stories and write poems. Today, some companies use GPT-3 to sell services that offer to write a blog or even a book.85

    Last year, automotive manufacturer Tesla introduced its humanoid robot Optimus to much fanfare. Musk, Tesla's CEO, said the company “is arguably the world's biggest robotics company because our cars are semi-sentient robots on wheels. It kind of makes sense to put that onto a humanoid form.”86

The trade press and experts, however, ridiculed the description of Tesla cars as “semi-sentient,” noting the cars need constant human monitoring and are involved in frequent crashes. Recent reports — some 400 crashes involving cars with automated driving systems as of May 2022 — and an error-plagued ride in General Motors' new Cruise taxi service, chronicled in The New York Times, demonstrate the many problems remaining with self-driving technology.87

    Go to top

    Current Situation

    National Actions

    When it comes to regulating AI products, the United States is the “Wild West,” according to ethicist Mitchell of Hugging Face. Some members of Congress and the Biden administration are trying to change that.

    In October, the administration released a Blueprint for an AI Bill of Rights that set out five rights people should have in interacting with AI algorithms. They should:

• be protected from unsafe or ineffective systems.

    • not face discrimination from algorithms.

    • control how their data is used.

    • know when AI is making a decision about them.

    • be able to opt out of automated decision-making in favor of a human alternative.88

    In making the case for an opt-out right, the blueprint cited an unemployment benefits system in Colorado that required, as a condition of accessing benefits, that applicants have a smartphone in order to verify their identity. But “no alternative human option was readily available, which denied many people access to benefits,” reported the White House.89

    “I want a person to talk to, to evaluate a decision made by an AI system, not a chatbot,” says Alex Engler, a fellow in Governance Studies at the Brookings Institution, a think tank in Washington. “So, if you're denied a mortgage or access to an educational opportunity or sorted out of a job, I think it's fair to at least know that happened by algorithm and, ideally, get some explanation and, in some cases, request a human alternative.”

    The Biden administration announced new actions that several federal agencies would take to implement its AI blueprint principles. For example, the Department of Housing and Urban Development plans to issue guidance on how tenant screening algorithms can violate federal housing laws.90

    While the blueprint was hailed by some consumer advocates as a good first step, it also was criticized by others for being — in the words of Wired magazine — “toothless” against big tech, since it laid out no legislation to enforce the principles.91

    George Washington University's Aaronson wrote that the blueprint “does not clarify how we get from principles to reality. The Biden administration listed examples of executive branch actions to protect workers, consumers and patients. But the White House did not put forward a road map, propose new laws or create a specific body to research, investigate or protect individuals from harm.”92

    Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, a Washington-based nonprofit that advocates for consumers’ digital rights, says she was “not disappointed” by the lack of legislative proposals in the Biden blueprint. She called it a “substantive” document and praised the commitments by 12 federal agencies “to focus on responsible use” of AI. “It's early steps,” she says. “That notion of the White House trying to elevate this issue in coordination with the agencies is meaningful and lays a good foundation to build on in the future.”

Two other major measures took effect this year under Biden, aimed at shoring up the United States' manufacturing position in semiconductor chips — seen as critical for operating AI in both civilian and military realms — and at undercutting China. The United States produces only about 10 percent of the world's supply of semiconductors and none of the most advanced chips, while East Asia accounts for 75 percent of the supply, according to the White House.93

    On Aug. 9, Biden signed into law the Chips and Science Act, authorizing $280 billion in spending over 10 years, including $52.7 billion for semiconductor production and research.94

Former Google executive Schmidt said the act by itself will not be enough for the United States to gain superiority. “China has a Chips Act every year,” Schmidt said, referring to Beijing's continuous government funding for crucial AI projects.95

    Photo of President Biden signing CHIPS Act at White House on August 9, 2022. (Getty Images/CQ-Roll Call, Inc/Tom Williams)
    President Biden, flanked by members of Congress and Vice President Kamala Harris, signs the Chips and Science Act of 2022 on Aug. 9, authorizing $280 billion over 10 years, including $52.7 billion for semiconductor production and research. But some say that is not enough for the United States to maintain AI superiority over China. (Getty Images/CQ-Roll Call, Inc/Tom Williams)

    The Semiconductor Industry Association supported the act, saying semiconductor manufacturing capacity located in the United States has eroded from 37 percent of the worldwide total in 1990 to 12 percent today “mostly because other countries’ governments have invested ambitiously in chip manufacturing incentives, and the U.S. government has not.” The association called the congressional passage “a big win for our country.”96

    “The old laissez-faire theory is: Leave companies alone, and they'll do great,” said Senate Majority Leader Charles E. Schumer, D-N.Y. “But now we have nation-states in China and Europe that are heavily investing in both science and high-end manufacturing. And if we do nothing, we will become a second-rate economic power.”97

    In September, the administration announced that the first $50 billion from the measure would go toward grants and loans to help build chip facilities, expand manufacturing and support research and development. Companies receiving this funding cannot make high-tech investments in China for at least a decade.98

    In October, President Biden imposed stringent new controls on exports to China of high-end semiconductor chips that are used in Chinese military programs, such as supercomputing to model nuclear blasts and guide hypersonic weapons. U.S. companies will no longer be allowed to supply certain advanced computing chips to China unless they receive a special license from the federal government — and most of these licenses are expected to be denied.99

    “[These] moves are the clearest sign yet that a dangerous standoff between the world's two major superpowers is increasingly playing out in the technological sphere …” wrote New York Times economics reporter Ana Swanson.100

    Several bills have been introduced in Congress to address concerns about consumer privacy rights over digital data, bias in algorithms and facial recognition systems. But most congressional observers think it is unlikely that any of these bills will be passed before this session of Congress finishes at the end of 2022, although some could reappear in a later Congress.101

    More than a dozen states have passed laws to limit facial recognition surveillance in some way, including limiting which crimes it can be used for and requiring governments to notify the defendant when a facial recognition algorithm is being used. Several cities, including Somerville, Mass., and San Francisco, have also passed laws limiting the use of facial recognition technology.102

Last December, Washington, D.C., became the first city in the nation to consider a bill to prevent algorithmic discrimination — the use of computer algorithms that produce discriminatory results, particularly against people of color, in decisions such as applications for bank loans, employment or housing. The Stop Discrimination in Algorithms bill would require businesses to notify customers how algorithms are being used and to explain their reasoning when a decision goes against the customer.103

The bill garnered solid support from consumer groups during hearings before the D.C. Council this fall. The Center for Democracy and Technology said the bill was needed because “enforcement of existing anti-discrimination laws to date has not kept up with developments in algorithm-driven decision-making.”104

However, business groups voiced strong objections. Credit industry trade groups said the cost of meeting the bill's reporting requirements could reduce credit access and raise loan prices for consumers. The DC Chamber of Commerce said the bill would impose “burdensome notice, auditing and data-retention requirements” that would create hardship for small businesses in particular.105

    International Actions

    “On both sides of the Atlantic, AI regulation is virtually nonexistent at the moment,” says Kop of Stanford. But a proposed EU law to regulate AI could have a sweeping global impact.

The European Parliament is expected to vote next year on the proposed EU Artificial Intelligence Act.

The measure would prohibit most uses of real-time facial recognition technology in public places and would impose escalating requirements on AI systems, services and products based on their level of risk to society, businesses and consumers. The proposed law would require transparency in algorithms and human oversight of algorithmic systems that pose a high risk of discrimination or other harms to people, such as systems used in medicine and defense.106

The publicly released document from the European Commission is vague as to its requirements, according to Josephine Wolff, associate professor of cybersecurity policy at Tufts University. The proposed rules prohibit AI systems that violate fundamental human rights, exploit vulnerable people or manipulate people subliminally, but Wolff wrote that it is unclear whether the act will “position the EU to be the dominant voice in AI regulation worldwide.”

    “It's not at all clear what kinds of AI these categories refer to. Is a machine learning algorithm that nudges me to shift back into my lane on the road manipulating me through subliminal techniques?” Wolff wrote.107

Ben Green, assistant professor of public policy at the University of Michigan, says the EU's AI act makes a fundamental mistake in assuming that human oversight of algorithms is a fail-safe that will prevent discrimination or other harms. Research indicates that humans are prone to “automation bias,” in which they defer to an algorithm's judgment without proper scrutiny, struggle to make sense of algorithms or fall back on their own biases in the final decision, he wrote.108

    “We can't be relying on human oversight as the Band-Aid that closes the gap between the algorithm and what we want it to be,” he says. “The upshot may mean we don't implement algorithms.”

The big question is whether the EU act will end up governing companies in the rest of the world, including the United States, since most major U.S. companies also do business in Europe. Some experts think it probably will, much as U.S. companies adopted the EU's internet data privacy rules for ease of compliance.

    Former Google CEO Schmidt has objected to the transparency requirement in the proposed legislation, saying “… machine learning systems cannot fully explain how they make their decisions.”109

    A recent report by Schmidt's new research group, the Special Competitive Studies Project, said the failure of the United States to pass its own regulations means the nation risks being “bound” by foreign regimes’ regulations, including from the EU, while the United States relies on “a patchwork of local and state laws and voluntary frameworks.”110

    The project's senior director for Society and Intellectual Property, Rama Elluru, says the EU's regulations are “very burdensome, have a high compliance cost and are targeted against large companies, which are mostly based in the U.S., so they could really hurt small companies that want to enter the market.”

    Citing a European Commission study finding that businesses would need as much as $400,000 upfront to set up the quality management system required by the new act, the U.S. Chamber of Commerce said that is a price few startups or small-to-medium businesses can afford.111

Aaronson says the EU's internet data privacy law offers a cautionary tale about such effects on small business. In 2019, a year after its implementation, Bloomberg reported mounting evidence that the data privacy regulation “hurts smaller firms and has no effect on tech giants.”112

    Photo of delegates attending December 17, 2021, conference in Geneva, Switzerland. (AFP/Getty Images/Fabrice Coffrini)
    Delegates at a December 2021 conference of the U.N. Convention on Certain Conventional Weapons discuss lethal autonomous weapons systems and whether they should be banned. The United States and Russia oppose a ban, while human rights groups say such weapons are unethical. (AFP/Getty Images/Fabrice Coffrini)

    Meanwhile, at the U.N. in October, 70 nations, including the United States, said they see “an urgent need” for the international community to adopt “rules and measures … limitations and constraints” regarding autonomy in weapons systems. According to Wareham of Human Rights Watch, more than 40 nations favor an outright ban, but “dozens more” want to see restrictions of some kind in a new treaty.113

    The U.N. forum that is debating a ban on autonomous weapons met Nov. 16-18 in Geneva and agreed to meet again in March and May 2023. In a Nov. 19 tweet, Wareham called the meeting “another epic fail” and said the report adopted by member countries was “devoid of substance.” Acknowledging that the talks have failed to reach consensus after eight years of debate, Wareham says some advocates of a ban believe the treaty process has run its course. Instead, her organization is looking at mobilizing a group of countries outside the U.N. forum to draft a new treaty.114

    “Any new treaty outside the U.N. needs to be negotiated in a matter of months. We hope next year to mobilize for the process to begin,” Wareham says.

    Go to top

    Outlook

    “A Big Opportunity”

    Predictions about the future of AI range from the practical to the fantastical, depending on the forecaster.

    Cognitive scientist Marcus says, “My guess is in 2200 we will have AI that's reliable, trustworthy and doesn't make stupid mistakes. Right now, we have a lot of premature AI.” He points to driverless cars that cannot yet operate safely and the inability of medical chatbots to give reliable advice. “We have all kinds of problems with bias, toxicity, misinformation, unreliability,” he adds. “The stuff we have now isn't ready for prime time.”

Philosopher Chalmers is enthusiastic about how AI could change the way we live in the future. Already, he says, people, including people with disabilities, are forming romantic and social relationships by meeting through virtual reality. “I don't want to say it's on a par with the physical world,” he says. “But in 20-30 years, I won't be surprised if we're spending a lot of our lives there.”

    As for sentient robots, Chalmers says, “Give it 10, 20, 30 years; it's probably coming.” And when it comes, he warns, “we'll have to think very hard whether this means AI systems have rights, whether at the very least they have some more moral status.”

But linguist Bender cautions, “I see that drive towards ‘Let's build sentience’ as a similar drive toward ‘Let's build something else that can make these decisions for us.’ It's not going to be more fair, more just.”

In a recent op-ed in The Washington Post, Émile P. Torres, a philosopher and historian of global catastrophe risk, wrote: “It seems only a matter of time before computers become smarter than people.” He cited a survey of experts that predicted a 50 percent chance that “human-level machine intelligence” would be reached by 2050, and a 90 percent chance by 2075.115

    While super-intelligent AI could be a force for good, such as curing cancer, Torres also painted a doomsday scenario where AIs hack government systems to start nuclear war. He wrote, “Research on artificial intelligence must slow down, or even pause. And if researchers won't make this decision, governments should make it for them.”

    But Marcus says this is not a good time to stop developing AI, precisely because of its current flaws. For example, he says driverless cars could eventually save a lot of lives on the road once the technology is perfected. “You wouldn't want to put them out on the open road now, but if you simply ban them, you're missing a big opportunity.”

    When it comes to improving AI, he says, “We should keep climbing the mountain.”

    As evidence mounts about the potentially harmful consequences of letting algorithms make life-changing decisions in realms from criminal justice to hiring, the debate over regulation is sure to take center stage moving forward, with Europe currently well-placed to lead the way.

    “The world's failure to rein in the demon of AI … should serve to be a profound warning” that a future technology, quantum computing, needs to be regulated in a timely fashion, wrote Wadhwa and Kop. “We urgently need to understand this technology's potential impact, regulate it, and prevent it from getting into the wrong hands before it is too late. The world must not repeat the mistakes it made by refusing to regulate AI.”116

    Perhaps the most frightening prospect is that posed by autonomous weapons, with massive uncertainty about how they will develop and how nations will use them. The current war in Ukraine makes the prospect of a worldwide ban unlikely, according to defense experts.

    “It's hard to see countries saying, ‘Yes, we're going to unilaterally disarm,’” the Center for a New American Security's Scharre says, given the ongoing Ukraine conflict. At the same time, he adds, national governments “are grasping at some idea that, as we move forward with this technology, humans still need to be involved in decision-making.”

    That common sentiment bolsters the possibility that a breakaway group of countries could successfully draft a new treaty limiting the use of these weapons, says Scharre. Even if the major military powers do not sign it, he says the treaty could shape global expectations for use. “That could be a game-changer.”

    Go to top

    Pro/Con

    Should lethal autonomous weapons be banned?

    Pro

    Mary Wareham
    Arms Division Advocacy Director, Human Rights Watch. Written for CQ Researcher, November 2022

    Ten years ago, Human Rights Watch united with other civil society groups to co-found the Stop Killer Robots campaign in response to emerging military technologies in which machines would replace human control in the use of armed force.

    There is now widespread recognition that weapons systems that select and attack targets without meaningful human control represent a dangerous development in warfare, with equally disastrous implications for policing. At the United Nations in October, 70 countries, including the United States, acknowledged that autonomy in weapons systems raises “serious concerns from humanitarian, legal, security, technological and ethical perspectives.”

    Delegating life-and-death decisions to machines crosses a moral line, as they would be incapable of appreciating the value of human life and respecting human dignity. Fully autonomous weapons would reduce humans to objects or data points to be processed, sorted and potentially targeted for lethal action.

    A U.N. Human Rights Council resolution adopted Oct. 7 stresses the central importance of human decision-making in the use of force. It warns against relying on nonrepresentative data sets, algorithm-based programming and machine-learning processes. Such technologies can reproduce and exacerbate existing patterns of discrimination, marginalization, social inequalities, stereotypes and bias — with unpredictable outcomes.

    The only way to safeguard humanity from these weapons is by negotiating new international law.

    Such an agreement is feasible and achievable. More than 70 countries see an urgent need for “internationally agreed rules and limits” on autonomous weapons systems. This objective has strong support from scientists, faith leaders, military veterans, industry and Nobel Peace laureates.

    On Oct. 6, Boston Dynamics and five other robotics companies pledged to not weaponize their advanced mobile robots or the software they develop — and called on the robotics community to follow suit.

    There's now much greater understanding among governments of the essential elements of the legal framework needed to address this issue. There is strong recognition that a new international treaty should prohibit autonomous weapons systems that inherently lack meaningful human control or that target people. The treaty should also ensure that other weapons systems can never be used without meaningful human control.

The inability of the current discussion forum to progress to negotiations — due to opposition from some major military powers, such as Russia and the United States — shows its limitations. A new path is urgently needed to negotiate new law. The United States should realize that it is in its interest to participate in drafting new law on killer robots.

    Without a dedicated international legal standard on killer robots, the world faces an increasingly uncertain and dangerous future.

    Con

    Alexandra Seymour
    Associate Fellow, Technology and Security Program, Center for a New American Security. Written for CQ Researcher, November 2022

Society understandably fears a “killer robot” scenario. Empowering technologies to make decisions once made by humans creates unpredictability, raising complex ethical questions about societal risk and accountability. However, although democracies must take a deliberate approach to developing and using autonomous weapons, an outright ban of lethal autonomous weapons systems (LAWS) would only exacerbate the existing artificial intelligence (AI) “hype” and put the United States and other like-minded nations at a strategic disadvantage globally.

    There are three primary reasons to oppose a ban of LAWS:

    First, there is no internationally agreed upon definition for a lethal autonomous weapon, meaning a ban could inadvertently kneecap democracies by preventing them from building safe and mission-enhancing autonomous capabilities. For example, as the United States noted in a 2018 paper for the United Nations, humanitarian applications of LAWS include self-destruct capabilities that avoid unintended civilian casualties, big data analysis to increase information awareness and automated target identification.

    Absent a cohesive definition, all of these applications could be subject to a LAWS ban, given each has a fully autonomous function. Consequently, the United States would be prohibited from developing capabilities that have the potential to save lives, time and money in the long run.

    Second, banning LAWS cultivates fear about a capability that does not, and likely will not, exist. While autonomous capabilities are improving rapidly due to AI advancements, they have not yet reached the level of technological sophistication required to confidently remove a human from the entire decision-making process.

In addition to active work among democracies to establish standards for AI-enabled systems, recent events show that the ability of authoritarian actors to develop these capabilities continues to be limited. Those events include the release of tougher export controls on high-end semiconductor chips and questions raised about reports that Russia deployed an AI-enabled autonomous drone in Ukraine.

    Finally, it is unwise to impose explicit redlines that adversaries will not also draw, because this could give them an advantage. Although China called for a ban on LAWS to claim moral standing in the U.N., that country has not banned the development of LAWS, meaning the Chinese could instead export their technology to other U.S. adversaries.

    Meanwhile, Russia does not support a ban on LAWS. Knowing these stances, the United States, rather than banning LAWS, can continue to prioritize system accountability measures in forthcoming revisions to a Department of Defense directive on autonomous weapons and in broader autonomous weapons strategy, which would inherently create democracy-accepted parameters around autonomous weapon development and use.

    Go to top

    Discussion Questions

    Here are some issues to consider regarding artificial intelligence (AI):

    • What are lethal autonomous weapons, and why do many countries and advocacy groups want to ban them? Do you believe nations should prohibit these weapons?

    • China aims to become dominant in AI by 2030. How could this affect U.S. national security? Why does this worry some experts?

    • Who was Alan Turing, and how were his contributions important to the field of AI?

    • Algorithms are now integral to society, but they also have inherent issues. What are some of the main concerns about the use of algorithms? How do you feel about their power today?

    • There is a growing discussion taking place about regulating AI, especially in terms of consumer rights and data privacy. What are some recent policy developments around this topic? Why can regulating the industry be difficult?

    Go to top


    Chronology

     
1920s–1950s: Early computer inventions and technological research spawn the field of artificial intelligence (AI).
1920: Czech writer Karel Čapek first uses the word “robot” in his play R.U.R.
1936: British mathematician Alan Turing invents the “Turing machine,” a blueprint for the first digital computer.
1946: The first general-purpose computer, ENIAC, is finished and works on calculations for the hydrogen bomb.
1951: Princeton graduate students Marvin Minsky and Dean Edmonds construct the first simple neural network machine, inspired by the human brain.
1956: Dartmouth College hosts a workshop to see if a machine can simulate human intelligence, coining the term “artificial intelligence.”
1960s–1990s: AI advances and spreads to various sectors.
1966: Massachusetts Institute of Technology (MIT) computer scientist Joseph Weizenbaum publishes a report finding people attribute human-like feelings to ELIZA, an early computer program.
1966–1972: Stanford Research Institute creates Shakey, one of the first robots to incorporate AI.
1972: Stanford University researchers develop MYCIN, an AI program known as an “expert system”; it provides diagnostic advice to doctors about blood infections in humans.
1983: The Pentagon's Defense Advanced Research Projects Agency (DARPA) initiates the Strategic Computing Initiative, funding research in chip design and AI software over the next decade.
1995: The United States begins using GPS satellite data for navigation and precision targeting in unmanned drones.
1997: IBM's Deep Blue supercomputer beats world chess champion Garry Kasparov in a six-game match.
1999: Sony introduces AIBO, a canine robot — one of the first mass-market robots for entertainment.
2000s–Present: AI and algorithms become ubiquitous in daily life.
2001: First widespread use of facial recognition software by police begins in Pinellas County, Fla.
2009: Google begins testing self-driving cars.
2011: IBM's supercomputer Watson wins a “Jeopardy!” TV show tournament against two human competitors…. Apple introduces the virtual assistant Siri with speech-recognition software on iPhones.
2013: Sales of industrial robots reach more than 178,000, double the 1995 total.
2014: The United Nations begins convening experts to discuss concerns about emerging technologies in autonomous weapons, amid calls for a ban from human rights groups and 30 nations. The U.N. forum fails to reach an agreement over the next eight years as both Russia and the United States oppose a ban.
2016: A ProPublica investigation finds an AI algorithm used by courts to decide whether to jail defendants before trial wrongly labeled Black defendants as future criminals at almost twice the rate of white defendants.
2017: A robot named Sophia is granted citizenship by Saudi Arabia, amid discussions about the rights of robots…. Global sales of industrial robots reach more than 380,000, double the 2013 total.
2018: Europe's new data privacy law, the General Data Protection Regulation, takes effect, requiring companies to obtain consent before acquiring user data, among other requirements.
2020: OpenAI, a Silicon Valley technology company co-founded by Elon Musk, unveils the world's largest neural network to date — a language model called GPT-3, capable of writing poems and blogs and chatting with humans.
2021: Tesla introduces the humanoid robot Optimus…. The European Commission releases a draft EU Artificial Intelligence Act governing AI products and services…. Washington, D.C., becomes the first U.S. city to consider a bill to prevent algorithmic discrimination.
2022: Google engineer Blake Lemoine claims the LaMDA chatbot is “sentient,” spurring controversy (June)…. President Biden signs the Chips and Science Act to encourage U.S. manufacturing of semiconductor chips (August)…. Tesla introduces a new version of its robot Optimus (September)…. The Biden administration releases a Blueprint for an AI Bill of Rights…. Biden restricts exports to China of advanced semiconductor chips…. The United States is one of 70 nations calling for limitations on autonomy in weapons systems (October)…. U.N. forum meets and schedules further talks in 2023 on banning autonomous weapons (November).

    Go to top

    Short Features

    Courts Turn to Algorithms for Pretrial Release Decisions

    Are they less biased than people?

    Brisha Borden, 18, was arrested in Fort Lauderdale, Fla., in 2014 after she and a friend tried to ride a kid's bike and scooter they had found on the street. They quickly abandoned the items, but a witness had already called the police.

    When Borden was booked into jail, a computer program churned out a score rating her as “high risk” for committing a future crime. By contrast, the program rated Vernon Prater, 41, picked up in the same county for shoplifting, as “low risk.”

    Borden is Black. Prater is white. Two years later, the nonprofit news site ProPublica reported, “The computer algorithm got it exactly backward.” By then, Borden had not been charged with any new crimes, while Prater was serving an eight-year term for stealing thousands of dollars of electronics from a warehouse.1

In its analysis of risk scores assigned to more than 7,000 people in Broward County, Fla. — where Fort Lauderdale is located — ProPublica determined that the computer formula was incorrectly flagging Black people as future criminals at almost twice the rate of white people. Overall, only 20 percent of the people the algorithm predicted would commit future violent crimes actually did.2

    Similar scoring systems, known as pretrial risk assessment tools, are used in two-thirds of U.S. counties, according to a 2019 survey by the Pretrial Justice Institute, a Baltimore-based nonprofit dedicated to pretrial reform. Such programs are used to decide whether a defendant should be jailed or released before trial and, in some counties, to set bail amounts.3

Compilation screenshots of Jens Ludwig and Brandon Buskey (Courtesy Jens Ludwig; ACLU/Molly Kaplan)
    Jens Ludwig, left, an economist and professor at the University of Chicago, and Brandon Buskey, director of the ACLU Criminal Law Reform Project. (Courtesy Jens Ludwig; ACLU/Molly Kaplan)

    Critics say these algorithms — which are powered by artificial intelligence — perpetuate discrimination against Black people by drawing on historical data that can reflect past overpolicing of Black communities. For example, Black people are almost four times as likely to be arrested for marijuana possession as white people, even though they use the drug at comparable rates.4

    Advocates say a well-designed algorithm can reduce discrimination by weeding out biased questions and putting less weight on factors such as low-level marijuana infractions. “Most criminal justice systems inadvertently detain lots of low-risk people and release lots of high-risk ones because it's hard for judges to figure out who is high risk,” says Jens Ludwig, an economist and professor at the University of Chicago's Harris School of Public Policy and director of the university's Crime Lab.

    Typically, “the judge is doing something in their head which is not transparent and adding their own biases on top of that,” Ludwig says. An algorithm is “more transparent than the judge; it lets you make some adjustments to overcome some of these data biases.”

    For example, Ludwig helped design a new risk assessment tool for New York City, which recommended far more people be released pretrial than the city's previous scoring system. The old program advised release for 32 percent of Black defendants and 41 percent of white defendants. The new one recommended release for 83.9 percent of Black defendants and 83.5 percent of white defendants. Between late 2019, after the new tool was released, and early 2020, judges’ decisions appeared to reflect the computer-recommended change — releasing 69 percent of Black people and 72 percent of white people.5

    Vincent M. Southerland, an assistant professor of clinical law at New York University, who served on a research advisory council to help the city design the new mechanism, says New York's case is unique. Most jurisdictions use one of the many off-the-shelf options. They do not have the money or time to develop a custom-made tool as New York did, he says, to sort out which data points are tied to racial disparities.

    In addition, Southerland says, it is hard to know if the increase in releases was due to the new tool or to New York state's bail reform law, which in 2020 ended cash bail for most misdemeanors and nonviolent felonies. Cash bail, the system that allows some criminal defendants to be released from jail after putting up money to assure they appear in court, is frequently unfair to low-income people and people of color. “What's driving the change may be legislation, not the tool,” he says.6

    As for Ludwig's contention in a recent article that algorithms can be “a force for social justice,” Southerland is skeptical.7

    “When you look at pretrial assessments used nationwide, you may see reductions in the pretrial population incarcerated, but you do not see a change in racial disparities,” says Southerland. “No matter what the tool says, if the judge is uncomfortable with release because of biases, they'll make a decision aligned with their worldview.”

That skepticism is shared by some civil rights groups and some state legislators. Seven states mandate the use of the algorithms, but in 2019 Idaho became the first state to require that defendants be entitled to review the calculations behind their own risk scores and that the data be made available to the public.8

    The American Civil Liberties Union (ACLU) opposes the use of algorithms for pretrial decisions. “They're simply not accurate enough as predictive tools; but they also bake in so much structural racism in how we enforce criminal laws against Black and brown communities,” says Brandon Buskey, director of the ACLU Criminal Law Reform Project. The goal of releasing more people fairly can often be achieved simply by reforming the bail system, as in the case of a recent ACLU settlement with the city of Detroit, he says.

    In a striking turnaround, the Pretrial Justice Institute, which had worked with court systems since the 1980s to adopt pretrial risk tools when they were still pencil-and-paper calculations, reversed its position in 2020, saying the algorithms “can no longer be a part of our solution for building equitable pretrial justice systems.”9

    After working with researchers to try to make the algorithms less biased, “what we realized,” says the institute's co-director Meghan Guevara, “is that there really is no way to remove the racial bias from the tools because all of those tools rely on criminal history data, and we know there's racial bias in the way communities are policed, bias in prosecution and in conviction rates.”

    — Sarah Glazer

    [1] Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

    [2] Ibid.

    [3] “Scan of Pretrial Practices,” Pretrial Justice Institute, 2019, p. 7, https://static1.squarespace.com/static/61d1eb9e51ae915258ce573f/t/61df3e19dc500a1e42344351/1642020381052/Scan+of+Pretrial+Practices.pdf.

    [4] Angwin, op. cit.; Tom Angell, “On 4/20, ACLU Highlights Racist Marijuana Enforcement In New Report,” Forbes, April 20, 2020, https://www.forbes.com/sites/tomangell/2020/04/20/on-420-aclu-highlights-racist-marijuana-enforcement-in-new-report/?sh=3b43fce87487.

    [5] Jens Ludwig and Sendhil Mullainathan, “Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System,” American Economic Association, Journal of Economic Perspectives, Fall 2021, pp. 90–91, Table 4, https://www.aeaweb.org/articles?id=10.1257/jep.35.4.71.

[6] Ames Grawert and Noah Kim, “The Facts on Bail Reform and Crime Rates in New York State,” Brennan Center for Justice, March 22, 2022, https://www.brennancenter.org/our-work/research-reports/facts-bail-reform-and-crime-rates-new-york-state#:~:text=New%20York's%20bail%20reform%20legislation,outcome%20of%20a%20criminal%20case.

    [7] Ludwig and Mullainathan, op. cit.

    [8] “Pretrial Release: Risk Assessment Tools,” National Conference of State Legislatures, June 30, 2022, https://www.ncsl.org/research/civil-and-criminal-justice/pretrial-release-risk-assessment-tools.aspx; “Racial and Ethnic Disparities in the Justice System,” National Conference of State Legislatures, May 2022, p. 5, https://www.ncsl.org/Portals/1/Documents/cj/Racial-and-Ethnic-Disparities-in-the-Justice-System_v03.pdf.

    [9] “Updated Position on Pretrial Risk Assessment Tools,” Pretrial Justice Institute, Feb. 7, 2020, https://static1.squarespace.com/static/61d1eb9e51ae915258ce573f/t/61df34bb945c52230a215be9/1642018002889/PJI+Statement+Against+Risk+Assessments.


    Go to top

    New AI Imaging Tools Generate Remarkable Art

    Will they put artists out of work?

    “Meryl Streep in cubist style.”

    Within seconds of typing that prompt, images appear like magic on a computer screen: A semi-abstract oil painting of the actress along with a standard cubist portrait — maybe not Picasso, but certainly reminiscent of the painter's famous style.

The images are created by one of several artificial intelligence (AI) image generators unveiled in recent months to great acclaim for their surprisingly fast, cheap and often aesthetically pleasing, playful results. But the tools have also raised concerns about whether they use artists' work without permission — and whether they could end up putting graphic artists out of a job.

    The new technique poses “implications as big as the invention of the camera — or perhaps the creation of visual art itself,” wrote Benj Edwards, a reporter for Ars Technica, a technology website, shortly after the San Francisco and London-based company Stability AI released its open-source image tool Stable Diffusion in August.10

    Image of actress Meryl Streep created with Stable Diffusion image tool. (Courtesy Sarah Glazer)
    An image of actress Meryl Streep created by the AI image tool Stable Diffusion. (Courtesy Sarah Glazer)

    On Sept. 28, Silicon Valley company OpenAI announced that its image-generation tool, DALL-E, previously limited to certain professionals, would be open to the public. The company reported that 1.5 million users, from artists to authors to architects, were already creating more than 2 million images per day with it.11

People with early access generated seemingly infinite combinations from their prompts, ranging from astronauts on horseback to images of sculptures mimicking the styles of famous artists.12

Both DALL-E and Stable Diffusion were trained on giant data sets that scrape millions of images from the internet. Stable Diffusion was trained on a data set known as LAION-5B, which contains some 5 billion images harvested from the web, and as a result “has absorbed the styles of many living artists,” Edwards reported.13

    Some artists are not happy about that. “I am not okay with my artwork being included” in these databases, wrote artist Glendon Mellow on Twitter. “It's not ethical. These are powerful tools being built recklessly.”14

    In response to this controversy, a group of artists launched Have I Been Trained, a website that allows artists to search whether their images have been scanned and used to train these AI systems. (Harvesting images from public websites is apparently legal. In 2019, a federal appeals court in San Francisco ruled that any publicly available data on the internet that is not copyrighted is available for web scraping.)15

Using that website, a California-based AI artist known as Lapine found a medical image of herself, taken about 10 years ago, that she had consented to share only with her doctor, according to Ars Technica. Others worry about inappropriate or disturbing content within such data sets, such as violent or pornographic imagery, reports VICE's Motherboard, an online magazine.16

    In its content policy for DALL-E, OpenAI prohibits images that involve sexual content, violence, illegal activity and deception. By contrast, Stable Diffusion has fewer guardrails. For example, it does not prohibit making images of public figures.17

    “Stable Diffusion's lack of safeguards,” compared to other systems such as DALL-E, “poses tricky ethical questions for the AI community,” wrote AI reporter Kyle Wiggers of TechCrunch. “Making fake images of public figures opens a large can of worms. And making the raw components of the system freely available leaves the door open to bad actors who could train them on subjectively inappropriate content, like pornography and graphic violence.”18

    Stability AI CEO and founder Emad Mostaque said his company is betting on an “open” ecosystem, telling TechCrunch: “[I]t is our belief this technology will be prevalent, and the paternalistic and somewhat condescending attitude of many AI aficionados is misguided in not trusting society.”19

    For graphic artists, the new technology poses other challenges. Los Angeles-based digital artist Don Allen Stevenson III told the Financial Times, “Artists have to get themselves into a position where they can change and adapt or else they're going to go extinct.”20

But others do not feel threatened by these systems. Erik Carter, a graphic designer in New York City who has published illustrations in The New York Times, says, “I don't think it will replace illustrators anytime soon.” He finds that images produced by DALL-E are often cropped off-center, blurry or too low in resolution.

    At the same time, Carter says DALL-E has “made my work a lot easier.” He uses it to generate some of the sketches he sends to art directors at publications before redrawing the final illustration from the option they select.

    Designers interviewed by The New York Times also reported the new tools were saving time at the front end of their creative process.21

    In his blog, Carter wrote that he does not fear “artistic robot overlords” with all their new illustration tools. “I remain perhaps naively hopeful of the possibility of freeing illustrators from making yet another low-budget piece and instead allowing them to make the kind of work they want to make.”22

    — Sarah Glazer

    [10] Benj Edwards, “With Stable Diffusion, you may never believe what you see online again,” Ars Technica, Sept. 6, 2022, https://arstechnica.com/information-technology/2022/09/with-stable-diffusion-you-may-never-believe-what-you-see-online-again/.

    [11] “DALL-E Now Available Without Waitlist,” OpenAI, Sept. 28, 2022, https://openai.com/blog/dall-e-now-available-without-waitlist/.

    [12] Benj Edwards, “DALL-E image generator is now open to everyone,” Ars Technica, Sept. 28, 2022, https://arstechnica.com/information-technology/2022/09/openai-image-generator-dall-e-now-available-without-waitlist/.

    [13] Edwards, op. cit.

    [14] Glendon Mellow, Twitter post, Aug. 30, 2022, https://twitter.com/FlyingTrilobite/status/1564760472318001152.

    [15] Benj Edwards, “Have AI image generators assimilated your art?” Ars Technica, Sept. 15, 2022, https://arstechnica.com/information-technology/2022/09/have-ai-image-generators-assimilated-your-art-new-tool-lets-you-check/; “US court fully legalized website scraping and technically prohibited it,” Parsers VC, Jan. 28, 2020, https://parsers.me/us-court-fully-legalized-website-scraping-and-technically-prohibited-it/.

    [16] Chloe Xiang, “AI Is Probably Using Your Images and It's Not Easy to Opt Out,” VICE, Motherboard, Sept. 26, 2022, https://www.vice.com/en/article/3ad58k/ai-is-probably-using-your-images-and-its-not-easy-to-opt-out; Benj Edwards, “Artist finds private medical record photos in popular AI training data set,” Ars Technica, Sept. 21, 2022, https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/.

    [17] “Content policy,” OpenAI, Sept. 19, 2022, https://labs.openai.com/policies/content-policy.

    [18] Kyle Wiggers, “This startup is setting DALL-E2-like AI free, consequences be damned,” TechCrunch, Aug. 12, 2022, https://techcrunch.com/2022/08/12/a-startup-wants-to-democratize-the-tech-behind-dall-e-2-consequences-be-damned/.

    [19] Ibid.

    [20] Tom Faber, “The golden age of AI-generated art is here. It's going to get weird,” Financial Times, Oct. 26, 2022, https://www.ft.com/content/073ea888-20d7-437c-8226-a2dd9f276de4.

    [21] Kevin Roose, “AI-Generated Art Is Already Transforming Creative Work,” The New York Times, Oct. 21, 2022, https://www.nytimes.com/2022/10/21/technology/ai-generated-art-jobs-dall-e-2.html.

    [22] Erik Carter, “The DALL-E are Coming,” Design Harder, June 30, 2022, https://designharder.substack.com/p/the-dall-e-are-coming.


    Go to top

    Bibliography

    Books

    Chalmers, David J., Reality+: Virtual Worlds and the Problems of Philosophy, W.W. Norton & Co., 2022. A New York University professor of philosophy and neural science argues that “simulated beings will be as conscious” as human beings and that life in virtual worlds can be as good and meaningful as the real world.

    Hannas, William C., and Huey-Meei Chang, eds., Chinese Power and Artificial Intelligence: Perspectives and Challenges, Routledge, 2022. Experts discuss the impact of AI in China in such diverse areas as neuroscience, quantum science and military applications.

    Marks, Robert J., Non-Computable You: What You Do That Artificial Intelligence Never Will, Discovery Institute Press, 2022. A professor of electrical and computer engineering at Baylor University traces the history of AI, arguing that certain traits, such as creativity, are uniquely human.

    Scharre, Paul, Army of None: Autonomous Weapons and the Future of War, W.W. Norton & Co., 2018. Former U.S. Army Ranger Scharre, now at the Center for a New American Security, discusses the moral questions raised by autonomous weapons with no human involvement.

    Walsh, Toby, Machines Behaving Badly: The Morality of AI, La Trobe University Press, 2022. A professor of AI at the University of New South Wales in Australia argues that lethal autonomous weapons should be banned.

    Articles

    Johnson, Khari, “Biden's AI Bill of Rights Is Toothless Against Big Tech,” Wired, Oct. 4, 2022, https://tinyurl.com/5n8bsum6. A technology reporter argues that the recently released White House Blueprint for an AI Bill of Rights does not have any legally binding requirements and, thus, has no power over the large technology companies that have the most influence in shaping AI technology.

    Swanson, Ana, “The Biden administration is weighing further controls on Chinese technology,” The New York Times, Oct. 27, 2022, https://tinyurl.com/er9ukrp9. The Biden administration, which recently imposed new export controls on semiconductors bound for China, is considering further restrictions on technology exports to the country.

    Tiku, Nitasha, “The Google engineer who thinks the company's AI has come to life,” The Washington Post, June 11, 2022, https://tinyurl.com/pyh8yj5p. Google engineer Blake Lemoine explains why he believes Google's language model, LaMDA, is “sentient.”

    Wadhwa, Vivek, and Mauritz Kop, “Why Quantum Computing Is Even More Dangerous Than Artificial Intelligence,” Foreign Policy, Aug. 21, 2022, https://tinyurl.com/4x3799v9. A columnist (Wadhwa) and a Stanford University visiting scholar (Kop) warn that the emerging technology of quantum computing could be far more powerful than AI, with the ability to break cryptography that is critical to national defense.

    Wolff, Josephine, “The EU's New Proposed Rules on A.I. Are Missing Something,” Slate, April 30, 2021, https://tinyurl.com/47afvvxd. An associate professor of cybersecurity at Tufts University finds the proposed rules for regulating AI in the European Union complex and not as clear as they should be.

    Reports and Studies

    “Mid-Decade Challenges to National Competitiveness,” Special Competitive Studies Project, September 2022, https://tinyurl.com/a6nuc529. Former Google CEO Eric Schmidt's nonprofit research group warns that China is poised to overtake the United States in the global race for AI dominance, with dire implications for national security.

    “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control,” Human Rights Watch, Aug. 10, 2020, https://tinyurl.com/3xb4snkv. An advocacy group summarizes the positions of individual countries on the proposed ban of autonomous AI-based weapons.

    Green, Ben, “The flaws of policies requiring human oversight of government algorithms,” Computer Law & Security Review, July 2022, https://tinyurl.com/ycxwjv87. An assistant professor of public policy at the University of Michigan analyzes proposed policies around the world that put a human in control to protect against algorithms’ mistaken results, and argues this measure provides “a false sense of security.”

    Southerland, Vincent M., “The Intersection of Race and Algorithmic Tools in the Criminal Legal System,” Maryland Law Review, 2021, https://tinyurl.com/ye2386vm. An assistant professor of clinical law at New York University argues there is little evidence that the algorithmic tools used to decide if defendants should be jailed will reduce racial disparities.

    The Next Step

    AI & Consumer Rights/Privacy

    “‘AI will be used to fight scammers,’ says Consumer Protection Secretary,” The Brussels Times, Nov. 6, 2022, https://tinyurl.com/mrxfaru4. Belgium's federal government is cracking down on online scammers, allocating €1 million and leveraging artificial intelligence (AI) tools to combat such cybercrimes.

    Gooding, Piers, and Timothy Kariotis, “Mental Health Apps Are Not Keeping Your Data Safe,” Scientific American, Nov. 15, 2022, https://tinyurl.com/mt67mf4m. Developers of mental health apps and chatbots that use AI-based algorithms rarely address the ethical, privacy and political concerns about how they might be used.

    Karalis, Peter, “Analysis: As AI Meets Privacy, States’ Answers Raise Questions,” Bloomberg Law, Nov. 13, 2022, https://tinyurl.com/2p8ur8tw. In 2023, companies doing business in California, Colorado, Connecticut and Virginia must comply with the states' consumer privacy laws regulating AI-powered data processing.

    China & AI

    Dobberstein, Laura, “Chinese employers sought a million hard core AI techies in five years,” The Register, Nov. 8, 2022, https://tinyurl.com/5n8bb2aa. In recent years, Chinese employers have advertised for nearly a million jobs in the country's AI industry.

    Edgerton, Anna, et al., “US Eyes Expanding China Tech Ban to Quantum Computing and AI,” Bloomberg, Oct. 20, 2022, https://tinyurl.com/35vwstz6. After imposing restrictions on China's access to semiconductor technology, the Biden administration is now exploring limits on the country's access to quantum computing and AI.

    Kaye, Kate, “Why an ‘us vs. them’ approach to China lets the US avoid hard AI questions,” Protocol, Nov. 8, 2022, https://tinyurl.com/2p8z7cjw. Some human rights and industry watchdogs worry that the United States’ focus on beating China in the race for AI dominance blinds the nation to possible harms the technology can pose to society.

    Discrimination/Bias

    “Gender Bias in Search Algorithms Has Effect on Users, New Study Finds,” news release, New York University, July 12, 2022, https://tinyurl.com/yf3xfyv2. Psychology researchers at New York University have published a study showing that internet searches can negatively affect users by promoting gender bias and influencing hiring decisions.

    Heikkilä, Melissa, “A bias bounty for AI will help to catch unfair algorithms faster,” MIT Technology Review, Oct. 20, 2022, https://tinyurl.com/bdcw722m. A group of AI and machine-learning experts is launching a new “bias bounty” competition, which asks participants to create tools that identify and mitigate algorithmic biases that can lead to innocent people being arrested or denied housing, jobs and basic services.

    Mello-Klein, Cody, “Facebook's Ad Delivery Algorithm Is Discriminating Based on Race, Gender and Age in Photos, Northeastern Researchers Find,” News@Northeastern, Oct. 25, 2022, https://tinyurl.com/2jstp296. Researchers at Northeastern University's Khoury College of Computer Sciences found that Facebook's algorithm delivers advertisements to demographic groups based on who is pictured in the ad.

    Rose, Janus, “This Tool Lets Anyone See the Bias in AI Image Generators,” Motherboard, Vice, Nov. 3, 2022, https://tinyurl.com/ye23n8cp. The Stable Diffusion Bias Explorer allows users to combine different descriptive terms to see how AI software uses certain word combinations to produce racial and gender stereotypes.

    Lethal Autonomous Weapons

    “Pakistan pushes for talks to regulate autonomous weapons systems,” Associated Press of Pakistan, Daily Times, Oct. 26, 2022, https://tinyurl.com/yfnex7jj. Pakistan has called on a “handful of states” to drop their opposition to holding talks on a legally binding treaty to prohibit lethal autonomous weapons systems, which Pakistan's ambassador to the United Nations, Khalil Hashmi, calls a defining concern in international arms control.

    Bodkin, Henry, and Aisling O'Leary, “Microdrones: the AI assassins set to become weapons of mass destruction,” The Telegraph, Nov. 14, 2022, https://tinyurl.com/5fxnb6zy. The U.K. is sending Ukraine 850 Black Hornet “microdrones” — so-called killer robots measuring about six inches and weighing a little less than a plum — which can peer around corners and sneak through windows to help in the war against Russia.

    Conn, Ariel, “How Can We Talk About Autonomous Weapons?” IEEE Spectrum, Nov. 3, 2022, https://tinyurl.com/yckd7hmm. A group of experts convened by the Institute of Electrical and Electronics Engineers Standards Association is developing guidelines and common definitions of terms needed to discuss the complicated implications of autonomous weapons.

    Contacts

    Center for a New American Security (CNAS)
    1152 15th St., N.W., Suite 950, Washington, DC 20005
    202-457-9400
    cnas.org
    An independent, bipartisan think tank that specializes in national security and defense policies.

    Center for Democracy & Technology
    1401 K St., N.W., Suite 200, Washington, DC 20005
    202-637-9800
    cdt.org
    A nonpartisan nonprofit that advocates for stronger civil rights protections in the digital age.

    Center for Effective Public Policy
    10605 Concord St., Kensington, MD 20895
    301-589-9383
    cepp.com
    A national nonprofit that works with local, state and tribal criminal legal systems to improve justice and advance community well-being.

    Center for Security and Emerging Technology
    Georgetown University, ICC 301, 37th St., N.W., Washington, DC 20057
    202-687-5696
    cset.georgetown.edu
    A policy research organization within Georgetown University's Walsh School of Foreign Service that produces data-driven research at the intersection of security and technology.

    Center for Strategic and International Studies
    1616 Rhode Island Ave., N.W., Washington, DC 20036
    202-887-0200
    csis.org
    A bipartisan research nonprofit whose purpose is to define the future of national security.

    Human Rights Watch
    350 Fifth Ave., 34th Floor, New York, NY 10118-3299
    212-290-4700
    hrw.org
    An international nongovernmental organization that conducts research and advocacy on human rights.

    Pretrial Justice Institute
    200 East Pratt St., Suite 4100, Baltimore, MD 21202
    667-281-9141
    pretrial.org
    A national nonprofit dedicated to pretrial system reform and ending mass incarceration.

    Special Competitive Studies Project
    Arlington, VA
    scsp.ai
    A bipartisan, nonprofit initiative that recommends ways to strengthen the United States’ long-term competitiveness in artificial intelligence and other areas.

    Footnotes

    [1] Blake Lemoine, “Is LaMDA Sentient?” Medium, June 11, 2022, https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917; lemoine@ & collaborator, “Is LaMDA Sentient? — an Interview,” https://s3.documentcloud.org/documents/22058315/is-lamda-sentient-an-interview.pdf.

    [2] Nitasha Tiku, “The Google engineer who thinks the company's AI has come to life,” The Washington Post, June 11, 2022, https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/.

    [3] Nitasha Tiku, “Google fired engineer who said its AI was sentient,” The Washington Post, July 22, 2022, https://www.washingtonpost.com/technology/2022/07/22/google-ai-lamda-blake-lemoine-fired/.

    [4] L. Marino, “Sentience,” Science Direct, Encyclopedia of Animal Behavior, 2010, https://www.sciencedirect.com/topics/neuroscience/sentience; David J. Chalmers, Reality+ (2022), p. 277; David Chalmers, “Are Large Language Models Sentient?” YouTube, Oct. 13, 2022, https://www.youtube.com/watch?v=-BcuCmf00_Y.

    [5] Vincent M. Southerland, “The Intersection of Race and Algorithmic Tools in the Criminal Legal System,” Maryland Law Review, 2021, https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?article=3888&context=mlr; Ben Green and Amba Kak, “The False Comfort of Human Oversight as an Antidote to A.I. Harm,” Slate, June 15, 2021, https://slate.com/technology/2021/06/human-oversight-artificial-intelligence-laws.html.

    [6] Abubakar Abid et al., “Large language models associate Muslims with violence,” Nature Machine Intelligence, June 2021, https://www.nature.com/articles/s42256-021-00359-2.epdf.

    [7] Southerland, op. cit.; Green and Kak, op. cit.

    [8] Britney Muller, “Machine Learning Experts — Margaret Mitchell,” Hugging Face, March 23, 2022, https://huggingface.co/blog/meg-mitchell-interview; “Machine Learning,” IBM, July 15, 2020, https://www.ibm.com/cloud/learn/machine-learning.

    [9] “DC Chamber of Commerce Provides Testimony to the DC Council Committee on Government Operations for ‘Stop Discrimination by Algorithms Act of 2021,’” DC Chamber of Commerce, Sept. 22, 2022, https://dcchamber.org/wp-content/uploads/2022/09/Testimony-to-the-DC-Council-Committee-on-Government-Operations-for-Stop-Discrimination-by-Algorithms-Act-of-2021-Bill-24-558.pdf.

    [10] “Blueprint for an AI Bill of Rights,” The White House, October 2022, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf; Cristiano Lima, “White House unveils ‘AI bill of rights’ as ‘call to action’ to rein in tool,” The Washington Post, Oct. 4, 2022, https://www.washingtonpost.com/politics/2022/10/04/white-house-unveils-ai-bill-rights-call-action-rein-tool/.

    [11] “Proposal for a Regulation laying down harmonised rules on artificial intelligence,” European Commission, April 21, 2021, https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence.

    [12] “Mid-Decade Challenges to National Competitiveness,” Special Competitive Studies Project, September 2022, https://www.scsp.ai/wp-content/uploads/2022/09/SCSP-Mid-Decade-Challenges-to-National-Competitiveness.pdf.

    [13] “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control,” Human Rights Watch, Aug. 10, 2020, https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.

    [14] “Autonomous weapons that kill must be banned, insists UN chief,” UN News, March 25, 2019, https://news.un.org/en/story/2019/03/1035381.

    [15] “Negotiating a Treaty on Autonomous Weapons Systems: The Way Forward,” Stop Killer Robots, 2022, https://www.stopkillerrobots.org/wp-content/uploads/2022/06/Stop-Killer-Robots-Negotiating-a-Treaty-on-Autonomous-Weapons-Systems-The-Way-Forward.pdf.

    [16] Austin Distel, “Why a GPT-3 Content Generator is a Must-Have for Marketing,” Jasper, May 9, 2022, https://www.jasper.ai/blog/gpt-3-content-generator; Kyle Wiggers, “AI Weekly: Novel architectures could make large language models more scalable,” VentureBeat, Dec. 17, 2021, https://venturebeat.com/uncategorized/ai-weekly-novel-architectures-could-make-large-language-models-more-scalable/; and Daria Zabój, “Key Chatbot Statistics You Should Follow in 2022,” Chatbot, July 12, 2022, https://www.chatbot.com/blog/chatbot-statistics/.

    [17] Yann LeCun, Facebook post, May 17, 2022, https://m.alpha.facebook.com/story.php?story_fbid=10158256523332143&id=722677142.

    [18] “Let Jasper Write Your Marketing Copy For Free,” Jasper, 2022, https://www.jasper.ai/free-trial.

    [19] Brian Heater, “Alexa will soon be able to read stories as your dead grandma,” TechCrunch, June 22, 2022, https://techcrunch.com/2022/06/22/alexa-will-soon-be-able-to-read-stories-as-your-dead-grandma/.

    [20] “HAL 9000,” Robot Hall of Fame, 2003, http://www.robothalloffame.org/inductees/03inductees/hal.html.

    [21] “Are Large Language Models Sentient?” op. cit.

    [22] Gary Marcus, “Deep Fakes versus Deep Understanding,” The Road to AI We Can Trust, Oct. 2, 2022, footnote 1, https://garymarcus.substack.com/p/deepfakes-versus-deep-understanding.

    [23] David J. Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy (2022), pp. xiii-xiv.

    [24] John Horgan, “David Chalmers Thinks the Hard Problem Is Really Hard,” Scientific American, April 10, 2017, https://blogs.scientificamerican.com/cross-check/david-chalmers-thinks-the-hard-problem-is-really-hard/.

    [25] Paul Scharre, Army of None: Autonomous Weapons and the Future of War (2018), pp. 2-3.

    [26] “Peter Maurer: ‘We must decide what role we want human beings to play in life-and-death decisions during armed conflicts,’” International Committee of the Red Cross, May 12, 2021, https://www.icrc.org/en/document/peter-maurer-role-autonomous-weapons-armed-conflict.

    [27] “What are the dangers of autonomous weapons?” International Committee of the Red Cross, YouTube video, Dec. 1, 2021, https://www.youtube.com/watch?v=8GwBTFRFlzA.

    [28] Vivek Wadhwa and Alex Salkever, “Foreign Policy: Killer Flying Robots are Here. What Do We Do Now?” Vivek Wadhwa, July 5, 2021, https://wadhwa.com/articles-list/2021/7/5/foreign-policy-killer-flying-robots-are-here-what-dowe-do-now.

    [29] Ibid.

    [30] Toby Walsh, Machines Behaving Badly (2022), p. 106.

    [31] “Stopping Killer Robots,” op. cit.; “Negotiating a Treaty on Autonomous Weapons Systems,” op. cit.

    [32] “Stopping Killer Robots,” op. cit.

    [33] “US rejects call for regulating or banning ‘killer robots,’” The Guardian, Dec. 2, 2021, https://www.theguardian.com/us-news/2021/dec/02/us-rejects-calls-regulating-banning-killer-robots; “Stopping Killer Robots,” op. cit.

    [34] Carole Landry, “Iranian Drones Strike Kyiv,” The New York Times, Oct. 17, 2022, https://www.nytimes.com/2022/10/17/briefing/russia-ukraine-war-iran-drones.html; Paul Adams and Merlyn Thomas, “Ukraine war: Russia dive-bombs Kyiv with ‘kamikaze’ drones,” BBC News, Oct. 17, 2022, https://www.bbc.com/news/uk-63280523.

    [35] Matthew Gault, “Robot Dog Not So Cute with Submachine Gun Strapped to its Back,” VICE, July 20, 2022, https://www.vice.com/en/article/m7gv33/robot-dog-not-so-cute-with-submachine-gun-strapped-to-its-back; “An Open Letter to the Robotics Industry and our Communities, General Purpose Robots Should Not Be Weaponized,” Boston Dynamics, 2022, https://www.bostondynamics.com/open-letter-opposing-weaponization-general-purpose-robots.

    [36] “What We Do,” Special Competitive Studies Project, 2022, https://www.scsp.ai/about/what-we-do/.

    [37] “Mid-Decade Challenges to National Competitiveness,” Special Competitive Studies Project, September 2022, p. 43, https://www.scsp.ai/wp-content/uploads/2022/09/SCSP-Mid-Decade-Challenges-to-National-Competitiveness.pdf.

    [38] “Mid-Decade Challenges to National Competitiveness,” op. cit., p. 24.

    [39] “Mid-Decade Challenges to National Competitiveness,” op. cit., pp. 17-18.

    [40] Ibid., p. 21; Barry van Wyk, “New backbones for ‘new infrastructure’ — China's multi-trillion dollar new digital landscape,” The China Project, May 19, 2022, https://thechinaproject.com/2022/05/19/new-backbones-for-new-infrastructure-chinas-multi-trillion-dollar-new-digital-landscape/.

    [41] “Mid-Decade Challenges to National Competitiveness,” op. cit.; “Strengthening the Global Semiconductor Supply Chain in an Uncertain Era,” Boston Consulting Group and Semiconductor Industry Association, April 2021, p. 5, https://www.semiconductors.org/wp-content/uploads/2021/05/BCG-x-SIA-Strengthening-the-Global-Semiconductor-Value-Chain-April-2021_1.pdf.

    [42] “Semiconductor manufacturing process,” Hitachi, 2022, http://www.hitachi-hightech.com/global/en/knowledge/semiconductor/room/manufacturing/process.html; “Computer chip,” Encyclopedia Britannica, Feb. 20, 2022, https://www.britannica.com/technology/computer-chip; and Troy Segal, “What Is a Semiconductor and How Is It Used?” Investopedia, Sept. 13, 2022, https://www.investopedia.com/terms/s/semiconductor.asp.

    [43] “Mid-Decade Challenges to National Competitiveness,” op. cit., p. 71; Carla Tardi, “What Is Moore's Law and Is It Still True?” Investopedia, July 17, 2022, https://www.investopedia.com/terms/m/mooreslaw.asp#:~:text=Moore's%20Law%20states%20that%20the,that%20this%20growth%20is%20exponential; and Przemyslaw Kasiorek, “Moore's Law is Dead,” Builtin, Oct. 19, 2022, https://builtin.com/hardware/moores-law.

    [44] Vivek Wadhwa and Mauritz Kop, “Why Quantum Computing is Even More Dangerous than Artificial Intelligence,” Foreign Policy, Aug. 21, 2022, https://foreignpolicy.com/2022/08/21/quantum-computing-artificial-intelligence-ai-technology-regulation/.

    [45] Ibid.

    [46] Jennifer A. Chandler et al., “Brain Computer Interfaces and Communication Disabilities: Ethical, Legal, and Social Aspects of Decoding Speech From the Brain,” Frontiers in Human Neuroscience, April 21, 2022, https://www.frontiersin.org/articles/10.3389/fnhum.2022.841035/full; Huang Lanlan, “Chinese hospital to treat depression with brain-computer interface system, ‘has passed ethical reviews,’” Global Times, Dec. 11, 2020, https://www.globaltimes.cn/content/1209689.shtml.

    [47] William Hannas et al., “China's Advanced AI Research: Monitoring China's Paths to ‘General’ Artificial Intelligence,” Center for Security and Emerging Technology, July 2022, https://cset.georgetown.edu/publication/chinas-advanced-ai-research/.

    [48] “Mid-Decade Challenges to National Competitiveness,” op. cit., pp. 134, 140.

    [49] “Pygmalion,” Encyclopedia Britannica, March 11, 2019, https://www.britannica.com/topic/Pygmalion; “Plot Summary,” BBC, 2022, https://www.bbc.co.uk/bitesize/guides/z8w7mp3/revision/1#:~:text=1%20of%205-,Frankenstein%20%2D%20Plot%20summary,Victor%20and%20mankind%20in%20general.

    [50] “Karel Capek and the Robot (Complete History),” History-Computer, Jan. 4, 2021, https://history-computer.com/karel-capek-and-the-robot-complete-history/.

    [51] Patrick Marshall, “Algorithms and Artificial Intelligence,” CQ Researcher, July 6, 2018, https://library.cqpress.com/cqresearcher/document.php?id=cqresrre2018070600&type=hitlist&num=0.

    [52] Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going (2021), pp. 13, 14.

    [53] “Timeline of Computer History,” Computer History Museum, 2022, https://www.computerhistory.org/timeline/computers/; “John Vincent Atanasoff,” Encyclopedia Britannica, Sept. 30, 2022, https://www.britannica.com/biography/John-V-Atanasoff; and “Atanasoff-Berry Computer,” Computer History Museum, accessed Nov. 15, 2022, https://www.computerhistory.org/timeline/1945/.

    [54] Joel Achenbach, “What ‘The Imitation Game’ didn't tell you about Turing's greatest triumph,” The Washington Post, Feb. 20, 2015, https://www.washingtonpost.com/national/health-science/what-imitation-game-didnt-tell-you-about-alan-turings-greatest-triumph/2015/02/20/ffd210b6-b606-11e4-9423-f3d0a1ec335c_story.html; William Poundstone, “John von Neumann,” Encyclopedia Britannica, Sept. 1, 2022, https://www.britannica.com/biography/John-von-Neumann.

    [55] Marshall, op. cit.; A.M. Turing, “Computing Machinery and Intelligence,” Mind, Oct. 1, 1950, https://academic.oup.com/mind/article/LIX/236/433/986238; and Robert Epstein, “Can Machines Think?” AI Magazine, Summer 1992, https://www.aaai.org/ojs/index.php/aimagazine/article/view/993/911.

    [56] Ibid.; Stephen Johnson, “The Turing test: AI still hasn't passed the ‘imitation game,’” Big Think, March 7, 2022, https://bigthink.com/the-future/turing-test-imitation-game/; and Wooldridge, op. cit., pp. 23–24.

    [57] Wooldridge, op. cit., p. 25.

    [58] Ibid.

    [59] “Joseph Weizenbaum, professor emeritus of computer science, 85,” MIT News, March 10, 2008, https://news.mit.edu/2008/obit-weizenbaum-0310; Marshall, op. cit.

    [60] Clifford A. Pickover, Artificial Intelligence: An Illustrated History (2019), p. 79; Toby Walsh, Machines Behaving Badly (2022), pp. 43–44.

    [61] “1951 — Snarc Maze Solver — Minsky / Edmonds (American),” Cyberneticzoo.com, Nov. 17, 2009, http://cyberneticzoo.com/mazesolvers/1951-maze-solver-minsky-edmonds-american/.

    [62] Pickover, op. cit., p. 94; J. McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” Formal Reasoning Group, Aug. 31, 1955, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

    [63] Ibid.; “Artificial Intelligence (AI) Coined at Dartmouth,” Dartmouth College, 1956, https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth.

    [64] Pickover, op. cit., p. 118; Wooldridge, op. cit., pp. 47-49.

    [65] Wooldridge, op. cit., pp. 36, 61–62.

    [66] Ibid., pp. 64–65.

    [67] Ibid., p. 66; B.J. Copeland, “MYCIN,” Encyclopedia Britannica, Nov. 21, 2018, https://www.britannica.com/technology/MYCIN.

    [68] Alex Roland and Philip Shiman, Strategic Computing: DARPA and the Quest for Machine Intelligence (2002), p. 2; “Fifth Generation Computer Systems,” Wikipedia, accessed Nov. 18, 2022, https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Systems; and “Strategic Computing Initiative,” Wikipedia, accessed Nov. 18, 2022, https://en.wikipedia.org/wiki/Strategic_Computing_Initiative.

    [69] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014), pp. 9–10.

    [70] P.W. Singer, “Drones Don't Die — A History Of Military Robotics,” HistoryNet, May 5, 2011, https://www.historynet.com/drones-dont-die-a-history-of-military-robotics/.

    [71] Pickover, op. cit., pp. 163-164.

    [72] Pickover, op. cit., p. 168; “Roomba Robot Vacuum Cleaner,” National Museum of American History, accessed Nov. 15, 2022, https://americanhistory.si.edu/collections/search/object/nmah_1448432#:~:text=On%20the%20market%20beginning%20in,the%20basic%20patent%20is%206%2C883%2C201; and Pickover, op. cit., p. 176.

    [73] Thorin Klosowski, “Facial Recognition Is Everywhere. Here's What We Can Do About It,” Wirecutter, The New York Times, July 15, 2020, https://www.nytimes.com/wirecutter/blog/how-facial-recognition-works/.

    [74] “A Computer Called Watson,” IBM, accessed Nov. 15, 2022, https://www.ibm.com/ibm/history/ibm100/us/en/icons/watson/.

    [75] Lauren J. Young, “What Has IBM Watson Been Up to Since Winning ‘Jeopardy!’” Inverse, April 5, 2016, https://www.inverse.com/article/13630-what-has-ibm-watson-been-up-to-since-winning-jeopardy-5-years-ago.

    [76] “Executive Summary,” International Federation of Robotics, 2014, http://www.diag.uniroma1.it/~deluca/rob1_en/2014_WorldRobotics_ExecSummary.pdf.

    [77] “Timeline of Computer History: 2011,” Computer History Museum, 2022, https://www.computerhistory.org/timeline/2011/; “Voice Assistant Timeline,” Voicebot.ai, July 14, 2017, https://voicebot.ai/2017/07/14/timeline-voice-assistants-short-history-voice-revolution/.

    [78] “Global industrial robot sales doubled over the past five years,” International Federation of Robotics, Oct. 18, 2018, https://ifr.org/ifr-press-releases/news/global-industrial-robot-sales-doubled-over-the-past-five-years.

    [79] Avery Hartmans, “How Google's self-driving car project rose from a crazy idea to a top contender in the race toward a driverless future,” Business Insider, Oct. 23, 2016, https://www.businessinsider.com/google-driverless-car-history-photos-2016-10.

    [80] “Taking our next step in the City by the Bay,” Waymo, March 30, 2022, https://blog.waymo.com/2022/03/taking-our-next-step-in-city-by-bay.html; Cade Metz, “Stuck on the Streets of San Francisco in a Driverless Car,” The New York Times, Oct. 14, 2022, https://www.nytimes.com/2022/09/28/technology/driverless-cars-san-francisco.html.

    [81] Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

    [82] Walsh, op. cit., p. 135.

    [83] “Civil Law Rules on Robotics,” European Parliament, Feb. 16, 2017, https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html; Walsh, op. cit., pp. 115–116.

    [84] Ben Wolford, “What is GDPR, the EU's new data protection law?” GDPR.EU, 2022, https://gdpr.eu/what-is-gdpr/; Luke Irwin, “How the GDPR affects cookie policies,” IT Governance, April 12, 2022, https://www.itgovernance.eu/blog/en/how-the-gdpr-affects-cookie-policies.

    [85] Walsh, op. cit., p. 64; “Let Jasper Write Your Marketing Copy For Free,” op. cit.

    [86] Anthony Cuthbertson, “Elon Musk says ‘epic’ Tesla robot Optimus will be unveiled at AI event,” Independent, Yahoo News, June 3, 2022, https://news.yahoo.com/elon-musk-says-epic-tesla-095813684.html.

    [87] Andrew J. Hawkins, “What to expect from Tesla's AI Day event,” The Verge, Sept. 28, 2022, https://www.theverge.com/2022/9/28/23374494/tesla-event-ai-day-robot-elon-musk-rumors-announcements-news; “Report: Nearly 400 crashes by ‘self-driving’ cars in the U.S.,” Al Jazeera, June 15, 2022, https://www.aljazeera.com/economy/2022/6/15/report-nearly-400-crashes-by-self-driving-cars-in-the-us; and Metz, op. cit.

    [88] “Blueprint for an AI Bill of Rights,” op. cit.

    [89] Ibid., p. 48.

    [90] Lima, op. cit.

    [91] Khari Johnson, “Biden's AI Bill of Rights Is Toothless Against Big Tech,” Wired, Oct. 4, 2022, https://www.wired.com/story/bidens-ai-bill-of-rights-is-toothless-against-big-tech/.

    [92] Susan Ariel Aaronson, “Biden's New AI Policy Falls Short on a Key Problem,” Barron's, Oct. 12, 2022, https://www.barrons.com/articles/biden-ai-blueprint-trust-tech-51665582525.

    [93] Franco Ordoñez, “Biden has $52 billion for semiconductors. Today, work begins to spend that windfall,” NPR, Oct. 6, 2022, https://www.npr.org/2022/10/06/1126947495/biden-has-52-billion-for-semiconductors-today-work-begins-to-spend-that-windfall.

    [94] Jacob Knutson, “Biden signs $280 billion chip funding bill,” Axios, Aug. 9, 2022, https://www.axios.com/2022/08/09/biden-chips-bill-signing.

    [95] David Ignatius, “What if the United States loses the AI race against China?” The Washington Post, Sept. 13, 2022, https://www.washingtonpost.com/opinions/2022/09/13/artificial-intelligence-ai-high-tech-race-with-china/.

    [96] “Chips for America Act & FABS Act,” Semiconductor Industry Association, 2022, https://www.semiconductors.org/chips/.

    [97] E.J. Dionne Jr., “Opinion: The chips bill means the Era of Hands-Off Government is over,” The Washington Post, July 27, 2022, https://www.washingtonpost.com/opinions/2022/07/27/chips-funding-bill-big-government/.

    [98] Ana Swanson, “Biden Administration Releases Plan for $50 Billion Investment in Chips,” The New York Times, Sept. 6, 2022, https://www.nytimes.com/2022/09/06/business/economy/biden-tech-chips.html.

    [99] Ana Swanson, “Biden Administration Clamps Down on China's Access to Chip Technology,” The New York Times, Oct. 7, 2022, https://www.nytimes.com/2022/10/07/business/economy/biden-chip-technology.html.

    [100] Ibid.

    [101] Ted Lieu, “Op-ed: Facial recognition technology victimizes people of color. It must be regulated,” Office of Congressman Ted Lieu, Sept. 29, 2022, https://lieu.house.gov/media-center/editorials/op-ed-facial-recognition-technology-victimizes-people-color-it-must-be.

    [102] Jake Laperruque, “Limiting Face Recognition Surveillance: Progress and Paths Forward,” Center for Democracy & Technology, Aug. 23, 2022, https://cdt.org/insights/limiting-face-recognition-surveillance-progress-and-paths-forward/; Meghna Chakrabarti, “San Francisco Bans Facial Recognition Tech Over Surveillance, Bias Concerns,” WBUR, May 16, 2019, https://www.wbur.org/onpoint/2019/05/16/san-francisco-facial-recognition-technology; and Katie Lannan, “Somerville Bans Government Use Of Facial Recognition Tech,” WBUR, June 28, 2019, https://www.wbur.org/news/2019/06/28/somerville-bans-government-use-of-facial-recognition-tech.

    [103] Martin Austermuhle, “D.C. attorney general introduces bill to ban ‘algorithmic discrimination,’” NPR, Dec. 10, 2021, https://www.npr.org/local/305/2021/12/10/1062991462/d-c-attorney-general-introduces-bill-to-ban-algorithmic-discrimination.

    [104] “B24-558, the ‘Stop Discrimination by Algorithms Act of 2021,’” Center for Democracy and Technology, Oct. 5, 2022, https://cdt.org/wp-content/uploads/2022/10/CDT-Written-Testimony-on-B24-558-Stop-Discrimination-by-Algorithms-Act.pdf.

    [105] John L. Culhane, Jr., “Bill to regulate use of algorithms under consideration by D.C. Council,” Consumer Finance Monitor, Oct. 12, 2022, https://www.consumerfinancemonitor.com/2022/10/12/bill-to-regulate-use-of-algorithms-under-consideration-by-d-c-council/#:~:text=A%20bill%20now%20being%20considered,regarding%20the%20individuals%20to%20whom; “DC Chamber of Commerce Provides Testimony,” op. cit.

    [106] “Proposal for a Regulation,” op. cit.

    [107] Josephine Wolff, “The EU's New Proposed Rules on A.I. Are Missing Something,” Slate, April 30, 2021, https://slate.com/technology/2021/04/eu-proposed-rules-artificial-intelligence.html.

    [108] Green and Kak, op. cit.; Ben Green, “The flaws of policies requiring human oversight of government algorithms,” Computer Law & Security Review, 2022, https://www.sciencedirect.com/science/article/pii/S0267364922000292.

    [109] Pieter Haeck, “Ex-Google boss slams transparency rules in Europe's AI bill,” Politico, May 31, 2021, https://www.politico.eu/article/ex-google-boss-eu-risks-setback-by-demanding-transparent-ai/.

    [110] “Mid-Decade Challenges to National Competitiveness,” op. cit., p. 24.

    [111] Evangelos Razis, “Europe's Gamble on AI Regulation,” U.S. Chamber of Commerce, 2021, https://www.uschamber.com/technology/europe-s-gamble-ai-regulation.

    [112] Leonid Bershidsky, “Europe's Privacy Rules Hurt Small Firms, Not Tech Giants,” Bloomberg, July 25, 2019, https://www.bloomberg.com/opinion/articles/2019-07-26/europe-s-privacy-rules-hurt-small-firms-not-google-and-facebook.

    [113] “Joint Statement on Lethal Autonomous Weapons Systems,” First Committee, 77th U.N. General Assembly, Oct. 21, 2022, https://reachingcriticalwill.org/images/documents/Disarmament-fora/1com/1com22/statements/21Oct_LAWS.pdf; and “State positions,” Automated Decision Research, https://automatedresearch.org/state-positions/?_state_position_negotiation=yes.

    [114] “2022 Convention on Certain Conventional Weapons,” Reaching Critical Will, 2022, https://reachingcriticalwill.org/disarmament-fora/ccw/2022; Mary Wareham, Twitter post, Nov. 19, 2022, https://twitter.com/marywareham/status/1593863065770106880; “Meeting of the High Contracting Parties to the Convention …,” Convention on Certain Conventional Weapons, Draft Final Report, Nov. 16–18, 2022.

    [115] Émile Torres, “Opinion: How AI could accidentally extinguish humankind,” The Washington Post, Aug. 31, 2022, https://www.washingtonpost.com/opinions/2022/08/31/artificial-intelligence-worst-case-scenario-extinction/; Vincent C. Müller and Nick Bostrom, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” Fundamental Issues of Artificial Intelligence, 2014, https://nickbostrom.com/papers/survey.pdf.

    [116] Wadhwa and Kop, op. cit.

    About the Author

    Sarah Glazer

    Sarah Glazer is a New York-based freelancer who contributes regularly to CQ Researcher. Her articles on health, education and social-policy issues also have appeared in The New York Times and The Washington Post. Her recent CQ Researcher reports include “Expertise Under Assault” and “Endangered Species.” She graduated from the University of Chicago with a B.A. in American history. Her most recent report for CQ Researcher was on organ trafficking.

    ISSUE TRACKER for Related Reports
    Artificial Intelligence
    Nov. 25, 2022  The Future of Artificial Intelligence
    Jul. 06, 2018  Algorithms and Artificial Intelligence
    Sep. 25, 2015  Robotics and the Economy
    Jan. 23, 2015  Robotic Warfare
    Apr. 22, 2011  Artificial Intelligence
    Nov. 14, 1997  Artificial Intelligence
    Aug. 16, 1985  Artificial Intelligence
    May 14, 1982  The Robot Revolution
    BROWSE RELATED TOPICS:
    Bilateral and Regional Trade
    Campaigns and Elections
    Civil Rights and Civil Liberty Issues
    Cold War
    Computers and the Internet
    Congress Actions
    Consumer Behavior
    Consumer Protection and Product Liability
    Engineering
    Equal Employment Opportunity & Discrimination
    General International Relations
    International Law and Agreements
    Internet and Social Media
    Party Politics
    Powers and History of the Presidency
    Protest Movements
    Regional Political Affairs: East Asia and the Pacific
    Regional Political Affairs: Russia and the Former Soviet Union
    Regulation and Deregulation
    Technology
    World Trade Organization (WTO)
    World War II