Introduction
A humanoid robot named Nina is displayed at the University of Grenoble in France in November 2017. Artificial intelligence (AI) enables Nina to “learn” as she encounters different situations. Critics of the technology worry that robots could someday be independent of their human creators, but AI's defenders say such fears are overblown. (Cover: AFP/Getty Images/Jean-Pierre Clatot)
Algorithms increasingly shape modern life, helping Wall Street to decide stock trades, Netflix to recommend movies and judges to dispense justice. But critics say algorithms — the seemingly inscrutable computational tools that help give artificial intelligence (AI) the ability to “think” and “learn” — can lead to skewed results and sometimes social harm. AI might help mortgage companies decide whom to lend to, but qualified borrowers can be rejected if the underlying algorithms are faulty. Companies might use AI to screen job applicants, but skilled talent can be turned away if the algorithms reflect racial or gender bias. Moreover, the use of algorithms is raising difficult questions about who — if anyone — is liable when AI results in injury. The technology is even stirring fears of an AI apocalypse in which computers become so powerful and autonomous that they threaten humankind. Some experts want the federal government to strictly regulate AI to ensure it is not misused, but critics fear more rules would stifle the technology.