“Companies Deploying AI Have a Powerful Role in Shaping the Kind of AI That Gets Developed.”
While acknowledging AI’s importance and potential, Arvind Narayanan and Sayash Kapoor warn of its snake oil aspects and support their warning with case histories, evidence, and suggested corrective actions. Their book meets today’s needs with deep information and practical steps for AI users. It covers predictive and generative AI and debunks overhyped myths while explaining when you can trust AI and when you cannot. The authors told getAbstract that AI’s value varies greatly from one firm to another, depending on its uses and input data. They suggest careful pilot projects to test its accuracy and utility in your company’s context.
Narayanan and Kapoor explain how predictive AI fails users who draw the wrong conclusions from predictions, game AI systems, over-rely on AI without oversight, use faulty training data, or succumb to AI’s tendency to exacerbate inequities. If you use predictive AI, you need scientific evidence that its predictions are accurate and actionable. You should also maintain processes that enable people to challenge and correct AI’s faulty decisions. The authors also outline generative AI’s hazards, including developers’ lack of transparency. They call for oversight, better public information, and fair labor practices. However, they make users, not developers, responsible for correct deployment, and urge companies to establish clear guidelines. They caution that AI content may need so much fact-checking that producing it manually may be more efficient. And they open the door to more government regulation, if required.
The authors’ informed insights go far beyond what unassuming users know.
For example, they note that AI developers don’t protect their hired content monitors in less-affluent countries from an onslaught of hatred, lies, and porn. They urge the industry to support these at-risk employees. The average consumer has no idea this problem exists. Narayanan and Kapoor also alert you that machine learning models can’t determine truth or falsity, so you must. And, they note, social media algorithms reward excitement, not truth, value, or utility. The prevalence of harmful social media content happens by design, not accident, and that design depends on AI.
AI Snake Oil is unequivocal that strengthening democratic institutions is a crucial way to control AI’s risks. Instead of worrying about sci-fi problems, like an AI takeover, Narayanan and Kapoor say savvy consumers should worry about giving AI users too much power.
The real threat is bad actors, not bad AI.
To stop malicious manipulators, strengthen institutions that push back against their deliberate lies. Businesses should ensure that their AI usage aligns with their values and doesn’t just follow the money. Narayanan and Kapoor cite cases where AI has violated ethical values in customer service and health care, and they hold private enterprise responsible for its employees and their tools. As they told getAbstract, “Companies deploying AI have a powerful role in shaping the kind of AI that gets developed.”
For such perspicacity about the intersection of knowledge, technology, morality, and business, we are proud to honor AI Snake Oil with getAbstract’s International Book Award in the Business Impact category.