How to Stay Smart in a Smart World

Why Human Intelligence Still Beats Algorithms


About

STAYING IN CHARGE: How do we navigate a world populated by algorithms that beat us in chess, find us romantic partners, and tell us to “turn right in 500 yards”?

“Anyone worried about the age of AI will sleep better after reading this intelligent account” about the limits and dangers of technology (Publishers Weekly).

Doomsday prophets of technology predict that robots will take over the world, leaving humans behind in the dust. Tech industry boosters think replacing people with software might make the world a better place—while tech industry critics warn darkly about surveillance capitalism. Despite their differing views of the future, they all seem to agree: machines will soon do everything better than humans. In How to Stay Smart in a Smart World, Gerd Gigerenzer shows why that’s not true, and tells us how we can stay in charge in a world populated by algorithms.

Machines powered by artificial intelligence are good at some things (playing chess), but not others (life-and-death decisions, or anything involving uncertainty). Gigerenzer explains why algorithms often fail at finding us romantic partners (love is not chess), why self-driving cars fall prey to the Russian Tank Fallacy, and how judges and police rely increasingly on nontransparent “black box” algorithms to predict whether a criminal defendant will reoffend or show up in court. He invokes Black Mirror, considers the privacy paradox (people want privacy but give their data away), and explains that social media get us hooked by programming intermittent reinforcement in the form of the “like” button. We shouldn’t trust smart technology unconditionally, Gigerenzer tells us, but we shouldn’t fear it unthinkingly, either.

Reviews

“Anyone worried about the age of AI will sleep better after reading this intelligent account.”
Publishers Weekly

“A seriously compelling, eye-opening, and well-researched investigation.”
Library Journal

“Persuasive.”
The Times (UK)

“Gigerenzer deftly explains the limits and dangers of technology and AI.”
New Scientist

“Essential reading for anyone exposed to technology that shapes our behavior rather than meeting our needs. In other words, it is essential reading for all of us.”
Morning Star

Author

Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam, Director Emeritus at the Max Planck Institute for Human Development, and Partner of Simply Rational—the Institute for Decisions. He is the author of Calculated Risks, Gut Feelings, Risk Savvy, and How to Stay Smart in a Smart World (MIT Press).

Table of Contents

Introduction ix
Part I: The Human Affair with AI
1 Is True Love Just a Click Away? 3
2 What AI Is Best At: The Stable-World Principle 21
3 Machines Influence How We Think about Intelligence 41
4 Are Self-Driving Cars Just Down the Road? 49
5 Common Sense and AI 73
6 One Data Point Can Beat Big Data 93
Part II: High Stakes
7 Transparency 113
8 Sleepwalking into Surveillance 139
9 The Psychology of Getting Users Hooked 173
10 Safety and Self-Control 187
11 Fact or Fake? 199
Acknowledgments 227
Notes 229
Bibliography 255
Index 285