In the twenty-first century, technology radically affects the lives of everyone on the planet. Frank Pasquale, a leading commentator on artificial intelligence (AI) law and policy, has been thinking about these effects for decades. From his 2002 article “Two Concepts of Immortality” to his latest book New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020), Pasquale has explored the myriad ways that technological advances affect how we work, what media we consume, how law is made and enforced, and much more. He brings a refreshingly philosophical, even spiritual, perspective to these discussions, while concretely addressing the problems that arise when robots advance into hospitals, schools, and militaries.
A professor at Brooklyn Law School, Pasquale has taught topics ranging from intellectual property to health-care finance and regulation. Pasquale’s book The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) has been recognized internationally as a landmark work of law and political economy, and has been translated into several languages. The journal Big Data & Society recently published a symposium on it. Pasquale’s contributions are not only scholarly, however. He has also testified before House and Senate committees on algorithms and big data, and currently chairs the Subcommittee on Privacy, Confidentiality, and Security, part of the National Committee on Vital and Health Statistics. The following interview was conducted by email and edited for clarity and length.
LAWRENCE JOSEPH: New Laws of Robotics: Defending Human Expertise in the Age of AI begins where your monumental 2015 book The Black Box Society: The Secret Algorithms That Control Money and Information left off. The two books capture our twenty-first century of techno-capital, law, and political economy with unmatched breadth, depth, ambition, and vision. Could you describe the connections you see between them?
FRANK PASQUALE: Black Box Society was fueled by a simple question: How did they get away with it? “They” here refers to the massive finance and technology companies, now at the commanding heights of the global economy, that have done so much damage with so little accountability. I did not think the book would have much of an audience beyond lawyers. But I was fortunate, and readers in business, engineering, media, and many other fields told me how much it resonated with them.
Through these conversations, I became convinced that we needed more than a critique of tech and finance to build a better world. We needed a compelling story about the kind of future we want. That was my blueprint for New Laws of Robotics: to explore positive narratives of innovation in fields such as health care and education, where AI and robotics are enhancing human jobs and interactions.
But there are so many troubling uses of AI and robotics out there that the critical spirit of Black Box Society remains. The stakes have gotten higher. People are being hired and fired based on AI analysis of their voice or social media. We keep seeing AI (and even robots) that try to pass off mechanistic mimicry as real human judgment and emotion. Deceptive simulation is a major concern of New Laws of Robotics.
LJ: Why is AI’s simulation of humans so worrisome?
FP: Let’s start with an example we can all relate to—a classroom. Some robots can be great for kids. For example, a “Dragonbot” incorporates a cell-phone screen and sound into a plush dragon toy. In some ways, it’s like a talking doll, but it’s connected to databases so it can say much more. To the extent the child understands the Dragonbot as a toy or a tool, they’re developing an accurate understanding of technology and of the distinction between things and people. You can stuff the Dragonbot in your closet for a few days, perhaps never use it again, no problem. You can take out its “brain” (the cell phone that animates it), and that’s fine, too.
Now think about a child growing up with a mechanical peer or a mechanical teacher in her classroom. What’s strange and alienating there is that these roles have always been occupied by people with needs, goals, fears, and hopes rooted in a common physiological experience—that of having a human body and mind. There is a fundamental equality among us, a common dignity grounded in our common fragility. The Dragonbot isn’t hurt if you ignore it. Your classmate might be, and part of learning to be human is developing a corresponding sense of care and regard.
Now, a child could try to develop that same sense of empathy for a robot, such as the humanoid “Adam” described in Ian McEwan’s novel Machines Like Me. But that seems misdirected. Whatever the robot “feels” (or, to be more accurate, however it simulates feeling) is a product of its programming. It could be programmed otherwise. By contrast, we humans can’t be “programmed” not to care about being neglected.
There are other dimensions of fakery that also concern me. “Deepfakes” are AI-generated video, text, and images that simulate events, documents, and people. Even when they are debunked, they can cause confusion at critical moments. The images of faces churned out by sites like “This Person Does Not Exist” could be used as profile pictures for armies of social-media accounts—which in turn can fake groundswells of enthusiasm or rage, especially when coupled with AI text-generating models like GPT-3. They can also swamp hashtags on Twitter, making it hard for people talking about a topic like #BlackLivesMatter to communicate collectively. My book proposes limits on deceptive mimicry, including bans on AI and robotics that counterfeit certain human qualities. Ideally, the “new laws of robotics” developed in the book would guide policy making generally.
LJ: How would you describe these “new laws”?
FP: I should start with the most famous set of “Laws of Robotics” in the English-speaking world—Isaac Asimov’s. His science fiction has had an enormous influence, and it shows how much imaginative writing can inform and shape fields far beyond itself. He set several stories about robotics in mid-twenty-first-century America, about a hundred years after he was writing. He assumed the development of robots indistinguishable from humans, as in his story “Evidence,” whose plot revolves around a politician’s struggle to demonstrate he is not a robot. In the course of another short story, Asimov introduced three laws of robotics: a robot may not harm a human being, must obey human orders unless they conflict with that first law, and must protect its own existence so long as doing so conflicts with neither. Since Asimov wrote, hundreds of authors and institutions have developed standards more detailed than these basic principles. There are now myriad potential ethical and legal restraints on AI, many of which I cite in the book. But Asimov’s clarity and simplicity appealed to me, as a way of boiling them all down into a memorable set of principles.
I also wanted to bring in a political economy perspective, so we can develop institutions that keep human beings durably in control of technology. That’s what led to the first new law of robotics in my book: robotic systems and AI should complement professionals, not replace them.
LJ: In what ways? Why are professions so important to your vision?
FP: At best, professions distribute power over a sector to labor, so it’s not just governed by capital or politicians. If the goal of technology is to create a robot nurse or doctor, for example, you can be assured that a few powerful tech firms are going to have an outsized say in the future of medicine. You need immense amounts of data and investment to do it, and few firms have that kind of money and power. If the goal is to create tech that assists doctors and nurses, that’s a much more democratic structure for the economy. More diverse AI developers can play a role, and they’re more likely to work as partners with (rather than bosses of) domain experts. And you can envision that dynamic in many other fields: teaching, journalism, law, finance, etc.
My second new law of robotics further supports these ideals of fairness and democratization: robotic systems and AI should not counterfeit humanity. When they intervene in human affairs, we need to know what they are. Otherwise, we risk being manipulated by the dozens, thousands, or millions of replicants that skilled programmers (backed by wealthy interests) can deploy.
To make this more concrete: when chatbots fool the unwary into thinking that they are interacting with humans, their programmers act as counterfeiters, falsifying actual human existence to increase the status of their machines. When the counterfeiting of money reaches a critical mass, genuine currency loses value. Much the same fate lies in store for human relationships in societies that allow machines to freely mimic the emotions, speech, and appearance of humans.
LJ: Would your laws do too much to shape the development of robotics technology, including taking away the simple fun of it?
FP: I think we can have plenty of robot services (and even entertainment), while also knowing it’s robots that are performing them. It’s only the fakery that I want to stop. As for shaping technology: I plead guilty. I definitely have an industrial policy, one reflected in the first two laws and made explicit in the third: robotic systems and AI should not intensify zero-sum arms races. The most obvious application here is for the military: escalating investment in killer robots can only end in the chaos of slaughter, or a frozen order guaranteed by mechanical intimidation. But there are also “everyday arms races” that waste our time and money, such as when people feel they have to invest in the newest tech, or reveal more about themselves to AI-driven decision makers, just to beat out competitors. Banks want us to reveal more and more about ourselves to get better terms on loans. High-frequency stock trading is an arms race for speed, driven by algorithms. When you look closely at companies such as Google or Facebook or Amazon, much of their business model is based on forcing smaller businesses to bid for attention on automated ad exchanges. They’re acting as parasitic middlemen squeezing the life out of smaller firms by setting up tech-mediated arms races for customer attention. AI policy should limit that.
The first and third new laws of robotics—promoting complementary AI and discouraging arms races—are rooted in present concerns. But I also wanted to address future technology. So the fourth new law demands that robotic systems and AI always indicate the identity of their human controllers. That may seem like a trivial task. But it also encodes a much larger vision of technology: that any robot or drone unleashed on the world has a human controller. In my vision of the future, we never get truly autonomous AI, because any AI that is created must have some persons responsible for it, who can check any action it may take.
Some researchers pursuing “artificial general intelligence” will bristle at this rule, because they see AIs as “mind children” that will eventually become as competent and capable of free will as their own human offspring. We don’t hold parents responsible for the crimes of their children, so why hold engineers responsible for what their robots do? But there is a world of difference between an artificial simulation of human will or intelligence, and the real thing.
LJ: That difference—between artificial simulation of the human and what you call the “real thing”—raises complex and competing critical, philosophical, and ethical questions, which you confront throughout New Laws.
FP: There is an enduring glamour in artificial intelligence—a sense that it will finally enable us to take the next step on some evolutionary trajectory, a second, transhuman nature better than the first. And this point of view has a distinguished pedigree. “If the artificial is not better than the natural, to what end are all the arts of life?” John Stuart Mill once asked. “To dig, to plough, to build, to wear clothes, are direct infringements of the injunction to follow nature.” So a contemporary follower of Mill might happily prophesy ever-deeper human-machine mergers into a cyborg singularity, where eventually robots supplant us entirely. The goal is to catalog all we could ever do, divide it into routines and subroutines, and then have AIs (perhaps integrated into a robot) first simulate it, and then improve upon it.
LJ: And you present the reality of being human against this.
FP: Yes. Because so much depends on how you define “improve.” I still see normative value in the real because we need some common substrate of experience and perception to have any useful conversation about values at all. For example, in law, what would it even mean to punish an AI, bot, or robot? Maybe you shut it off, or take away the assets it controls, but—what then? Maybe when it “scores” the value of future actions, it subtracts a thousand points from whatever future action sufficiently matches what got it punished. But is there regret, remorse, or even pain? No, because all these signifying emotions are embodied thought. The same problem would afflict an AI judge—it could mete out sentences, but it would not understand the impact of such punishments, and therefore would not be legitimate. Judges bear a heavy burden, and they should do so—they are changing lives, sometimes forever, and the only way we can accept that authority is if we know it is exercised by someone with real empathy and understanding of their duties.
Of course, there are stripped-down, behaviorist, and utilitarian perspectives on human cognition that see little difference between us and machines. On this account, a person really is just a calculative maximizer, Pac-Man incarnate, racking up as many pleasure points as possible while avoiding pain. But as Nietzsche once quipped, “Man does not strive for happiness; only the Englishman does that.” In other words, a calculus of pleasures and pains (rooted in classical English utilitarianism) is just one way of thinking about human intelligence and cognition, and to my mind it’s a shallow and compromised one. So much of life does not fit into the mold of a “cost-benefit analysis,” where we can calculate the consequences of our actions as AI might do. The future is often much better framed as self-narration, where we are called upon to imagine scenarios—both personally, and as members of organizations and communities. In that way, imagination is more real than AI—it’s a more authentic expression of what makes us human than the machine learning or deep learning now driving advanced computation.
LJ: But your concern about the real isn’t only metaphysical.
FP: No. It’s also an insistence on careful and correct use of words. Think, for instance, of ads for social or “caring” robots. The robots are doing something for children and the elderly, but is it real socializing or real care? Care is only care if it can be abandoned (otherwise it’s a programmed simulation of care). And it seems to me that interacting with a robot instead of with people could be a profoundly anti-social act (however much it may soothe or distract).
My final concern about the real revolves around solipsism and narcissism. A robot enthusiast might say, “Hey, my mechanical dog makes me feel better when it cuddles up to me when I get home, so what’s the problem? It makes me feel as good as a dog would, so it’s a dog.” From an efficiency perspective, the mechanical dog is a huge advance: it will never bite a stranger, incur huge vet bills, shed, or die. But I don’t think we want to recognize a person’s doting on their Aibo (a type of robot dog once marketed by Sony) as morally or ethically equivalent to pet ownership. The stakes clearly rise as we move from machine to animal, and they rise higher still when robots are offered as substitutes for people.
LJ: How do we ensure that those stakes are clear? You’re going up against some strong cultural currents in the direction of technocracy, posthumanism, and economism.
FP: All true. And we’ll certainly see more examples of researchers saying, “My AI can write poetry better than humans!” or companies marketing robots as replacements for nurses. But people still ultimately decide whether to recognize algorithmic literature as worthy of study and reflection, or whether to license robots as nurses, or any of the small intermediary steps that push us in the direction of a posthuman future. That’s the reassurance: the abiding power of individuals and institutions to shape the future of technology, rather than be shaped by it.
And just to be clear: New Laws of Robotics is not an anti-technology book. It’s about the proper path of technology’s development. Even amidst the wreckage of the present, when so much seems to be going so badly for so many, AI is creating great opportunities for more accurate medical diagnoses, a wider range of supplemental lessons for schoolchildren, and safer transport. All that is to be celebrated.
Policymakers need to stop worrying about AI taking humans’ jobs, and start supporting the institutions and flows of funds that value human labor. We can uphold a culture of maintenance over disruption, of complementing human beings rather than replacing them. The future of robotics can be inclusive and democratic, reflecting the efforts and hopes of all citizens. But that will require wise reflection on the proper nature and scope of human work and technological development—something I hope my book can contribute to.