
Ask an Expert about technology ethics: a response to Robot Poet

By Casey Fiesler

We’ve been fans of Casey Fiesler’s work for a long time – she wrote articles about technology ethics for How We Get To Next, and teaches the subject as an Assistant Professor at the University of Colorado Boulder. We were delighted when she agreed to write a response to Robot Poet, a beautiful story of human-computer interaction by Edmundo Paz-Soldán, a professor at Cornell. Artificial intelligence is much discussed and often reviled, and the ethics of using it in settings with humans inspires heated debate. Casey’s expertise really adds to the themes explored in the story. We call this format ‘Ask an Expert’: it’s like an interview, but with a particular focus on explaining a specific theme.

If you like Edmundo’s story and Casey’s interview, take a look at the comics we created for Nesta’s Centre for Collective Intelligence Design, which translated speculative technology-enabled scenarios from their Future of Minds and Machines report into a visual format. It’s the kind of thing we haven’t seen enough of in our work in media and communications (some of us are big comic geeks!). So if you’d like to chat about using comics to talk about artificial intelligence, machine learning, or anything fun like that (!), drop us a line.

1. A lot of your research and teaching is in the area of technology ethics. Could you explain what this is and how you think it’s explored in Robot Poet?

Many of our fears about technology, whether expressed in science fiction like this story or surfaced just by browsing today’s news, are rooted in malfunctions big and small. What if my phone sends my text to the wrong person? What if my online bank account isn’t secure enough? What if misinformation propagating across this social media platform contributes to a public health crisis? We want to make sure that technology does more good than harm – for individuals, groups, and society as a whole. Technology holds a great deal of power, as do the people who build it. This story explores one example of a malfunction that might do harm, and what a proper response to it might look like. On a larger scale, the people building and maintaining phones, banking systems, and social media platforms have to make decisions every day that could have huge positive or negative consequences – but it is sometimes difficult to know exactly what those consequences will be. Ethically, perhaps they should be thinking about possible futures in the same way that science fiction writers do.

2. At one point in the story, a character says ‘humanity is for humans.’ The theme of humanity vs inhumanity runs throughout the story on many levels. What implications do you think automatons have for humanity and our current social systems? Are we prepared for such large-scale change? 

Science fiction has given us tales of conscious, self-aware robots (both good and bad) for many, many years. Films like I, Robot and The Terminator have arguably pushed discussions of AI ethics too far into the future, simply because right now there are a lot of very pressing issues surrounding AI that need our attention before we start preparing for the robot war. However, these current issues have our own humanity at their core; for example, how do we surface and deal with potential large-scale societal harms like job loss due to automation or racial bias in AI decision-making?

3. Ethically speaking, how do you feel about AI being used in circumstances such as policing?

This story provides us with one potential future of AI: what if the police were robots? However, we don’t have to look to the far future for the implications of AI being used in law enforcement. While robot-police might not be on the horizon, one of the most pressing issues in technology ethics right now is law enforcement’s use of technologies like facial recognition, crime forecasting, and recidivism prediction. Unfortunately, there are significant known biases underlying many of these technologies. Like Maturana, they are imperfect – but those imperfections are often not acknowledged. We have to be extremely careful about any use of AI in high-stakes contexts like policing and medicine, because when the imperfection is bias, mistakes will systematically and disproportionately impact marginalized groups.
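To make that ‘disproportionate impact’ point concrete, here is a minimal sketch – entirely synthetic and hypothetical, not drawn from any real policing system and not part of Casey’s answer – of how a risk-scoring model whose scores are inflated for one group produces very different false-positive rates, even when everyone is judged against the same threshold:

```python
# A toy, entirely invented illustration (no real data or system): one risk
# threshold applied to two groups, where the model's scores are inflated
# for group B, e.g. because its training data over-policed that group.
import random

random.seed(0)

def synthetic_score(group):
    """Return (actually_reoffends, model_score) for one synthetic person."""
    reoffends = random.random() < 0.3            # same true rate in both groups
    score = random.gauss(0.5 if reoffends else 0.3, 0.1)
    if group == "B":
        score += 0.1                             # the baked-in bias
    return reoffends, score

THRESHOLD = 0.45                                 # one threshold for everyone

for group in ("A", "B"):
    people = [synthetic_score(group) for _ in range(10_000)]
    innocent = [score for reoffends, score in people if not reoffends]
    # False-positive rate: flagged "high risk" despite never reoffending
    flagged = sum(score >= THRESHOLD for score in innocent)
    print(f"Group {group}: {flagged / len(innocent):.1%} of non-reoffenders flagged")
```

Run as written, the flagged-but-innocent rate comes out several times higher for group B than for group A: the same kind of systematic, one-sided mistake described above, produced by nothing more than a small, hidden skew in the scores.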

4. One of the most beautiful parts of this story, we think, is the way language is described as something Maturana, the robot poet, couldn’t quite get his head around. It humanises him. What are your thoughts on the ethics of making robots more approachable so that they become more acceptable to humans?

There is an entire field of research called human-robot interaction that considers just these kinds of questions. Interestingly, we know from studies of the ‘uncanny valley’ that robots that look too human can actually evoke a negative emotional response. With this in mind, humans might actually be more comfortable around robots that are clearly robots. Though I also think that one of the most interesting lessons of this story is that the same thing that humanises Maturana also highlights his imperfections: ‘Sometimes he struggled to understand that one thing could mean another.’ The example above of AI bias in policing highlights the importance of knowing that a ‘robot’ (in this case, AI via algorithms) can also be flawed, particularly when it comes to its difficulty in dealing with context – and high-stakes decisions are another moment when it is important to know that robots make mistakes, too.

5. What steps do you think we need to take over the next 10 years to be in a position where the existence of automatons, as Paz-Soldán explores in the story, is possible? Would you want it to be possible?!

Like most technologies, robots have the potential to be both incredibly helpful and incredibly harmful to society. I think the right question here isn’t whether they should exist at all, but how they should be designed and how they should be used. Reading stories like this one will give us all ideas about the possibilities of the future – and we can think about where we want it to go, whether we’re the ones building the robots or the ones living with them.

Casey Fiesler is an assistant professor in Information Science (and Computer Science by courtesy) at the University of Colorado Boulder. She researches and teaches in the areas of technology ethics, internet law and policy, and online communities. Her work on research ethics for data science, ethics education in computing, and broadening participation in computing is supported by the National Science Foundation, and she is a recipient of the NSF CAREER Award. Also a public scholar, she is a frequent commentator and speaker on topics of technology ethics and policy, as well as women in STEM (including consulting with Mattel on their computing-related Barbies). Her research has been covered everywhere from The New York Times to Teen Vogue, but she’s most proud of her TikToks. She holds a PhD in Human-Centered Computing from Georgia Tech and a JD from Vanderbilt Law School.
