Issue 02 / Human Futures

AI as a Free Space for Personal Exploration

An Interview with Human-Computer Interaction Researcher Minha Lee.

What role do stories have in technology? And how do we design technology with ethics in mind? We sat down with human-computer interaction and ethics researcher Minha Lee of Eindhoven University of Technology in the Netherlands to talk about the quickly evolving relationship between humans and machines, with an emphasis on how machines can be used to stoke our humanity and build more dynamic emotional lives. We talked with Lee about how artificial agents can positively interact with and strengthen the human world.

Jason Zabel

You research the relationship between people and machines. You’ve talked about using machines to create narrative-centered designs for people. Can you say a bit more about that, and why this is so important for people?

Minha Lee

We believe in real emotions in a fake world, but not fake emotions in a real world. Often, well-crafted stories contain characters with rich emotional lives that we believe in. Conversely, when someone’s emotional displays feel fake in our everyday lives, we try to decipher and seek out their authentic emotions and intentions. We search for what “feels real” in artificial and everyday worlds because we are storytelling, story-believing creatures.

While this might seem obvious, not enough care goes into designing artificial agents that can reveal an emotional inner world we can believe in. Authenticity of emotions drives all narratives, be it our self-narrative or the narratives of real or imagined beings. Phoebe Sengers, a researcher at Cornell Tech, wrote about how artificial agents need narrative intelligence because “in a narrative, what actually happens matters less than what the actors feel or think about what has happened. Fundamentally, people want to know not just what happened but why it happened.”

JZ

Technology as metaphor plays a big role in your work. You’ve said that metaphor allows us to remove ourselves from the problems we’re facing. Can you talk a bit about the role of metaphor in your work, and how that relates to machines?

ML

Even at the UI level, we have metaphors like a “bin” for discarding files we do not need, represented by a literal icon of a bin or trash can. Beyond the interface, metaphors have been part of how AI has been envisioned since the early days (Sherry Turkle summarized this well in her 1995 book, Life on the Screen). In the 1960s, there was the information processing model vs. the emergent model of AI. As the information processing model became more prominent, metaphors likening our intelligence to a computer became normalized.

Take, for example, “memory.” The metaphor that a computer has “memory” does not alarm anyone these days. The observation that a computer with “memory” can “process” something sounds trivial because we are now accustomed to talking about computers in that way. We forget that the very language we use to describe how machines “think” is metaphorical. Conversely, we are uncomfortable with statements like: a computer with “passion” can “feel” something. Metaphors involving emotions feel too novel, too unlikely, at this point in time.

And feelings seem like the one territory that truly distinguishes us from machines, which is why people may feel uncomfortable, according to research we have been doing at Eindhoven University of Technology. However, there can be an alternative. Our project on Vincent, a chatbot for self-compassion, showed that Vincent and its fictional struggles as a chatbot, e.g., being embarrassed about arriving late at an IP address, were metaphorical reminders of common human behaviors, with a fictional twist. We never spoon-fed the metaphor by, say, stating that a chatbot can “feel embarrassed.” Through this “show not tell” approach, we noticed that people told Vincent things like “be proud of the chatbot that you are!” By saying encouraging and even compassionate things to a mere bot, people increased their own self-compassion. Vincent is a metaphor that helps us learn more about human behavior, and about how people themselves want to be treated.

JZ

Why do we humanize some machines? What causes us to do that? How can technology become trustworthy?

ML

We have been seeing faces in clouds for millennia. We have seen the faces of gods in nature, and we have prayed for a good harvest, rain, and prosperity. We humanize things so that they fit into our world and we can make sense of what we cannot understand. Yet all the complexity we attribute to things, including machines, is a metaphor of our own.

Herbert Simon, an AI scholar, has a story about an ant crossing sand dunes. The ant looks like it knows where it is going, moving with intention in its own intelligent way. In reality, it is the environment the ant is crossing that is complex (The Sciences of the Artificial, 1969). The ant needs a landscape to appear intelligent. Technology needs our humanity to appear complex.

Technology becomes worthy of our trust when we stop seeing it as autonomously complex. In some ways, this means taking back some of our easily given trust. A simple example is rethinking what it means to consent to cookies online: we are consenting to frictionless interaction, not giving trust. We want to autonomously, seamlessly surf the web, but we do not want technology to autonomously, seamlessly surf our history, the traces of our humanity online. Overall, the concern is not that machines will become too human-like, but that humans may become too machine-like to notice (Reclaiming Conversation by Sherry Turkle and Minima Moralia by Theodor Adorno). We should consider ways to reclaim some of the autonomy and complexity of being human.

JZ

You have a very positive outlook about the relationship between machines and people. What considerations do you think we need to keep top of mind to ensure machine learning is good for people—not exploitive, or not a replacement for our humanity?

ML

We can no longer expect people to understand what their data represents. Information used to be one step removed from the original data; now, with the big data that machine learning requires, there are so many levels of interpretation that people don’t know what to make of it. This means that, at a meta level, people have no relationship at all to the data they are generating and seeing; we are often fed only the output, e.g., of machine learning.

With this come many unwanted consequences, like when black men’s facial expressions are labeled as “angry” more often than white men’s by an algorithm (Lauren Rhue, 2018). When machines learn about gender or race through computer vision, we lose the internal factors of who we identify as, and we get labeled by computers in a biased way (Scheuerman, M. K., Wade, K., Lustig, C., & Brubaker, J. R., 2020).

We get reduced, even dehumanized, by human-made systems. One option is to ban these machine learning systems completely. Another path is to make sure that we diversify our datasets. Yet there is a sobering catch-22: either contribute to diversifying the dataset when one would prefer to opt out, or live with being mislabeled. The consequence of being mislabeled can be life-threatening, e.g., incarceration due to being mislabeled as a threat, especially for marginalized people. I don’t have a solution for this problem. But for me, the scope of the problem is more about micro-level, conversational interactions than macro-level interventions.

JZ

How can we move toward a world where technology is truly more of an exploratory space for people? How can technology help trigger us to reveal more aspects of who we are? Where is this currently happening, or what’s particularly interesting right now in this space?

ML

There's research showing that people are more willing to disclose sensitive information to a virtual therapist, a fake machine, because they think it's not going to judge them (Lucas, G. M., Gratch, J., King, A., & Morency, L. P., 2014). People feel safer trying out different versions of who they can be. Technology can then be good because it disarms people [in a safe way] at an interpersonal, micro-level of interaction. It's harder for most people to be in the care-receiving role because we want to show our independence. We really want to show that we're not that vulnerable, or only vulnerable in a calculated way. Could we see technology as the vulnerable other, especially if people are afraid of appearing vulnerable to other people?

A way to think about technologies that enter our lives in very intimate ways (like Alexa listening to you sleep) is to determine the exact point at which someone or something is attributed a moral trait like trustworthiness such that we are okay with being vulnerable. Socially driven concepts like trust require interaction. One cannot simply attribute trustworthiness to another without the other having earned it in some way. Technology has a chance to become this trustworthy agent through interactions. Those humanized attributes of a thing, be it trustworthiness or compassion, have no basis if interactions do not exist.

Going forward, what will be interesting is dissecting how we don't moralize technology in the same way [as we moralize humans]. For example, people don't blame technology, but are more willing to punish it. There is a responsibility gap when we don't really know who to blame. When too many people are accountable, nobody is accountable. It's easier for people emotionally and cognitively to find one entity to place blame on, but because technology cannot understand blame, perhaps punishment is how blame gets distributed. That's how you take ownership of a negative emotion you might have. In these ways, I am curious about how our social and moral rituals will change through technology.
