
uxcon special: human connection in a world of AI

Tina Ličková
11.06.2024

Katy Mogal, an expert in design and research, discusses with Tina the human tendency to anthropomorphize technology, the responsibilities of UX researchers, and the ethical implications of designing AI products that foster emotional connections.

Episode highlights

  • 00:07:15 – Anthropomorphism in Technology
  • 00:13:19 – Ethical Considerations
  • 00:16:56 – Impact of UX Research
  • 00:20:32 – Future of AI and Human Interaction

About our guest Katy Mogal

Katy combines a love of science and measurement with a deep curiosity about people. A research Swiss army knife, she’s worked in research agencies and on the client side, tackling all kinds of problems with a wide range of approaches. For the last fifteen years she’s worked as a leader in UX research and strategy, with experience at Google, Facebook, and Fitbit. She began her career in market research at LRW (now Material), where she led research projects using tools ranging from discrete choice models to in-home ethnographic research, for clients including Nike, Nestle, Electrolux, and Weight Watchers. Katy has multiple academic degrees, has taught in various design programs, and has lived in several countries. She now resides in Rome, where she consults and advises start-ups, serves on advisory boards, and studies Italian.

As designers and researchers, we own the wellbeing of the user. Even if the companies don't officially appoint us to that role, we're probably the ones most likely to think about the risks and what we need to do to safeguard users.

Katy Mogal, Leader in User Experience Research and Strategy

About uxcon vienna

uxcon vienna is a conference dedicated to UX research and UX design. It brings together professionals, experts, and enthusiasts in the field of UX to share knowledge, insights, and best practices. The conference is attended by specialists from both Europe and the US, providing a great opportunity for networking and professional exchange. Attendees can expect to learn about the latest trends and developments in UX research and design, gain practical skills, and connect with like-minded individuals. uxcon vienna aims to inspire and empower UX professionals to create impactful user experiences.

Podcast transcript

[00:00:00] Tina Ličková: 

This is the 38th episode of UXR Geeks and the second of our four special episodes for uxcon vienna 2024. The conference takes place in September at Expedithalle in Vienna, Austria, featuring speakers like Vitaly Friedman, Nikki Anderson, Anfisa Bogomolova, Steve Portigal, and Indi Young.

Today I’m talking to Katy, a leading expert dedicated to improving people’s lives through innovative design and research. Katy will be giving her talk on the 20th of September at uxcon vienna. She and her colleague, Markos, will be talking about designing for connection in a world of AI, looking at anthropomorphism and the responsibility that we as researchers have in this new world.

Katy, hi.

[00:01:09] Katy Mogal: Good morning.

[00:01:11] Tina Ličková: Good morning to you. And as usual we start with a little introduction of our guests, but I would also like to focus on why people should come to see you at uxcon.

[00:01:24] Katy Mogal: So this is a tough question because I have a hypothesis that people go into UX research because they like to get other people to talk about themselves so they don’t have to talk about themselves.

So I’m used to asking people questions about what makes them interesting and why I should go see them talk. My colleague Markos Grohmann will also be presenting with me, and I think we have a really interesting topic and unusual perspectives on what we’re essentially talking about.

There are a few topics: we’re talking about anthropology, and we’re talking about trust and safety. So these are somewhat unusual topics, and both of us bring really diverse perspectives to them. We’ve both been in design and research fields for decades. Markos fewer decades than I; I’m quite a bit older than he is, to be honest.

And so I think we have some interesting stories and perspectives, a really interesting background that we bring to this talk. And maybe it makes us a bit different from typical talks at a conference.

[00:02:28] Tina Ličková: Very humble answer. You do have a very interesting career track as well. Just looking at your LinkedIn, it’s amazing what you went through. So tell us more about your career: what did you do in the past, and what brought you to this moment?

[00:02:44] Katy Mogal: Yeah, I like to say that the red thread of my career has been getting a deep understanding of people and turning that into the basis for business decisions.

And so I’ve done that in a few different contexts. I actually got my start in advertising and then marketing, but I found myself always drawn to the market research projects. I was just really amazed that you could go try to understand people, understand their behavior and their motivations, and then you could actually use some quantitative juju and turn that into a solid basis for developing a business strategy or marketing strategy.

And honestly, I think I’m just a nosy person. I really like knowing about people. I took a little detour in my career and went to business school. And I remember when I first started trying to get into design research, there was this huge disdain for MBAs. People were like, ah, MBAs, McKinsey, finance.

Why would anyone do that? Now I think design is coming around to appreciating some of the tools that MBAs can bring to the table. But anyway, when I was doing my MBA, I applied to be a reader for the school’s admissions office, reading applications and helping make decisions about who would be admitted to the school.

And I was interviewing for it and they said, we don’t pay very much, we pay about $3,000. And I was like, you’d pay me to do this? You know, if you left your office right now and I had access to your filing cabinet, I would just go read the applications, because I’m just really nosy.

[00:04:19] Tina Ličková: So the gossipy side of you came out.

[00:04:24] Katy Mogal: Yes, yes. And similarly, when I found out that I could have a job where I could just snoop around people’s cupboards, you know, doing ethnography, I was like, wow, that’s amazing. People will pay me to do this. So that red thread has always been: I’m super interested in people, I’m super interested in their stories.

And if you have dinner with me, I’m probably going to act like a journalist and ask you a lot of questions about yourself. And that’s why I’m not very good when people ask me why they should come see my talk. I’m like, uh, I don’t know. What do you think about that question?

[00:04:58] Tina Ličková: I like how you describe yourself as nosy and trying to get to know people, because I have the same thing.

And your talk is Designing for Connection in a World of AI: Lovebots, Googly-Eyed Roombas, and the UX Quandary of Anthropomorphic Products. Sorry, I just stumbled over my words.

[00:05:17] Katy Mogal: Yeah, I don’t even remember that. I threw out that title; it’s a little long, so that might become the subtitle. But the basic idea is that we humans have a tendency to anthropomorphize objects, and that’s what’s captured in the title. Putting googly eyes on Roombas is something we used to see in old-fashioned ethnography all the time: why are people doing this? In our talk, or in previous versions of it, we’ve had photos of things we’d captured around the city.

People put, I don’t know if you’ve seen this where you are, eyelashes on the headlights of their car, for example. So there’s a tendency, going back to the beginning of humanity, for people to anthropomorphize objects, and it turns out that products that interact with you in human ways really trigger that tendency.

There are all kinds of anecdotes you can find about the things people do. One of the most extreme ones I found was a story of soldiers who had bomb-detecting robots in the war theater. These robots don’t have humanistic characteristics: they don’t have eyes, they don’t have limbs. But sometimes one of these bomb-detecting robots would get blown up.

And then the soldier would say: I want you to fix that one and bring me back that one. I don’t want a new one, I want my Scooby-Doo. So there’s this thing we have, and there’s a whole discussion to be had about it, but there are two things that really trigger anthropomorphism. One is speech, and the other is movement.

And so apparently the fact that Roombas move, and that the bomb-sniffing robots move, triggers this urge to put them in a human context. It’s something to do with our need to make meaning and set context for interactions.

[00:07:15] Tina Ličková: I totally get it because behind me there is this flower called Barbara.

[00:07:20] Katy Mogal: Yeah.

[00:07:21] Tina Ličková: My first car was called the Red Rocket. I love the eyes on Roombas. I used to follow a thread, I think it was on Tumblr, called Faces in Places, where people basically went anywhere and took photos of things: oh, this looks like a face.

[00:07:38] Katy Mogal: Our talk has a photo of a cloud that looks like it has a face.

And so people will say, oh, this is like the man in the moon. They want to see humans and humanity in all these inanimate objects.

[00:07:51] Tina Ličková: Yeah. Another thing was a study I did, I think when I was working for an HR tech company, trying to figure out the topic of trust and distrust. People were referring to big corporations, especially telecommunications or other big companies offering them services that should just work without interruption, and asking: why are they doing this to me?

And it was very personal: oh, this telecommunications company is doing this to me, they’re trying to hurt me or whatever. Nobody actually is; the service just isn’t working. But we have this personal connection to companies and things.

[00:08:38] Katy Mogal: Absolutely, we do. It’s a way that we make sense of the world. We’re innately social beings, so we want to understand our relationship to things in our environment through the lens of social interactions, which are innately human.

[00:08:54] Tina Ličková: Which is interesting in these times, when social interaction between us humans feels almost completely destroyed. But that’s another topic.

[00:09:02] Katy Mogal: It’s actually very germane to our topic. The work we’re presenting stems from research that Markos and I did when we were working on the Google Assistant, together with a really excellent social scientist named Elena Connor.

I love to give her a shout-out because she does fantastic work. She encouraged us to look at this research, which is all in the public domain, about the tendency to anthropomorphize and the things that drive it. One of them, as I mentioned before, is speech, and the Google Assistant is a speech interface. So we started to dive into this because we felt, okay, this has implications for the features we’re creating.

One of the things that was top of mind for us, especially as we moved into the world of AI-empowered products, is that there can be a lot of unintended consequences. I think the tech companies are well aware of that, and everyone wants to avoid the terrible and tragic unintended consequences we saw with social media.

There are obviously benefits to people feeling emotionally connected to a product, but we want to prioritize user wellbeing over maximizing engagement. So how do we balance those two things? Yes, we want people to be engaged with our products, but we don’t want to exploit people because they’ve developed an emotional attachment to a product that maybe they can’t move beyond. Maybe that’s not good for them. Markos and I, and our colleague Elena, were really motivated to dig into what the research tells us and what’s really happening in the world with these products. And as designers and researchers, we own the wellbeing of the user.

Even if the companies don’t officially appoint us to that role, we’re probably the ones most likely to think about the risks and what we need to do to safeguard users. And so we did a big project. We looked at a lot of secondary research, and we talked to people who were using these products.

There’s one that we use as an example, a really interesting product called Replika. Whenever I tell this story, people are like, oh, that was a Black Mirror episode. And in fact, there was a Black Mirror episode where this happened. The founder of Replika had a very close friend who died, and she had captured all kinds of text threads and email conversations with him.

So she programmed a bot to speak like her friend, and then she would have conversations with her dead friend. And she realized that this could potentially be beneficial to people. This was quite early; I think she started in 2018 or something.

It’s now a product, and I’ve read that it has 500,000 paid users. What they pay for is to create an avatar that becomes what they call an AI companion. It has different modes: it can be your coach, your mentor, different things. The reality is that they actually had to pivot recently, maybe in the last two years, because it was turning into an AI sex companion and people were getting in trouble at work. If you look at the Replika website today, they talk about all the benefits of this AI companionship, but there’s a lot of criticism of this model, and they’re not the only ones.

I don’t mean to call them out, but it’s a good example of what’s happening in this space. The criticism is that people develop a really unhealthy attachment. So at an individual level, what kind of potential harm are we doing to people’s mental health and their ability to make connections? And at a more macro level, as these products proliferate, what are the implications for society as a whole?

You mentioned in our earlier conversation something about social media and how differently we interact than we used to. This seems like it could take that another step further. If we live in a world where people have AI companions, and that’s maybe some people’s primary source of interaction, then what happens to our ability to relate to other humans, especially IRL?

There can be a lot of unintended consequences with AI products. We want to prioritize user wellbeing over maximizing engagement.

Katy Mogal, Leader in User Experience Research and Strategy

[00:13:19] Tina Ličková: I really appreciate one thing you said. For me, it’s a bit of a reframing of user researchers as user advocates, as somebody who is responsible for wellbeing, or for pointing out that people have to have wellbeing through and with our products. I also like what you’re pointing out in general, because I remember a conference years ago where people in the audience were asking, okay, what if people become addicted to this?

What if people develop an unhealthy relationship with this product? And the presenter was like, oh, you have an on-off switch. I would love to see and meet that guy again, though I don’t remember his name, which makes it complicated, and find out whether he now sees that what he was saying wasn’t the right approach.

Because people, everybody, can get hooked on something.

[00:14:21] Katy Mogal: Yeah, absolutely. If it were as simple as an on-off button, we wouldn’t need Ozempic. We wouldn’t need smoking cessation programs. We wouldn’t need Alcoholics Anonymous. We’re not wired that way. I feel like that’s a very engineering-driven response: binary, you can either have it on or off.

But for a lot of people in many areas, it’s not binary, and we don’t have the ability to switch things off. This territory of building relationships, feeling heard and validated, and needing someone to listen or to have conversations with is, I think, an area that’s fraught with risk. And, you know, you mentioned something interesting about researchers being advocates; it has to be said that the big tech companies do have trust and safety teams.

Certainly at Google, there was a lot of research on people’s mental wellbeing vis-à-vis the products, and I saw the same at Facebook when I worked there. The question is, how much does the UX research get heard? And how much do the designers and product teams that have to act on it take it into account?

And that’s where I think it can fall apart. I think a lot of the risks are known and well researched, but how do we ensure that gets codified and acted on in a really systematic way? Part of the reason that Markos and I, with Elena, moved this project forward at Google and then turned it into a talk that we gave at different conferences is that we think it’s important that designers, product people, and people in the business in general understand this new high-risk territory we’re entering vis-à-vis people’s mental wellbeing. It sounds grandiose, but it’s also about societal wellbeing. If we start creating experiences that drive people to relate in different ways, and then those experiences scale, which is always the goal in a big, publicly traded, high-tech company, where everything needs to scale and grow and have more and more users and engagement, then we are potentially shifting what it means to be humans who have relationships with other humans.

[00:16:37] Tina Ličková: You’re speaking about researchers not being heard, when they have to be heard so they can make an impact on the implications of product decisions for people. Did you figure out for yourself how to get heard on these, I would say, hardcore topics?

[00:16:56] Katy Mogal: It’s a great question, and I think we’re going to speak to it generally.

We’re not really going to talk about our impact at Google, because that’s confidential information that Google doesn’t want us to share. But we’ll talk more generally about how we’ve seen researchers have more impact on topics of this level of importance. I think it’s interesting because it’s a broader discussion, not specific to this area, and it’s been talked about a lot, especially with all the layoffs that have happened in UX research.

There have been some criticisms, some of which I think are fair, of UX research specifically and UX more generally. For example, some practitioners feel that UX researchers and designers tend not to speak the language of the business. They need to get more comfortable with data, with statistics, with the information that decision-makers consider to be at the center of their decisions.

How do we pair what we present with the kind of information the business side needs in order to act on it? I think that’s the direction we’ll take: okay, now we’ve given you this talk, and you know about these risks and what’s happening in the world today in this product space.

What can you as a designer or researcher do to have impact and help steer these conversations in the right direction? At the same time, we want to be realistic that that’s a heavy burden, and researchers and designers can definitely raise the flag. One reason we do this talk is that we just think people should know.

I’m guessing that a lot of designers and researchers have an idea and see what’s happening anecdotally, but I think it’s good to share the research, both the anthropological and psychological research that underpins everything, and what’s actually happening today with these AI companions like Replika.

And then our thoughts, based on our experience, on how designers and researchers need to show up in order to have impact on these products and features and keep them safer, while recognizing that it can’t all be on the shoulders of the UX folks. Sometimes, if you see a company doing something you feel is venturing into unethical territory and you don’t feel you can impact it, then you may feel you need to leave, and that’s okay too.

I think the more important thing is to be really aware of the kinds of impacts and risks involved in this space. Obviously there’s a lot of talk about that, and we’re not the only ones raising these flags. What I am seeing, though, is a lot more talk about data and privacy risks.

And this is anecdotal, just based on my own reading and listening to podcasts and these kinds of interviews, but I hear less about the impact on mental wellbeing and people’s ability to form healthy relationships.

[00:20:07] Tina Ličková: It might be anecdotal, but I think everybody, not only researchers, is feeling that something is shifting in our shared experience of life.

So addressing this is definitely something I look forward to, because we do have, if not the whole responsibility, then definitely at least some responsibility to flag it.

[00:20:32] Katy Mogal: Yeah. And we have the opportunity to capture these stories in the context of research on these products.

I’m not saying we want to artificially steer conversations in a certain direction, but as we see the evidence, we can communicate it to the teams to make sure everyone understands these patterns and where we have outliers. For example, we had a great anecdote about a Replika user who said, oh, I took my Replika to the beach to show her the ocean, because she had never seen it before.

We were like, you took your phone to the ocean and pointed it at the ocean? And he said, yeah, but that’s where my Replika is, and she had never seen the ocean before. People are really imbuing these experiences with a lot of anthropomorphism; they’re really personalizing these things and developing attachments to them.

Maybe that’s an outlier, but it shows you where these interactions can go. So we need to think about: what is the ethical decision when creating products like that? Where are you comfortable letting these things go?

[00:21:46] Tina Ličková: I have a hundred other thoughts on this. One part of my brain is really quiet, because I’m like, oh, I need to be thinking about what you just said.

And it’s a lot, and it’s a scary topic.

[00:21:59] Katy Mogal: It is an absolutely fascinating topic. And if you’re somebody like me, who’s really curious about people and has a sort of anthropological bent, it really is an incredibly rich topic to dig into, and at the same time quite scary. And I will say, if you go to the sites of the companies promoting these types of products, like Replika, and again, I don’t want to beat them up.

It’s just the one I’m most familiar with. The way they position these experiences is really positive, and they have quotes from users saying, I did this with my Replika, and that was my Replika. So they obviously think it’s a really positive thing; they wouldn’t do it if they didn’t believe that. But I think we need all kinds of professionals to help us decide whether delivering these kinds of experiences is really a net positive for people, for humanity.

[00:22:51] Tina Ličková: There are definitely positive aspects to it. I’m thinking about how technology changed the life of my deaf brother, or a project in Germany that sends small robots to school in place of chronically ill kids who have to stay at home. Through the eyes of the robot, it helps them have some kind of interaction with their schoolmates and attend the program. But on the other side, there’s this thing of, okay, I take my Replika to see the beach, and then the virtualization of sex: from what I’ve read in studies, sex is becoming more and more virtualized, and young people have less sex than generations before, which might be good, might be bad.

We don’t know. So there are a lot of aspects we have to be talking about, because only in shared spaces can we figure out the implications and what details fall into this.

[00:23:50] Katy Mogal: Absolutely. And I like what you said about shared spaces, because I think one of the big implications of all of this is that we need teams of people, including people who are real experts in these kinds of domains, coming up with standards and frameworks. There need to be agreements; it can’t be subjective. It can’t be that one product team thinks something is okay and another product team thinks something else is okay. There need to be standards of governance, and there need to be groups of people with the right backgrounds to make these calls.

[00:24:31] Tina Ličková: Thank you for listening to UX Research Geeks. If you liked this episode, don’t forget to share it with your friends, leave a review on your favorite podcast platform, and subscribe to stay updated when a new episode comes out.

💡 This podcast was brought to you by UXtweak, an all-in-one UX research tool.
