Artificial Intelligence is a social actor: Implications for the UX of AI tools
We’re in the midst of an AI wave, with rapid developments in artificial intelligence technology and even faster adoption of AI features and tools. AI has become a mainstream technology, integrated into many aspects of our daily lives.
It is an important time for anyone who designs these experiences to familiarize themselves with the theories of social responses to intelligent technology. Decades of human-computer interaction research show that intelligent technologies, like AI, are perceived as human-like and trigger social responses. This issue will brief you on the body of HCI research on social responses to intelligent technology and explore the implications this has for UX practitioners working on AI experiences.
Research shows that we respond socially to intelligent technologies
Have you ever used the words “please” or “thank you” when asking ChatGPT to do something? It seems strange that we would extend such social courtesies to a non-human, doesn’t it?
Clifford Nass, a prominent Human-Computer Interaction researcher, is widely recognized for his theories on social responses to technology. Although some of his studies date back almost 30 years, Nass’s research was forward-thinking, and it is even more applicable today than it was at the time of publication.
Through numerous experiments conducted in the 1990s and early 2000s, Nass demonstrated that users interact with computers on a social level, perceiving them as more human-like than the physical environment or less intelligent devices (such as a desk, bicycle, or clock), albeit not quite human.
Nass’s research showed that these social interactions are pervasive: we apply most social norms, including politeness, reciprocity, gendering, personality attribution, and notions of “self vs. other,” to our interactions with computers.
Here are six key points you should know from Nass’s work:
“Social processes” are unconscious mental patterns humans developed to effectively interact with each other.
We apply these same social processes to our interactions with computers.
Our social responses to computers happen automatically, without conscious thought.
The implication is not that we mistake computers for humans, but that computers evoke social responses.
Computers evoke social responses because they appear to have some agency, autonomy, or sentience. These characteristics make us view them as human-like.
Because we view computers as human-like social entities, all social psychology findings are relevant and applicable to human-computer interaction.
Nass’s “Computers are Social Actors” theory isn’t strictly limited to computers; this theory extends to other intelligent technologies[1]. For example, in research from HCI's sister field Human-Robot Interaction (HRI), we see that robots are perceived as human-like, and humans interact with them socially.
In short, we unconsciously and automatically use social processes when interacting with technology that sufficiently demonstrates agency, autonomy, or sentience. This includes computers, robots, and, of course, Artificial Intelligence.
4 implications for the UX of AI
So, we know that AI will evoke human users’ social responses. What does this mean for those who design AI technologies? Here are four initial implications:
1. Social responses to AI will bring a new dimension to studying user experience
Compared to a typical user interface, AI experiences demonstrate intelligence and autonomy, so they will elicit social responses from users and will be perceived (unconsciously) as human-like.
These social responses will be a key consideration for UX practitioners working on AI tools and features. Much of UX research focuses on usability, but practitioners will also need to develop an understanding of the social aspects of user-AI interactions and account for them in the design process in order to create effective user experiences. Further, we will need to develop specific research methods and approaches for understanding social responses to AI tools.
2. We can influence the degree of social response
Our social response is a function of the degree that a computer/robot/AI appears to be autonomous or sentient. This implies that practitioners can influence the level of social response that users have towards their AI experiences.
It's worth noting that any AI tool will prompt at least some level of social response, but the strength of that response can be affected. Practitioners can manipulate their design to enhance or reduce perceptions of autonomy to design for a greater or lesser social response, respectively. UX designers and researchers of AI tools will need to be knowledgeable about the factors that influence these perceptions.
3. Team-like dynamics will form
Another key implication is that AI, as a human-like social actor, can be perceived as a teammate. Research has shown that in collaborative, goal-oriented settings, robots/computers/AI take on teammate qualities in the eyes of their human counterparts.
An important thing to note is that these intelligent technologies are often seen as a sort of “weak social actor” or a lesser teammate. This lesser status introduces a host of questions:
Will humans trust their AI “teammates”?
Will they choose to accept suggestions, information, or output provided by AI?
Who will be blamed when something goes wrong?
To fully understand users’ attitudes and behaviors, UXRs may need to perform secondary and discovery research into these team dynamics that users form with the tools they use.
4. Our design choices will influence users’ social interactions with AI tools
Finally, design choices will play a crucial (but unconscious) role in shaping users’ social perceptions of, and subsequent interactions with, AI tools. Gender, for example, is a highly consequential design choice, and requires careful consideration. Design decisions around the appearance, sound, or other presenting features of AI systems can heavily influence perceptions including trust, autonomy, competency, and more.
Imagine you were designing an AI voice assistant.
What should it sound like?
Will it be gendered?
What will its tone, intonation, and accent be?
Each of these choices will impact your users' unconscious perceptions.
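One way for a team to keep these tradeoffs visible is to write the design choices down as explicit, reviewable parameters rather than leaving them implicit. The sketch below is a hypothetical illustration of that idea; the class and field names are assumptions for this example, not a real API or the article's method.

```python
from dataclasses import dataclass

# Hypothetical sketch: the voice-assistant design choices discussed above,
# captured as explicit parameters so each variant can be documented,
# reviewed, and compared in research studies. All names are illustrative.
@dataclass
class VoiceDesign:
    gender_presentation: str       # e.g. "neutral", "feminine", "masculine"
    tone: str                      # e.g. "warm", "formal", "playful"
    accent: str                    # e.g. a locale label like "en-US" or "en-IN"
    intonation_variability: float  # 0.0 (flat delivery) to 1.0 (highly expressive)

    def validate(self) -> None:
        # Guard against out-of-range expressiveness values.
        if not 0.0 <= self.intonation_variability <= 1.0:
            raise ValueError("intonation_variability must be in [0, 1]")

# Two candidate variants a team might compare in user research:
neutral_variant = VoiceDesign("neutral", "warm", "en-US", 0.4)
expressive_variant = VoiceDesign("feminine", "playful", "en-US", 0.8)
for variant in (neutral_variant, expressive_variant):
    variant.validate()
```

Making each choice an explicit field forces the team to decide it deliberately, and gives researchers concrete variants to test for their effects on users' perceptions.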
Designers of AI tools must be familiar with the tradeoffs involved in these decisions. Meanwhile, researchers must investigate the effects of design variants on users' perceptions and social responses. By understanding these factors, UXRs can create more effective, user-friendly AI tools that meet the needs of their target audiences.
The bottom line
AI technology is rapidly advancing and becoming integrated into our daily lives. As a result, it is important for practitioners designing AI experiences to understand that users interact with AI on a social level. Research shows that people respond socially to AI and unconsciously perceive it as human-like. These social dynamics create important design implications that need to be studied and carefully considered. Ultimately, the success of AI innovation and adoption will depend, in part, on how well we understand this unique element of Human-AI interactions.
One quick thing…
Volunteer for a 10-minute survey about UXR job interviews
Have you (1) recently interviewed for UX research roles, or (2) hired UX researchers? A collaborator and I are exploring the current state of UXR job interviews. The survey will remain open until May 7th.
We’ll be sharing our results later this summer.
Until next time…
[1] I know that definitions for these technologies vary and overlap (e.g., computers or robots can have artificial intelligence). For this article, I’ll be using these terms as follows. Computer: an electronic device that can perform various operations and store its processed data; includes a processing unit and peripheral input and output devices for user interaction. Robot: a physical machine that is programmed to perform a set of tasks, capable of working autonomously or semi-autonomously. Artificial Intelligence/AI: software-based technology that allows a machine, program, or computer system to perform tasks that would typically require a human (learning, problem-solving, image recognition, decision-making, NLP, etc.).