University Study Examines Workplace Trust Between AI and Humans

From asking Siri or Alexa for help to buying things from Amazon, artificial intelligence is an increasingly common assistant in our lives. But what if it played a bigger role? A new Florida Tech study will examine workers who use AI as a teammate rather than just a tool.

Three Florida Tech faculty members – Meredith Carroll, professor of aviation human factors; Amanda Thayer, assistant professor of industrial and organizational (I/O) psychology; and Jessica Wildman, associate professor of psychology – begin work this month on “Trust Dynamics in Heterogeneous Human-Agent Teams: Applying Multilevel and Unobtrusive Perspectives.” Funded by a three-year, $650,000 grant from the U.S. Air Force Office of Scientific Research’s Trust and Influence program (award number FA9550-21-1-0294), the study will examine how workers trust and collaborate with AI-enhanced systems, with the goal of helping workers interact more effectively with AI on automated tasks.

When looking at an organizational workforce, the researchers noted that one often-overlooked factor is the importance of automated systems and workers’ interactions with them, which typically allow tasks to be completed more efficiently. For Thayer, understanding how humans and machines interact is becoming increasingly important as more fields see the use of automation and machine learning grow.

“On the I/O side, there’s a good amount of research now that’s focusing on team composition, or how to compose and build effective teams,” Thayer said. “What has been missing has largely been thinking about agents, or computers or machines, as teammates.”

This study will examine trust violations between humans and AI agents and what causes them. Trust violations can lead to inefficiencies, such as a human checking the AI system multiple times or completing the task manually. Organizational leaders often want to know when a team is having a trust problem, so part of the research focuses on figuring out what it looks like when a team of humans and agents trusts one another – or doesn’t. If the humans repeatedly double-check the agents or take over and do the agents’ tasks for them, for example, that could indicate the team has a trust problem.

Though the Air Force study is in its preliminary stages, the researchers are looking at previous and ongoing work to further understand human-AI interaction. One study underway at Florida Tech’s ATLAS Lab looks at a task that uses unmanned aerial systems, or drones, to monitor military routes for enemies, Carroll said. The human operators can have the drones simply take pictures; have the drones also make a recommendation, with the operator deciding how to use that information; or have the drones decide, with the operator retaining veto power.

Wildman pointed to past research and thinking on what a team of a few humans and a few automated systems looks like as a social unit, as well as how the behavior of one machine affects humans’ attitudes toward other machines.

“Imagine one agent makes a mistake, but the other ones did not,” Wildman said. “Do I now not trust the entire set of agents? Do I only not trust that one agent that made a mistake? Because in a human-human interaction, there is research that looks at transference, and if you’re really similar to another kind of person and I don’t have anything else to go off of, I might assume you’re likely to act like a person that’s like you.”

In the first year, the research team will design a theoretical, multilevel dynamic framework of trust in teams of humans and AI. The researchers will also review recent progress in the field to inform development of a theoretical model that will be validated in later experimentation. In the second and third years, the team will conduct a series of experiments designed to validate aspects of the model, such as the influence of different types of trust violations and repair strategies in teams, and how various compilation patterns of these events across teammates affect trust.

Further understanding the relationship between humans and AI may also give those in the commercial sector insight into how to get technology and humans working better as a team.

“I think we’re going to see more and more autonomy. The AI we use in our everyday lives may not seem like a teammate, but it is all around us and we are collaborating with it,” Carroll said.

* Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force.
