A new trust survey instrument that we are in the process of validating
Research on judgment and decision making has reliably shown that humans use heuristic strategies when faced with stressful decision-making tasks and complex environments. These heuristic strategies are a common paradigm in expert decision making and often help experienced operators make fast, accurate decisions by ignoring part of the available information.
Decision support systems (DSS) are valuable tools for information management and decision-making guidance, and are vital in decision environments that are inherently dynamic or that impose time pressure on the operator. In this work, we detail the application and effectiveness of several decision support techniques in a time-limited decision environment.
Human-robot interaction and game theory have developed distinct theories of trust for over three decades in relative isolation from one another. Human-robot interaction has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions.
One of the greatest challenges to measuring human-robot trust is the sheer proliferation of constructs, models, and available questionnaires, with little to no validation for the majority. This work identified the most frequently cited human-automation trust questionnaires, pinpointing ten validated studies spanning 201 questions. From these, we determined nine distinct common constructs that form the dimensions and antecedents of human-robot trust.
Effective human-aware robots should anticipate their user’s intentions. During hand-eye coordination tasks, gaze often precedes hand motion and can serve as a powerful predictor for intent. However, cooperative tasks where a semi-autonomous robot serves as an extension of the human hand have rarely been studied in the context of hand-eye coordination. We hypothesize that accounting for anticipatory eye movements in addition to the movements of the robot will improve intent estimation.
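One way to picture how gaze could sharpen intent estimation is a recursive Bayesian update over candidate goals, where each new gaze fixation and each snapshot of end-effector motion reweights the belief. The sketch below is purely illustrative: the goal positions, likelihood models, and parameters are assumptions, not the method used in this work.

```python
import numpy as np

# Toy recursive Bayesian intent estimator fusing gaze with motion.
# GOALS, sigma, and kappa are illustrative assumptions.
GOALS = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # candidate targets

def gaze_likelihood(gaze_point, goals, sigma=0.15):
    """P(gaze | goal): fixations tend to land near the intended target."""
    d2 = np.sum((goals - gaze_point) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma**2))

def motion_likelihood(position, velocity, goals, kappa=4.0):
    """P(motion | goal): velocity tends to point toward the intended target."""
    to_goal = goals - position
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True)
    v = velocity / (np.linalg.norm(velocity) + 1e-9)
    return np.exp(kappa * to_goal @ v)  # heading-alignment likelihood

def update(belief, gaze_point, position, velocity):
    """One Bayes step: multiply prior by both likelihoods, renormalize."""
    belief = belief * gaze_likelihood(gaze_point, GOALS) \
                    * motion_likelihood(position, velocity, GOALS)
    return belief / belief.sum()

belief = np.ones(len(GOALS)) / len(GOALS)  # uniform prior over goals
# Gaze fixates near goal 1 while the hand begins moving rightward:
belief = update(belief, gaze_point=np.array([0.95, 0.05]),
                position=np.array([0.2, 0.1]), velocity=np.array([1.0, 0.0]))
print(belief.argmax())  # → 1 (the anticipatory fixation dominates the belief)
```

Because gaze arrives before the hand has moved far, the gaze term often shifts the belief toward the correct goal earlier than motion cues alone would.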
One challenge of physical human-robot interaction has been modelling tightly coupled human-machine systems in order to design effective controllers for them. The objective of this research is to learn human operators' performance characteristics from surface electromyography (sEMG) measurements to predict their intentions during task operations.
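A common first step when mapping sEMG to operator intent is extracting windowed amplitude features such as RMS, which a downstream classifier can then use. The sketch below is a generic illustration under assumed window sizes and a synthetic signal, not the study's actual pipeline.

```python
import numpy as np

def rms_features(emg, window=200, step=100):
    """Slide a window over a 1-D sEMG channel and compute RMS amplitude."""
    starts = range(0, len(emg) - window + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s + window] ** 2)) for s in starts])

# Synthetic stand-in signal: quiet rest followed by a strong contraction.
rng = np.random.default_rng(0)
rest = 0.05 * rng.standard_normal(1000)        # low-activation segment
contraction = 0.5 * rng.standard_normal(1000)  # high-activation segment
feats = rms_features(np.concatenate([rest, contraction]))
# RMS rises sharply once the contraction begins, giving a classifier a
# usable signal of changing muscle activation, and hence changing intent.
```

In practice such features would be computed per channel and fed, possibly alongside gaze and scene features, into an intent recognizer.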
A summary of research on using sEMG, gaze, and scene data to dynamically recognize current intent during a task and predict future ones.
Trust in human-automation interaction is increasingly imperative as AI and robots become ubiquitous at home, school, and work. Interdependence theory allows for the identification of one-on-one interactions that require trust by analyzing the structure of the potential outcomes. This paper synthesizes multiple, formerly disparate research approaches by extending interdependence theory to create a unified framework for outcome-based trust in human-automation interaction.
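The core idea of identifying trust-requiring interactions from outcome structure can be illustrated with a toy check: an interaction puts trust at stake only when relying on the other agent creates both upside and vulnerability relative to acting alone. The payoff values and function below are hypothetical illustrations, not the paper's formal framework.

```python
# Toy outcome-structure check in the spirit of interdependence theory.
# All payoff values are assumed for illustration.
NO_TRUST = 0.0          # trustor's payoff when acting alone
TRUST_HONORED = 1.0     # payoff if the automation behaves as hoped
TRUST_BETRAYED = -1.0   # payoff if it does not

def requires_trust(no_trust, honored, betrayed):
    """Trust is at stake only if relying creates both upside and vulnerability."""
    return honored > no_trust and betrayed < no_trust

print(requires_trust(NO_TRUST, TRUST_HONORED, TRUST_BETRAYED))  # → True
```

If the worst-case outcome of relying is no worse than acting alone, the interaction involves no vulnerability, so by this outcome-based view no trust is required.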
With self-driving cars making their way onto our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they will be perceived and whether they will be trusted will likely have a major effect on traffic flow and vehicular safety.
Cognitive Engineering Center (CEC), Georgia Institute of Technology, 270 Ferst Drive, Atlanta, GA 30332-0150