A competency-aware multi-agent framework for human-machine teams in adversarial environments
Future combat teams operating in adversarial environments will rely on trusted autonomous systems to achieve mission objectives such as identifying, classifying, locating, and suppressing threats while ensuring the safety and survival of team members. To achieve this, humans and machines will work in teams, exploiting the unique potential of each team member to complete tasks with competing objectives (for example, localising and identifying a threat while minimising detection or ensuring safe egress). Such complex planning tasks cannot feasibly be solved by a single decision maker (e.g. a human team lead); agents must coordinate their behaviour to achieve the required mission outcomes, which in turn relies on an understanding of the capabilities of other team members. Trust between team members, and up the command chain, is supported by decisions and actions that are explainable relative to mission objectives, particularly when teams include autonomous machines. In a multi-objective, multi-agent, human-machine planning problem, understanding and explaining how agent actions affect goal attainment, and how actions are interdependent between agents, will help command personnel to effectively plan, execute, and evaluate missions in complex environments, and will support the uptake of trusted autonomous systems in defence teams.
Funding Agency/Company: DSTG, $151,242.56