A toddler with a Sharpie and a blank wall is a parent’s nightmare. It’s hard to prevent bad choices when the child can’t explain the reasoning behind them.
Artificial intelligence presents a similar challenge. Much of today’s artificial intelligence simply takes in information and pushes out results, but users of software that manages massive amounts of data need to understand exactly how those systems reach a decision.
To trust increasingly sophisticated AI systems and make them more dependable, the government, industry and academia are studying how such machines make decisions. Raytheon is funding an internal research project that might provide some answers.
“People are hesitant to depend on AI for critical decision-making because there’s no way it can explain the ‘why,’” said Raytheon engineering fellow Gabriel Comi. “My team’s research is providing a first step toward a trusted solution.”
Comi is working on a concept called “explainable artificial intelligence,” using software he calls “the AI whisperer.” The code is written to interrogate another computer program systematically and thoroughly. Using the subject software’s responses, the AI whisperer maps which pieces of the input were most relevant to producing the output. Like any researcher, Comi spends much of his time trying multiple configurations to see which ones work best.
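The article does not describe how the AI whisperer is built, but the behavior Comi outlines, repeatedly querying a model and mapping which inputs most influence its output, resembles perturbation-based attribution. The sketch below is purely illustrative: the `predict` interface and every name in it are assumptions for the sake of example, not Comi’s actual code.

```python
import numpy as np

def perturbation_attribution(predict, x, n_samples=500, noise=0.1, seed=0):
    """Estimate which input features most influence a model's output
    by repeatedly perturbing the input and measuring the change.

    predict   : callable mapping a 1-D feature vector to a scalar score
    x         : the input being explained (1-D numpy array)
    n_samples : number of perturbed queries sent to the model
    noise     : standard deviation of the Gaussian perturbations
    """
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    scores = np.zeros_like(x, dtype=float)

    for _ in range(n_samples):
        # Perturb one randomly chosen feature and re-query the model.
        i = rng.integers(len(x))
        x_perturbed = x.copy()
        x_perturbed[i] += rng.normal(0.0, noise)
        # A large change in output means feature i mattered for this input.
        scores[i] += abs(predict(x_perturbed) - baseline)

    # Normalize so the attributions sum to 1 (guarding against divide-by-zero).
    total = scores.sum()
    return scores / total if total > 0 else scores


if __name__ == "__main__":
    # Toy "black box": a weighted sum in which feature 2 dominates.
    weights = np.array([0.1, 0.2, 5.0, 0.05])
    black_box = lambda v: float(weights @ v)

    x = np.array([1.0, 1.0, 1.0, 1.0])
    print(perturbation_attribution(black_box, x))
    # Expect the largest attribution on index 2.
```

The toy example at the bottom shows the idea: the probe never looks inside the model, it only watches how the output moves when each input is nudged, which is one common way to surface the “why” behind a black-box answer.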
Comi’s research is designed to ensure human operators get the most out of artificial intelligence technology.
“The goal is to allow people to work more efficiently and faster than our adversary. It is not about abdicating our authority,” said Comi. “As information becomes the dominant currency in conflict and the quantity of data grows exponentially, we need tools like this to close the gap. To do that, we need to rely on and trust artificial intelligence and machine learning.”
Instead of certainty and cause, AI works on probability and correlation, according to Comi. Studying how machines make decisions helps hold them accountable for what they learn, in ways humans can understand.
“As AI continues to advance, we need to be able to explain our answers to demonstrate that we arrived at those answers correctly,” said Comi. “That is something we all experienced in grade school when our teachers told us to show our work.”
Current technology cannot “show its work like in math class,” Comi said, because there is no way to ask a program to explain why it produced the answer it did. That is what his work is changing.
If Comi’s prototype is successful, it could go a long way toward developing trust in man-machine systems, something the U.S. Department of Defense, among others, wants to explore.
Man-machine teaming will help DoD forces keep up with the ever-changing landscape of threats.
“What is coming is amazing,” Comi said.