A new DARPA project will attempt to show us the brains behind AI.
Does the thought of giving Siri more power unsettle you? What about self-driving cars — do you feel uneasy about letting a computer take the wheel?
If you answered yes, you’re not alone. As AI systems become more functional and widespread, a large segment of the public has been slow to trust the tech. A highly publicized study last year called the ethics of self-driving cars into question, concluding that most people wouldn’t want to ride in the cars because they don’t trust the systems making the decisions.
That’s one of the reasons why the Defense Advanced Research Projects Agency (DARPA) recently handed eight Oregon State University computer science professors a $6.5 million research grant to work on a project to help make robots, cars, and other tech powered by AI more trustworthy for doubters.
The biggest issue behind the lack of trust is that programmers cede some control when neural networks train themselves, turning the systems into so-called "black boxes" — in other words, they're tough to understand. Rather than being programmed with specific responses to commands, the system could act in ways no one can predict when asked to make a choice.
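To see why these systems are called black boxes, consider a minimal sketch of a trained network's internals. The weights below are illustrative random numbers standing in for a real trained model; the point is that the only "explanation" a programmer can inspect is a pile of numeric matrices, not a human-readable rule.

```python
import numpy as np

# Illustrative stand-in for a trained network: two weight matrices.
# In a real system these values come from training, and nothing about
# them directly says *why* the network prefers one output over another.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def decide(x):
    """Forward pass: ReLU hidden layer, then pick the higher-scoring output."""
    hidden = np.maximum(0, x @ W1)
    scores = hidden @ W2
    return int(np.argmax(scores))

x = np.array([0.5, -1.2, 3.0, 0.1])
print("decision:", decide(x))
# The full internal state behind that decision is just these arrays:
print("hidden weights shape:", W1.shape, "output weights shape:", W2.shape)
```

Inspecting `W1` and `W2` tells you nothing like "the car braked because a pedestrian stepped out" — which is exactly the gap the OSU project wants to close.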
The DARPA-funded OSU program aims to open up that box for more people. It will run for four years, with a focus on illustrating how machines make decisions.
Alan Fern, the principal investigator for the grant, said that the project will aim to make the deep network decisions the software makes appear more natural for its human audience by translating them into visualizations and even sentences. That’s right: it’ll be like Inside Out for the AI brain.
To develop the system, the researchers will plug AI-powered players into real-time strategy games like StarCraft. The bots will be trained to explain their in-game decisions to human players.
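As a toy illustration of the kind of explanation such a bot might produce (this is a hypothetical sketch, not the OSU system), one simple approach is to rank each input's contribution to a decision score and report the biggest contributor in a template sentence:

```python
# Hypothetical feature names and hand-picked weights for a toy
# attack-or-retreat decision; a real agent's inputs and model would differ.
FEATURES = ["enemy distance", "own health", "ammo", "cover nearby"]
WEIGHTS = [0.9, -0.4, 0.2, 0.6]

def explain(values):
    """Return (decision, sentence) for a simple linear attack/retreat choice."""
    contributions = [w * v for w, v in zip(WEIGHTS, values)]
    score = sum(contributions)
    decision = "attack" if score > 0 else "retreat"
    # Report the feature with the largest absolute contribution.
    top = max(range(len(FEATURES)), key=lambda i: abs(contributions[i]))
    sentence = f"I chose to {decision} mainly because of {FEATURES[top]}."
    return decision, sentence

print(explain([1.0, 0.2, 0.5, 0.8]))
# → ('attack', 'I chose to attack mainly because of enemy distance.')
```

Real deep networks are far harder to summarize this way, which is why generating faithful visualizations and sentences for them is a four-year research project rather than a weekend script.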
It’s not clear exactly if or how the research will be applied to consumer-facing tech like digital AI assistants and self-driving cars, but it’s still early in the process.
Once the initial research establishes the groundwork for the project, its results will be applied to other DARPA projects dealing with everything from robotics to unmanned aerial vehicles, aka drones.
DARPA’s efforts are far from the only projects looking to humanize automated systems; some companies are working to build up their AI-powered platforms with an extra communicative layer to their tech. Drive.AI, for example, is working to develop a line of self-driving cars that can interact with other cars and pedestrians through obvious audio-visual cues.
Once we trust the decision-making processes driving AI systems, it’ll be easier to accept their more prominent applications in our everyday lives. Then, we might even feel better about the potential longer-term ramifications of the tech, like when it takes our jobs and overthrows human society.