The Duke Funding Alert newsletter, published every Monday, provides information on all new and updated grants and fellowships added to the database during the prior week. This listserv is restricted to members of the Duke community.
Explainable Artificial Intelligence (XAI)
DARPA is soliciting innovative research proposals in the areas of machine learning and human-computer interaction. The goal of Explainable Artificial Intelligence (XAI) is to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence (AI) systems. Proposed research should investigate innovative approaches that enable revolutionary advances in science or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.
The target of XAI is an end user who depends on decisions, recommendations, or actions produced by an AI system, and who therefore needs to understand the rationale for the system's decisions. For example, an intelligence analyst who receives recommendations from a big data analytics algorithm needs to understand why the algorithm has recommended certain activity for further investigation. Similarly, a test operator of a newly developed autonomous system will need to understand why the system makes its decisions in order to decide how to use it in future missions. Figure 1 illustrates the XAI concept: provide end users with an explanation of individual decisions, enable them to understand the system's overall strengths and weaknesses, convey how the system will behave in the future, and perhaps indicate how to correct the system's mistakes.
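As an illustration only (not part of the solicitation), the idea of pairing each recommendation with a rationale can be sketched with a toy rule-based "analyst alert" model; every feature name, threshold, and rationale string below is hypothetical:

```python
# Toy sketch of a model whose every recommendation carries a human-readable
# rationale, in the spirit of XAI's goal of explainable decisions.
# All rule names, weights, and thresholds are invented for illustration.

RULES = [
    # (feature, threshold, weight, rationale shown to the end user)
    ("login_failures", 5, 0.4, "repeated failed logins suggest credential probing"),
    ("data_exfil_mb", 100, 0.5, "large outbound transfer is anomalous for this host"),
    ("off_hours_access", 1, 0.2, "activity occurred outside normal working hours"),
]

def recommend(event, flag_threshold=0.5):
    """Score an event; return (flagged, score, triggered rationales)."""
    score = 0.0
    reasons = []
    for feature, threshold, weight, rationale in RULES:
        if event.get(feature, 0) >= threshold:
            score += weight
            reasons.append(rationale)
    return score >= flag_threshold, round(score, 2), reasons

flagged, score, reasons = recommend(
    {"login_failures": 7, "data_exfil_mb": 250, "off_hours_access": 0}
)
```

Because each triggered rule is surfaced alongside the score, an analyst can see not only that an event was flagged but also which observations drove that recommendation.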
o Abstract Due Date: September 1, 2016
o Proposal Due Date: November 1, 2016