Posted: 3/15/2022

In the Moment (ITM)

The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is soliciting innovative research proposals for research and technology development that supports the building, evaluating, and fielding of algorithmic decision-makers that can assume human off-the-loop decision-making responsibilities in difficult domains, such as combat medical triage. Difficult domains are those where trusted decision-makers disagree; no right answer exists; and uncertainty, time pressure, resource limitations, and conflicting values create significant decision-making challenges. Other examples of difficult domains include first response and disaster relief. Two specific domains have been identified for this effort: small-unit triage in austere environments and mass-casualty triage.

The Department of Defense (DoD) continues to expand its use of Artificial Intelligence (AI) and computational decision-making systems. DoD missions involve making many decisions rapidly in challenging circumstances, and algorithmic decision-making systems could address and lighten this load on operators. To employ such systems, the DoD needs rigorous, quantifiable, and scalable approaches for building and evaluating them. Current AI evaluation approaches often rely on datasets such as ImageNet for visual object recognition or the General Language Understanding Evaluation (GLUE) benchmark for Natural Language Processing (NLP), which have well-defined ground truth because human consensus exists for the right answer. In addition, most conventional AI development approaches implicitly require human agreement to create such ground-truth data for development, training, and evaluation. However, establishing conventional ground truth in difficult domains is not possible because humans will often disagree significantly about the right answer. Rigorous assessment techniques remain critical for difficult domains; without them, the development and fielding of algorithmic systems in such domains is untenable. In the Moment (ITM) seeks to develop techniques that enable building, evaluating, and fielding trusted algorithmic decision-makers for mission-critical DoD operations where there is no right answer and, consequently, ground truth does not exist.

Specifically, DARPA seeks capabilities that will (1) quantify the alignment of algorithmic decision-makers with key decision-making attributes of trusted humans; (2) incorporate key human decision-maker attributes into more human-aligned, trusted algorithms; (3) enable the evaluation of human-aligned algorithms in difficult domains where humans disagree and there is no right outcome; and (4) develop policy and practice approaches that support the use of human-aligned algorithms in difficult domains. Proposed research should embody innovative approaches that enable revolutionary advances in the current state of the art. Specifically excluded is research that primarily results in merely evolutionary improvements to the existing state of the art.

Deadlines:

o Abstract Due Date: March 30, 2022, 4:00 p.m.

o Full Proposal Due Date: May 17, 2022, 4:00 p.m.

Areas of Interest

Technical Area 1: Decision-maker characterization: The focus of TA1 is developing technologies that identify and quantitatively model key decision-making attributes of trusted humans in order to produce a quantitative decision-maker alignment score.

Technical Area 2: Human-aligned algorithmic decision-makers: TA2 will develop human-aligned algorithms that leverage the TA1 computational characterization process and the quantitative alignment score (see blue panel in Figure 6). The human-aligned algorithms should be able to balance situational information with a preference for the key decision-maker attributes identified by TA1 and the reference distribution across the attribute space.
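The solicitation does not prescribe how the quantitative alignment score is computed. Purely as an illustration of the idea of comparing a candidate decision-maker against a reference distribution across an attribute space, one could imagine a similarity-based sketch like the following. The attribute names, the weightings, and the choice of cosine similarity are hypothetical assumptions for this sketch, not part of the BAA.

```python
import math

def alignment_score(candidate, reference):
    """Cosine similarity between a candidate decision-maker's attribute
    weighting and a reference distribution over the same attribute space.
    For non-negative inputs the score lies in [0, 1]; 1 means the candidate's
    attribute profile points in exactly the same direction as the reference."""
    keys = sorted(set(candidate) | set(reference))
    c = [candidate.get(k, 0.0) for k in keys]
    r = [reference.get(k, 0.0) for k in keys]
    dot = sum(a * b for a, b in zip(c, r))
    norm = math.sqrt(sum(a * a for a in c)) * math.sqrt(sum(b * b for b in r))
    return dot / norm if norm else 0.0

# Hypothetical attribute weightings; a real TA1 characterization would derive
# these from observed decisions of trusted human decision-makers.
reference = {"risk_tolerance": 0.4, "urgency_weighting": 0.4, "resource_conservatism": 0.2}
candidate = {"risk_tolerance": 0.35, "urgency_weighting": 0.45, "resource_conservatism": 0.2}
score = alignment_score(candidate, reference)
```

In this toy framing, a TA2 algorithm whose decisions imply an attribute profile close to the reference distribution would receive a score near 1, while a profile concentrated on attributes the reference discounts would score lower.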

Technical Area 3: Evaluation: ITM will use a dedicated evaluation team to assess the performance of the decision-maker characterization TA (TA1) and the Human-aligned algorithmic decision-makers TA (TA2).

Technical Area 4: Policy & practice integration: For ITM’s efforts to be successful long term, the developed approaches must perform at a high level and be accepted by the larger policy community, particularly within the DoD. It will be the role of TA4, the policy and practice team, to help ground the program in current DoD policy and practice and to envision future policy concepts that leverage ITM technology.

Eligibility Requirements

Proposers must submit separate proposals for each Technical Area if proposing to more than one. A proposer selected for Technical Area 3 cannot be selected for any portion of Technical Areas 1 or 2, whether as a prime proposer, subawardee, or in any other capacity from the organizational to the individual level. This is to avoid organizational conflict of interest (OCI) situations, as defined in FAR 9.5, between the Technical Areas and to ensure objective test and evaluation results. The decision as to which proposal to consider for award is at the discretion of the Government.

Amount Description

DARPA anticipates multiple awards for TA1 and TA2 and single awards each for TA3 and TA4. The level of funding for individual awards made under this BAA will depend on the quality of the proposals received and the availability of funds.
