The Duke Funding Alert newsletter, published every Monday, provides information on all new and updated grants and fellowships added to the database during the prior week. This listserv is restricted to members of the Duke community.
Systematizing Confidence in Open Research and Evidence (SCORE)
The Defense Sciences Office (DSO) at the Defense Advanced Research Projects Agency (DARPA) is soliciting innovative research proposals for the development and deployment of automated tools to assign Confidence Scores (CSs) to different kinds of Social and Behavioral Science (SBS) research results and claims. CSs are quantitative measures that should enable someone to understand the degree to which a particular claim or result is likely to be reproducible and/or replicable. These tools will assign explainable CSs with a reliability that is equal to, or better than, the best current human expert methods and will enable a consumer of SBS research to quickly calibrate the level of confidence in the Reproducibility and Replicability (R&R) of a given SBS result or claim. Proposed research should investigate innovative approaches that enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.
The vision of the SCORE program, therefore, is to test, validate, and demonstrate the feasibility and utility of one or more automated tools for assigning CSs to a wide range of SBS claims. SCORE seeks to realize this vision through a two-phase program. Phase 1 will focus on developing the initial Common Task Framework (CTF) for SCORE, including a curated dataset of SBS research claims and methods for rapidly but accurately labeling those data with human expert CSs. Early algorithm development will also occur in Phase 1 as proof of principle. In Phase 2, performers will use those labeled data to train and test algorithms that assign quantitative CSs. These algorithm-based CSs will be compared with the CSs assigned by the best-performing human expert methods to assess how closely they agree. If successful, this program will enable SBS consumers within the DoD and the U.S. Government to use SCORE algorithms to quickly, accurately, and iteratively calibrate the confidence they should place in a particular SBS claim’s R&R. SCORE deliverables should have a significant positive impact on the DoD’s and USG’s abilities to leverage SBS for modeling, planning for, and operating in the Human Domain.
Abstract Due Date:
- For TA1 Abstracts: June 20, 2018, 4:00 p.m. EST
- For TA2 Abstracts: June 20, 2018, 4:00 p.m. EST
- For TA3 Abstracts: January 31, 2019 (was Nov. 1, 2018)
Full Proposal Due Date:
- For TA1 Proposals: August 1, 2018, 4:00 p.m. EST
- For TA2 Proposals: August 1, 2018, 4:00 p.m. EST
- For TA3 Proposals: March 12, 2019 (was Dec. 12, 2018)
Areas of Interest
• TA1: Data
• TA2: Experts
• TA3: Algorithms
Proposers may submit or be listed on multiple proposals provided all of those proposals address only one TA (see Section I.D). Please note that a proposer can be selected for only one Technical Area and cannot be selected for any portion of the other two Technical Areas, whether as a prime proposer, subawardee, or in any other capacity, at either the organizational or individual level. This restriction avoids organizational conflict of interest (OCI) situations between the Technical Areas and ensures objective test and evaluation results.