The Duke Funding Alert newsletter, published every Monday, provides information on all new and updated grants and fellowships added to the database during the prior week. This listserv is restricted to members of the Duke community.
Semantic Forensics (SemaFor)
DARPA is soliciting innovative research proposals in the area of semantic technologies to automatically assess falsified media. Proposed research should investigate innovative approaches that enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.
The Semantic Forensics (SemaFor) program will develop technologies to automatically detect, attribute, and characterize falsified, multi-modal media assets (e.g., text, audio, image, video) to defend against large-scale, automated disinformation attacks.
Statistical detection techniques have been successful, but media generation and manipulation technologies are advancing rapidly, and purely statistical detection methods are quickly becoming insufficient for detecting falsified media assets. Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources (algorithm development, data, or compute). However, existing automated media generation and manipulation algorithms are heavily reliant on purely data-driven approaches and are prone to making semantic errors. For example, GAN-generated faces may have semantic inconsistencies such as mismatched earrings. These semantic failures provide an opportunity for defenders to gain an asymmetric advantage. A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders need to find only one, or a very few, inconsistencies.
SemaFor seeks to develop innovative semantic technologies for analyzing media. Semantic detection algorithms will determine if multi-modal media assets have been generated or manipulated. Attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.
- Abstract Due Date: September 11, 2019, 12:00 noon (ET)
- Proposal Due Date: November 21, 2019, 12:00 noon (ET)
Areas of Interest
Proposers may submit proposals to all TAs; however, each proposal may address only one TA. DARPA will not make TA1 and TA2 awards to the same performer. The TA3 performer may not perform on TA1 or TA2 due to an inherent conflict of interest with the evaluation process. TA4 performers may be awarded contracts on other TAs of the program, but conflict of interest plans will be required in the case of TA1 or TA2 due to potential conflicts with the evaluation process. Proposers interested in multiple TAs must submit separate proposals for each TA. In the event that multiple proposals are deemed selectable, the Government reserves the right to choose which to fund, in accordance with the conflict of interest rules described above.
- TA1 Detection, Attribution, and Characterization
- TA2 Explanation and Integration
- TA3 Evaluation
- TA4 Challenge Curation
DARPA welcomes engagement from all responsible sources capable of satisfying the Government's needs, including academia (colleges and universities); businesses (large, small, small disadvantaged, etc.); other organizations (including non-profit); other entities (foreign, domestic, and government); FFRDCs; minority institutions; and others.
At the time of proposal submission, all proposers wishing to submit proposals under TA2 or TA3 must have personnel holding a Top Secret clearance who are eligible for SCI, as well as access to facilities that can store, process, and hold SCI discussions. TA2 and TA3 proposers must provide their CAGE code and security point(s) of contact in their proposals.
Proposers to TA1 or TA4 are not required to hold or obtain security clearances; however, TA1 or TA4 proposers who wish to have access to classified data and evaluation results for their efforts must have personnel and access to facilities with a minimum classification level of SECRET at the time of award and must provide their CAGE code and security point(s) of contact in their proposals.
DARPA anticipates multiple awards for TA1 and TA4, as well as single awards for TA2 and TA3. The level of funding for individual awards made under this solicitation has not been predetermined and will depend on the quality of the proposals received and the availability of funds. Awards will be made to proposers whose proposals are determined to be the most advantageous to the Government, all factors considered, including the potential contributions of the proposed work, overall funding strategy, and availability of funding. See Section V for further information.