DARPA is soliciting innovative research proposals in the area of theoretical foundations, principled algorithms, and evaluation frameworks that significantly improve the robustness of machine learning systems to adversarial attacks. Proposed research should investigate innovative approaches that enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to current practices.
The GARD program will develop a new generation of defenses against deception attacks on machine learning (ML). The program is soliciting game-changing research proposals to develop theory, create defenses, and implement appropriate testbeds leading to robust, deception-resistant ML/AI algorithms. Proposed research should investigate defenses that address entire threat scenario classes. Specifically excluded is research solely focused on developing defenses to specific attacks rather than addressing broad issues of defensibility.
The growing sophistication and ubiquity of ML components in advanced systems dramatically increase capabilities but, as a byproduct, also increase the potential for new vulnerabilities. The current era of adversarial AI focuses on approaches in which imperceptible perturbations to ML inputs can deceive an ML classifier, significantly altering its response. Such results have initiated a rapidly proliferating field of research characterized by ever more complex attacks that require progressively less knowledge about the ML system being attacked, while proving increasingly strong against defensive countermeasures.
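For illustration only (not part of this solicitation), the sketch below shows the canonical form of such an attack, the fast gradient sign method (Goodfellow et al., 2015): a small, bounded perturbation of the input is chosen in the direction that most increases the classifier's loss. The model, data, and parameter values are placeholder assumptions used solely to make the example self-contained.

    # Illustrative sketch of an "imperceptible perturbation" attack (FGSM).
    # The classifier and input below are stand-ins, not GARD program artifacts.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Placeholder classifier: one linear layer over a flattened 28x28 input.
    model = torch.nn.Linear(28 * 28, 10)
    model.eval()

    x = torch.rand(1, 28 * 28)   # benign input (synthetic placeholder data)
    y = torch.tensor([3])        # assumed true label
    epsilon = 0.03               # perturbation budget; small enough to be nearly imperceptible

    x.requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # FGSM step: move each input dimension by +/- epsilon in the direction
    # that increases the loss, then clip back to the valid input range.
    x_adv = torch.clamp(x + epsilon * x.grad.sign(), 0.0, 1.0).detach()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
    print("max per-pixel change:  ", (x_adv - x).abs().max().item())

Defenses scoped only to this single attack form are exactly the kind of narrow countermeasure the program excludes; GARD seeks approaches that address entire threat scenario classes.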
In summary, GARD’s purpose is to encourage both the development of underlying theory and functional, substantial improvement of ML defensibility, leading to a new generation of defense approaches beyond current mathematical and algorithmic thinking.
Deadlines:
o Abstract Due Date: February 26, 2019, 12:00 noon (ET)
o Proposal Due Date: April 11, 2019, 12:00 noon (ET)