Jeyavijayan Rajendran - Texas A&M Univ., College Station, TX
Yier Jin - Univ. of Florida, Gainesville, FL
Mark Tehranipoor - Univ. of Florida, Gainesville, FL
Ahmad-Reza Sadeghi - Technische Univ. Darmstadt, Darmstadt, Germany
N. Asokan - Aalto Univ., Aalto, Finland
Artificial Intelligence (AI), and especially Deep Neural Networks (DNNs), has achieved great success in real-world applications ranging from facial and speech recognition to medical image analysis. A recent trend of applying AI techniques to cybersecurity challenges has produced a variety of AI-powered security solutions, such as network intrusion detection and binary malware analysis. Although still at an early stage, AI-based cybersecurity approaches have become a hot research topic and are expected to replace traditional solutions, such as signature-based anomaly detection, in the near future.
Nevertheless, many DNNs are vulnerable to adversarial example attacks: carefully crafted inputs that easily fool a DNN into producing incorrect classification results at test time. Advances in transfer learning make the problem even worse, as they render black-box attacks both possible and effective. How to protect AI itself is therefore an urgent research topic for both the AI and cybersecurity communities.
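To make the threat concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways such adversarial examples are crafted. It is illustrative only: `model` is assumed to be an arbitrary PyTorch classifier returning logits, inputs are assumed to be images normalized to [0, 1], and the perturbation budget `epsilon` is a hypothetical value.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method.

    Assumes `model` maps an image batch `x` in [0, 1] to class logits
    and `label` holds the true class indices; `epsilon` bounds the
    per-pixel perturbation (0.03 is an illustrative choice).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()  # gradient of the loss w.r.t. the input pixels
    # Step each pixel in the direction that increases the loss, then
    # clamp back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation this small is typically imperceptible to a human, yet often suffices to flip the classifier's prediction; this gap between benchmark accuracy and adversarial robustness is exactly what the panel question turns on.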
In this panel, we address a seemingly ironic question: should we make AI robust and resilient before we ever apply AI techniques to cybersecurity solutions, and if so, what are the challenges in developing secure AI techniques? If the answer is no, why should we still trust AI-supported cybersecurity solutions when the underlying AI techniques are themselves vulnerable to attack?
Bita Rouhani - Microsoft Research
Brendan Dolan-Gavitt - New York Univ., New York, NY
Lejla Batina - Radboud Univ. Nijmegen, Nijmegen, Netherlands
Rosario Cammarota - Intel Corp., Hillsboro, OR