Groundbreaking theory, big data, and computing power: with this trifecta, the extraordinary advent of deep learning seems almost inevitable. It has propelled computer vision and real-time operational and industrial applications to new heights. Inevitably, our reliance on deep learning grows as we ride this accelerating wave of progress. Yet this very momentum may be carrying us to a dangerous place: deployed deep learning systems expose new and distinctive vulnerabilities. In this session, Luba Gloukhova surveys the forms of adversarial attack that have emerged against neural networks and the state-of-the-art methods for defending against them.
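To make the topic concrete, below is a minimal, illustrative sketch (not from the talk itself) of the Fast Gradient Sign Method, one of the canonical adversarial attacks against neural networks. For clarity it uses a toy logistic-regression "model" rather than a deep network; all weights and inputs are hypothetical values chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM for a logistic-regression model p = sigmoid(w.x + b).

    For cross-entropy loss, the gradient of the loss w.r.t. the input
    is (p - y) * w, so the attack nudges the input in the direction
    that increases the loss: x_adv = x + eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w            # analytic dL/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad)

# Toy classifier and a clean input it labels as class 1 (hypothetical values).
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])
y = 1.0  # true label

clean_pred = int(sigmoid(w @ x + b) > 0.5)    # model is correct on x
x_adv = fgsm_attack(x, y, w, b, eps=0.5)
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)  # small perturbation flips the label
```

The same one-step recipe scales to deep networks by replacing the analytic gradient with a backpropagated one; the perturbation remains bounded per-coordinate by `eps`, which is why it can stay imperceptible to humans.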
Presenter
Luba Gloukhova
Speaker & Consultant, Data Science
San Francisco, California USA