IMPACT2020 | Deep Learning’s Most Dangerous Vulnerability: Adversarial Attacks – Luba Gloukhova

Related sessions
You can’t always get what you want. Or can you? An Evaluation of Cloud-Native Tools – Karen Hughes, BMC (February 13, 2020)
Netflix Performance Tales in One Take – Ed Hunter, Engineering Leader, Netflix (February 13, 2020)

Deep Learning’s Most Dangerous Vulnerability: Adversarial Attacks – Luba Gloukhova

Groundbreaking theory, big data, and computing power: with this trifecta, the rise of deep learning seems almost inevitable. It has propelled computer vision and real-time operational and industrial applications to new heights, and our reliance on it only deepens as we ride this accelerating wave of progress. Yet that very momentum may be carrying us somewhere dangerous, because deployed deep learning systems expose new and particular vulnerabilities. In this session, Luba Gloukhova surveys the various forms of adversarial attacks that have emerged against neural networks and the state-of-the-art methods for defending against them.
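
As a minimal illustration of the kind of vulnerability the session covers, here is a sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), one of the simplest and best-known adversarial attacks against neural networks. This is a generic PyTorch sketch, not material from the talk; model, image, and label are assumed placeholders for a trained classifier and a correctly labeled, batched input.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` by `epsilon` in the direction of the loss gradient's sign.

    Assumes image has shape (N, C, H, W) with pixels in [0, 1] and
    label has shape (N,) holding integer class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Stepping in the direction that increases the loss pushes the input
    # toward (and often across) the model's decision boundary.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

A perturbation of this size (epsilon around 0.03 on pixels scaled to [0, 1]) is typically imperceptible to a human, yet it routinely flips the predictions of undefended image classifiers; that gap between human and model perception is the vulnerability the session title refers to.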

Presenter
Luba Gloukhova
Speaker & Consultant, Data Science
San Francisco, California USA
