Session 9B: DNN Attack Surfaces


Authors, Creators & Presenters: Yanzuo Chen (The Hong Kong University of Science and Technology), Zhibo Liu (The Hong Kong University of Science and Technology), Yuanyuan Yuan (The Hong Kong University of Science and Technology), Sihang Hu (Huawei Technologies), Tianxiang Li (Huawei Technologies), Shuai Wang (The Hong Kong University of Science and Technology)
PAPER
Compiled Models, Built-In Exploits: Uncovering Pervasive Bit-Flip Attack Surfaces in DNN Executables
Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitation. For high-level DNN models running on deep learning (DL) frameworks like PyTorch, extensive BFAs have been performed to flip bits in model weights and shown to be effective, and defenses have been proposed to protect model weights. However, DNNs are increasingly compiled into DNN executables by DL compilers to leverage hardware primitives. These executables manifest new and distinct computation paradigms, and we find that existing research fails to accurately capture and expose the attack surface of BFAs on DNN executables. To this end, we launch the first systematic study of BFAs on DNN executables and reveal new attack surfaces neglected or underestimated in previous work. In particular, prior BFAs in DL frameworks are limited to attacking model weights and assume a strong whitebox attacker with full knowledge of victim model weights, which is unrealistic as weights are often confidential. In contrast, we find that BFAs on DNN executables can achieve high effectiveness by exploiting the model structure (usually stored in the executable code), which only requires knowing the (often public) model structure. Importantly, such structure-based BFAs are pervasive, transferable, and more severe (e.g., single-bit flips lead to successful attacks) in DNN executables; they also slip past existing defenses. To realistically demonstrate the new attack surfaces, we assume a weak and more practical attacker with no knowledge of victim model weights. We design an automated tool to identify vulnerable bits in victim executables with high confidence (70% compared to the baseline's 2%). Launching this tool on DDR4 DRAM, we show that only 1.4 flips on average are needed to fully downgrade the accuracy of victim executables, including quantized models that previously could require 23× more flips, to random guesses. We comprehensively evaluate 16 DNN executables, covering three large-scale DNN models trained on three commonly used datasets and compiled by the two most popular DL compilers. Our findings call for incorporating security mechanisms in future DNN compilation toolchains.
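For intuition on why a single flipped bit can be so damaging, the sketch below shows how flipping one exponent bit in an IEEE-754 float32 weight turns a small value into an astronomically large one. This is purely illustrative and not taken from the paper or its tooling; the flip_bit helper and the sample weight value are hypothetical.

```python
import struct

def flip_bit(value: float, bit_index: int) -> float:
    """Flip a single bit in the IEEE-754 float32 encoding of `value`."""
    # Reinterpret the float's 4 bytes as an unsigned 32-bit integer.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit_index          # toggle the chosen bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

# Flipping the most significant exponent bit (bit 30) of a typical small
# weight blows it up by dozens of orders of magnitude, which is one reason
# a single Rowhammer-induced flip can wreck inference accuracy.
w = 0.0423                          # hypothetical model weight
print(flip_bit(w, 30))              # roughly 1.4e+37
```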
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' outstanding NDSS Symposium 2025 conference content on the organization's YouTube channel.

*** This is a Security Bloggers Network syndicated blog from Infosecurity.US authored by Marc Handelman. Read the original post at: https://www.youtube-nocookie.com/embed/Dg4jzCVpu5Y?si=u4Ei961fMNom1yIz


