Adversarial Robustness of Deep Learning-based Malware Detectors via (De)Randomized Smoothing

Author(s)

Gibert Llauradó, Daniel

Zizzo, Giulio

Le, Quan

Planes Cid, Jordi

Publication date

2024-05-01

Abstract

Deep learning-based malware detectors have been shown to be susceptible to adversarial malware examples, i.e., malware examples that have been deliberately manipulated in order to avoid detection. In light of the vulnerability of deep learning detectors to subtle input file modifications, we propose a practical defense against adversarial malware examples inspired by (de)randomized smoothing. In this work, we reduce the chances of sampling adversarial content injected by malware authors by selecting correlated subsets of bytes, rather than randomizing inputs with Gaussian noise as is done in the computer vision domain. During training, our chunk-based smoothing scheme trains a base classifier to make classifications on a subset of contiguous bytes, or chunk. At test time, a large number of chunks are classified by the base classifier, and the consensus among these classifications is reported as the final prediction. We propose two strategies to determine the locations of the chunks used for classification: (1) randomly selecting the chunk locations and (2) selecting contiguous adjacent chunks. To showcase the effectiveness of our approach, we have trained two classifiers with our chunk-based smoothing schemes on the BODMAS dataset. Our findings reveal that the chunk-based smoothing classifiers exhibit greater resilience against adversarial malware examples generated with state-of-the-art evasion attacks, outperforming a non-smoothed classifier and a randomized smoothing-based classifier by a wide margin.
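As an illustration of the test-time consensus scheme described above, the following is a minimal sketch, not the authors' implementation; the chunk size, number of chunks, the `base_classifier` callable, and the use of NumPy are all assumptions made for the example.

```python
import numpy as np

def classify_with_chunk_smoothing(file_bytes, base_classifier,
                                  chunk_size=512, num_chunks=100,
                                  strategy="random", rng=None):
    """Hypothetical sketch of chunk-based smoothing at test time.

    A base classifier trained on byte chunks scores many chunks of the
    input file; the majority vote over those chunk-level predictions is
    reported as the final (smoothed) prediction.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(file_bytes)
    max_start = max(n - chunk_size, 0)

    if strategy == "random":
        # Strategy (1): chunks at randomly selected locations.
        starts = rng.integers(0, max_start + 1, size=num_chunks)
    else:
        # Strategy (2): contiguous adjacent (non-overlapping) chunks.
        starts = np.arange(0, n, chunk_size)[:num_chunks]

    votes = []
    for s in starts:
        chunk = file_bytes[s:s + chunk_size]
        # base_classifier is assumed to map a byte chunk to 0 (benign) or 1 (malware).
        votes.append(base_classifier(chunk))

    # Consensus: majority vote over the chunk-level predictions.
    return int(np.mean(votes) >= 0.5)
```

Under this scheme, an injected adversarial payload only influences the chunks that overlap it, so the majority vote over the remaining chunks can still yield the correct prediction.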


This project has received funding from Enterprise Ireland and the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No 847402, and from MCIN/AEI/10.13039/501100011033/FEDER, UE under project PID2022-139835NB-C22.

Document type

Article
Published version

Language

English

Subjects and keywords

Adversarial defense; (de)randomized smoothing; Evasion attacks; Machine learning

Published by

Institute of Electrical and Electronics Engineers

Related documents

info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2022-139835NB-C22/ES/PROCESAMIENTO DE INCONSISTENCIAS BASADO EN LOGICA PARA SISTEMAS INTELIGENTES EXPLICABLES: APLICACIONES/

Reproduction of the document published at https://doi.org/10.1109/ACCESS.2024.3392391

IEEE Access, 2024, vol. 12, p. 61152-61162

info:eu-repo/grantAgreement/EC/H2020/847402/EU/Career-FIT PLUS

Rights

cc-by (c) Daniel Gibert et al., 2024

https://creativecommons.org/licenses/by/4.0/
