Adversarial Robustness of Deep Learning-based Malware Detectors via (De)Randomized Smoothing

dc.contributor.author
Gibert Llauradó, Daniel
dc.contributor.author
Zizzo, Giulio
dc.contributor.author
Le, Quan
dc.contributor.author
Planes Cid, Jordi
dc.date.issued
2024-05-01
dc.identifier
https://doi.org/10.1109/ACCESS.2024.3392391
dc.identifier
2169-3536
dc.identifier
https://hdl.handle.net/10459.1/465657
dc.description.abstract
Deep learning-based malware detectors have been shown to be susceptible to adversarial malware examples, i.e., malware examples that have been deliberately manipulated in order to avoid detection. In light of the vulnerability of deep learning detectors to subtle input file modifications, we propose a practical defense against adversarial malware examples inspired by (de)randomized smoothing. In this work, we reduce the chances of sampling adversarial content injected by malware authors by selecting correlated subsets of bytes, rather than randomizing inputs with Gaussian noise as is done in the computer vision domain. During training, our chunk-based smoothing scheme trains a base classifier on subsets of contiguous bytes, or chunks. At test time, a large number of chunks is classified by the base classifier, and the consensus among these classifications is reported as the final prediction. We propose two strategies to determine the locations of the chunks used for classification: (1) randomly selecting the chunk locations and (2) selecting contiguous, adjacent chunks. To showcase the effectiveness of our approach, we trained two classifiers with our chunk-based smoothing schemes on the BODMAS dataset. Our findings reveal that the chunk-based smoothing classifiers exhibit greater resilience against adversarial malware examples generated with state-of-the-art evasion attacks, outperforming a non-smoothed classifier and a randomized smoothing-based classifier by a wide margin.
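The following is a minimal Python sketch of the chunk-based smoothed inference described in the abstract, not the authors' implementation; base_classifier, chunk_size, num_chunks and the default values are hypothetical placeholders introduced only for illustration.

import numpy as np

def smoothed_predict(file_bytes, base_classifier, chunk_size=4096,
                     num_chunks=100, strategy="random", rng=None):
    # Classify a binary by majority vote over per-chunk predictions.
    # base_classifier is assumed to map a byte chunk to a label
    # (0 = benign, 1 = malware); chunk_size and num_chunks are
    # illustrative defaults, not the values used in the paper.
    rng = rng or np.random.default_rng()
    max_start = max(len(file_bytes) - chunk_size, 0)
    if strategy == "random":
        # Strategy (1): sample chunk start offsets uniformly at random.
        starts = rng.integers(0, max_start + 1, size=num_chunks)
    else:
        # Strategy (2): take contiguous, adjacent chunks covering the file.
        starts = range(0, len(file_bytes), chunk_size)
    votes = [base_classifier(file_bytes[s:s + chunk_size]) for s in starts]
    # Consensus among the chunk-level classifications is the final prediction.
    return int(np.mean(votes) >= 0.5)

Under this sketch, injected adversarial bytes only influence the chunks that overlap them, so the majority vote can remain correct as long as most chunks are unaffected.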
dc.description.abstract
This project has received funding from Enterprise Ireland and the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No. 847402, and from MCIN/AEI/10.13039/501100011033/FEDER, UE under project PID2022-139835NB-C22.
dc.format
application/pdf
dc.language
eng
dc.publisher
Institute of Electrical and Electronics Engineers
dc.relation
info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2022-139835NB-C22/ES/PROCESAMIENTO DE INCONSISTENCIAS BASADO EN LOGICA PARA SISTEMAS INTELIGENTES EXPLICABLES: APLICACIONES/
dc.relation
Reproduction of the document published at https://doi.org/10.1109/ACCESS.2024.3392391
dc.relation
IEEE Access, 2024, vol. 12, p. 61152-61162
dc.relation
info:eu-repo/grantAgreement/EC/H2020/847402/EU/Career-FIT PLUS
dc.rights
cc-by (c) Daniel Gibert et al., 2024
dc.rights
info:eu-repo/semantics/openAccess
dc.rights
https://creativecommons.org/licenses/by/4.0/
dc.subject
Adversarial defense
dc.subject
(De)randomized smoothing
dc.subject
Evasion attacks
dc.subject
Machine learning
dc.title
Adversarial Robustness of Deep Learning-based Malware Detectors via (De)Randomized Smoothing
dc.type
info:eu-repo/semantics/article
dc.type
info:eu-repo/semantics/publishedVersion

