dc.contributor.author
Ozan Güner, Oktay
dc.date.accessioned
2025-10-22T19:50:36Z
dc.date.available
2025-10-22T19:50:36Z
dc.date.issued
2025-10-20T14:33:35Z
dc.identifier
http://hdl.handle.net/10230/71581
dc.identifier.uri
http://hdl.handle.net/10230/71581
dc.description.abstract
Master's thesis of the Master in Intelligent Interactive Systems
dc.description.abstract
Supervisor: Mario Ceresa
Co-Supervisor: Vicenç Gómez
dc.description.abstract
The widespread use of wearables in health applications has advanced personalized Human Activity Recognition (HAR), but it also introduces privacy challenges under regulations such as the European Health Data Space (EHDS). This thesis explores fine-tuning strategies for self-supervised learning models on wearable sensor data to balance user privacy and model utility within the stringent EU regulatory landscape. We employ Differentially Private Stochastic Gradient Descent (DP-SGD) to fine-tune the pre-trained HarNet10 model on the PAMAP2 dataset, evaluating two distinct strategies: classifier head fine-tuning and full model fine-tuning. Our two-part sequential experimental design first investigates the privacy-utility trade-off between the two strategies, revealing that classifier head fine-tuning consistently outperforms the full model approach by maintaining higher accuracy and F1-scores. This strategy better preserves the rich, pre-trained representations in the feature extractor, mitigating the impact of DP-SGD's noise. Second, an empirical privacy evaluation using a membership inference attack confirms these findings. The differentially private classifier head model demonstrates robust protection, reducing the attack's success to near random guessing (AUC score of 0.532) compared to the vulnerable non-private baseline (AUC of 0.690), thus aligning theoretical guarantees with practical resilience. These results confirm that classifier head fine-tuning with DP-SGD offers an optimal privacy-utility balance for HAR tasks compared to its full model variant. Our study contributes a validated framework for developing trustworthy AI in wearables, demonstrating that effective privacy can be achieved by fine-tuning a small fraction (4.83%) of a foundational model's parameters. This research provides a practical reference for building secure, privacy-preserving solutions that align with EU regulations.
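The classifier-head strategy the abstract describes can be illustrated with a minimal NumPy sketch of DP-SGD: the feature extractor stays frozen while per-example gradients of the head are clipped in L2 norm and Gaussian noise is added before the update. Everything here is a hypothetical stand-in (the random-projection "extractor", the toy data shapes, the hyperparameter values); the thesis itself uses the pre-trained HarNet10 network and the PAMAP2 dataset, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained feature extractor (hypothetical; the
# thesis uses HarNet10). Its weights are never updated during fine-tuning.
D_RAW, D_FEAT, N_CLASSES = 30, 16, 3
W_frozen = rng.normal(size=(D_RAW, D_FEAT))

def features(x_raw):
    """Frozen ReLU features: only the head below is trained."""
    return np.maximum(x_raw @ W_frozen, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dp_sgd_step(W_head, X, y, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD step on the classifier head only.

    Each per-example gradient is clipped to `clip` in L2 norm, the clipped
    gradients are summed, and Gaussian noise with scale `noise_mult * clip`
    is added before averaging -- the core DP-SGD recipe.
    """
    n = len(X)
    grad_sum = np.zeros_like(W_head)
    for xi, yi in zip(X, y):
        p = softmax(xi @ W_head)          # predicted class probabilities
        p[yi] -= 1.0                      # dL/dlogits for cross-entropy
        g = np.outer(xi, p)               # this example's head gradient
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # L2 clipping
        grad_sum += g
    grad_sum += rng.normal(scale=noise_mult * clip, size=W_head.shape)
    return W_head - lr * grad_sum / n

# Toy sensor windows and labels (hypothetical shapes, not PAMAP2).
X_raw = rng.normal(size=(64, D_RAW))
y = rng.integers(0, N_CLASSES, size=64)
X = features(X_raw)

W_head = np.zeros((D_FEAT, N_CLASSES))
for _ in range(20):
    W_head = dp_sgd_step(W_head, X, y)
```

Because only `W_head` receives noisy updates, the frozen representation is untouched, which mirrors the abstract's finding that head-only fine-tuning limits the utility cost of DP-SGD noise.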
dc.format
application/pdf
dc.rights
CC Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)
dc.rights
https://creativecommons.org/licenses/by-nc-sa/4.0/
dc.rights
info:eu-repo/semantics/openAccess
dc.subject
Learning -- Models
dc.title
Differentially private fine-tuning of self-supervised learning models for human activity recognition on wearables
dc.type
info:eu-repo/semantics/masterThesis