
TASK Quarterly

A Survey on Privacy-Preserving Machine Learning Inference

Abstract

This paper examines methods for securing machine learning (ML) inference so that sensitive input data remains private and proprietary models are protected during remote processing. We review several approaches, ranging from cryptographic techniques such as homomorphic encryption (HE) and secure multi-party computation (MPC), to hardware solutions such as trusted execution environments (TEEs), and complementary methods including differential privacy and split learning. Each method is analyzed in terms of security, efficiency, communication overhead, and scalability. Use cases in healthcare, finance, and education illustrate how these techniques balance privacy with practical performance. We conclude by outlining open challenges and future directions for building robust, efficient privacy-preserving ML inference systems.
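As a minimal illustration of one of the surveyed techniques, differential privacy, the sketch below adds calibrated Laplace noise to a counting query. The dataset, predicate, and epsilon values are illustrative assumptions for this sketch, not examples taken from the paper.

```python
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) as the difference of two exponentials,
    # each with mean `scale` (rate 1/scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    # A counting query has sensitivity 1, so Laplace noise with
    # scale = 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: noisy count of records with age > 35.
ages = [34, 29, 41, 52, 38]
noisy = dp_count(ages, lambda a: a > 35, epsilon=0.5)
```

Smaller epsilon values add more noise and give stronger privacy; the surveyed systems trade this noise against the accuracy of the released inference result.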

Keywords:

Privacy-Preserving Machine Learning, Oblivious Neural Networks, Homomorphic Encryption, Secure Multi-Party Computation, Trusted Execution Environments, Differential Privacy, Split Learning

Details

Issue
Vol. 28 No. 2 (2024)
Section
Review
Published
2025-07-18
DOI:
https://doi.org/10.34808/tq2024/28.2/b
License:

Copyright (c) 2025 TASK Quarterly


This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors

Stanisław Barański

Gdańsk University of Technology, https://orcid.org/0000-0001-7181-8860
