Federated Learning for Cloud-Native Applications: Enhancing Data Privacy in Distributed Systems
Abstract
Federated Learning (FL) has emerged as a groundbreaking approach to machine learning, enabling models to be trained across decentralized devices while ensuring that sensitive data remains on local nodes. This paradigm is especially relevant for cloud-native applications, where data security and privacy are critical. Conventional machine learning techniques frequently require centralized data collection, which raises concerns about data privacy, regulatory compliance (e.g., GDPR), and the risk of centralized data breaches. Federated Learning mitigates these issues by training models directly on distributed nodes without requiring raw data to leave individual devices or servers. This study examines the integration of Federated Learning into cloud-native applications and its effects on data security and privacy. To strengthen trust in cloud-based distributed systems, we explore key topics including model aggregation, communication efficiency, and privacy-preserving techniques (such as secure multi-party computation and differential privacy). We also investigate the practical challenges of implementing Federated Learning in real-world deployments.
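To make the training-without-data-sharing idea concrete, the following is a minimal illustrative sketch of FedAvg-style model aggregation in Python, not the method studied in this paper. It assumes a simple linear model trained with local gradient descent; the helper names (local_update, federated_average) and all parameter values are hypothetical and chosen only for illustration.

# Minimal FedAvg-style sketch (illustrative assumption, not this paper's implementation):
# clients train locally on private data and share only model parameters;
# the server combines them with a data-size-weighted average.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training (linear regression via gradient descent).
    # Raw data (X, y) never leaves this function; only updated weights are returned.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    # Server-side aggregation: average client models weighted by local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients holding disjoint private datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.05 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print("aggregated model:", global_w)  # approaches true_w without any raw data being shared

In this sketch, only the weight vectors cross the network, which is the property the abstract highlights; privacy-preserving techniques such as differential privacy or secure multi-party computation would be layered on top of the aggregation step.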