In the ever-evolving landscape of data-driven technologies, a study led by YAO Yupeng, WEI Lifei, and ZHANG Lei from the College of Information Technology at Shanghai Ocean University and the College of Information Engineering at Shanghai Maritime University has shed light on a novel approach to secure federated learning. The research, published in the journal Jisuanji Gongcheng (Computer Engineering), introduces a scheme called APFL that defends against both privacy inference attacks and poisoning attacks by malicious clients, a significant step forward in the realm of data security.
Federated learning, a decentralized machine learning approach, allows multiple participants to collaboratively train a model without sharing their raw data. This method has been a game-changer in addressing privacy concerns in distributed data environments. However, as the technology advances, so do the security challenges. The researchers identified a gap in existing defenses, which typically focus on either privacy protection or resistance to poisoning attacks, but not both at once.
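To make the baseline concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical aggregation rule that the study uses as its reference point. This is an illustrative toy in NumPy, not code from the paper: models are flattened to vectors, and each client's update is weighted by the size of its local dataset.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """FedAvg baseline: average client models, weighted by local dataset size.

    client_updates: list of 1-D NumPy arrays, one flattened model per client.
    client_sizes:   number of local training samples held by each client.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                    # normalize weights to sum to 1
    stacked = np.stack(client_updates)          # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three clients with unequal amounts of local data
updates = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
print(fedavg(updates, client_sizes=[100, 50, 50]))  # -> [1.0, 2.0]
```

Because FedAvg trusts every client equally (up to dataset size), a single participant submitting a corrupted model can drag the global model off course, which is exactly the weakness the poisoning defenses discussed next address.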
APFL, the scheme proposed by the researchers, is designed to bridge this gap. It applies differential privacy (DP) techniques and assigns each client an aggregation weight based on the cosine similarity between models, allowing the server to detect and filter out malicious models so that the aggregated model remains robust and accurate. Homomorphic encryption is then used for the weighted aggregation of the local models, adding a further layer of security.
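The paper's exact weighting rule is not reproduced in this article, but the idea can be sketched as follows. Assuming that, as in FLTrust-style defenses, the server holds a trusted reference update (for example, one trained on a small clean root dataset), each client's weight is its cosine similarity to that reference, clipped at zero so that updates pointing in a dissimilar, potentially poisoned direction are filtered out. In APFL itself this aggregation would run over homomorphically encrypted, DP-perturbed models; the toy below operates in the clear.

```python
import numpy as np

def cosine_weighted_aggregate(client_updates, reference_update):
    """Similarity-weighted aggregation in the spirit of APFL (illustrative only).

    Weights are the cosine similarities between each client's update and a
    trusted reference update, clipped to be non-negative: dissimilar (possibly
    poisoned) updates receive weight ~0 and are effectively filtered out.
    """
    ref = reference_update / (np.linalg.norm(reference_update) + 1e-12)
    scores = np.array([
        max(np.dot(u, ref) / (np.linalg.norm(u) + 1e-12), 0.0)
        for u in client_updates
    ])
    if scores.sum() == 0.0:                     # everything filtered: fall back
        return reference_update
    weights = scores / scores.sum()             # normalize surviving weights
    return sum(w * u for w, u in zip(weights, client_updates))
```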
The researchers tested APFL on the MNIST and CIFAR10 datasets, demonstrating that it filters out malicious models and withstands poisoning attacks while preserving data privacy. Notably, when the poisoning ratio was no more than 50%, APFL matched the model performance of the Federated Averaging (FedAvg) scheme in a non-poisoned environment. Compared with the Krum and FLTrust schemes, APFL reduced the model test error rate by an average of 19% and 9%, respectively.
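The published numbers come from image-classification experiments that cannot be reproduced in a few lines, but a toy simulation shows why similarity-based weighting can tolerate a 50% poisoning ratio where plain averaging fails. Here, five honest clients push toward a shared "true" direction, five poisoned clients push the opposite way (a hypothetical sign-flipping attack), and the reference update stands in for a small clean root dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
true_dir = rng.normal(size=10)                  # direction honest training takes

# 50% poisoning ratio: half the clients invert their update (sign-flip attack)
honest = [true_dir + 0.1 * rng.normal(size=10) for _ in range(5)]
poisoned = [-true_dir + 0.1 * rng.normal(size=10) for _ in range(5)]
updates = honest + poisoned

def cos_weights(updates, ref):
    """Non-negative cosine-similarity weights against a trusted reference."""
    ref = ref / np.linalg.norm(ref)
    s = np.array([max(np.dot(u, ref) / np.linalg.norm(u), 0.0) for u in updates])
    return s / s.sum()

w = cos_weights(updates, true_dir)
robust = sum(wi * u for wi, u in zip(w, updates))   # poisoned clients get ~0 weight
naive = np.mean(updates, axis=0)                    # unweighted average is wrecked

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

print("alignment with true direction, robust:", round(cos(robust, true_dir), 3))
print("alignment with true direction, naive: ", round(cos(naive, true_dir), 3))
```

In this toy run the robust aggregate stays almost perfectly aligned with the honest direction, while the naive average collapses toward noise, mirroring in miniature the gap the paper reports between APFL and an undefended baseline.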
For the maritime sector, the implications of this research are profound. As maritime operations become increasingly data-driven, the need for secure and privacy-preserving data sharing becomes paramount. Federated learning, with its ability to enable collaborative modeling without revealing raw data, is particularly relevant. The APFL scheme, with its enhanced security features, could play a decisive role in ensuring the integrity and privacy of data shared among maritime stakeholders.
As YAO Yupeng, the lead author, puts it, “Our research aims to address the critical security concerns in federated learning, thereby paving the way for its wider adoption in industries where data privacy and security are paramount.” This sentiment is echoed by WEI Lifei, who emphasizes the potential of APFL in enhancing the robustness and reliability of federated learning systems.
In conclusion, the APFL scheme represents a significant advancement in the field of federated learning, offering a robust solution to the dual challenges of privacy inference and poisoning attacks. Its potential applications in the maritime sector are vast, promising to enhance data security and privacy in an increasingly interconnected world. As the researchers continue to refine and expand their work, the maritime industry stands to benefit greatly from these technological advancements.

