IEEE Federated Machine Learning White Paper
efficiency is of crucial importance for various FML methods. IEEE Std 3652.1-2020 [1] devotes a great deal of attention to reducing both computational complexity and communication costs through efficient FML methods. Third, for the sake of
data security and privacy preservation, the design, development, and implementation of federated machine learning frameworks should be carefully considered, with continuous monitoring of privacy leakage and other security issues. Fourth, FML also needs to economically incentivize users to join and stay in the federation. This economic
incentive mechanism constitutes a unique feature of FML that is not included in other distributed learning paradigms.

Depending on how the data are partitioned among different participants, FML can be categorized as Horizontal FML, Vertical FML, and Federated Transfer Learning (see Figure 1). Specifically, Horizontal FML refers to building a model in the scenario where data sets have significant overlaps on the feature spaces but not on the ID spaces. For example, Google proposed a horizontal federated learning solution for Android phone model updates
(McMahan, et al. [5]). In that framework, a single user of an Android phone updates the model parameters locally and uploads the parameters to the Android cloud, thus jointly training the centralized model together with other data owners. A secure aggregation scheme that protects the privacy of aggregated user updates under their federated learning framework is also introduced (Bonawitz, et al. [6]).

Vertical FML refers to building a model in the scenario where data sets have significant overlaps on the sample space, but not on the feature spaces. For example, consider two different companies in the same city: one is a bank, and the other is an e-commerce company. Their user sets are likely to contain most of the residents of the area, so the intersection of their user spaces is large. However, since the bank records each user's revenue and expenditure behavior and credit rating, while the e-commerce company retains the user's browsing and purchasing history, their feature spaces are very different. Under this circumstance, one may apply Vertical FML, which aggregates these different features and computes the training loss and gradients in a privacy-preserving manner, so as to build a model with data from both parties collaboratively.

Federated Transfer Learning (FTL) refers to the federated machine learning technique designed for application scenarios where data sets have no significant overlap in either the sample space or the feature space. Consider two institutions: one is a bank located in China, and the other is an e-commerce company located in the United States. Due to geographical restrictions, the user groups of the two institutions have only a small intersection. Moreover, because of their different businesses, only a small portion of the feature spaces of the two parties overlaps. In this case, transfer learning (Yang, et al. [7]) techniques can be applied to provide solutions for the entire sample and feature space under a federation. Specifically, a common representation between the two feature spaces is learned using the limited common sample set and is later applied to obtain predictions for samples with only one-sided features.
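The Horizontal FML workflow described above (clients train locally on data sharing one feature space, and a server aggregates their parameters) can be sketched as a minimal federated-averaging loop in the spirit of McMahan, et al. [5]. This is an illustrative sketch only: the linear model, client data, learning rate, and function names are assumptions, and the secure aggregation of Bonawitz, et al. [6] is deliberately omitted.

```python
# Minimal sketch of horizontal federated averaging (FedAvg-style).
# Model, data, and hyperparameters are illustrative, not from the standard.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: a few gradient-descent epochs on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average local models, weighted by each client's data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients whose local data sets share the same feature space
# (the horizontal setting); their raw data never leave the client.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local_ws, [len(y) for _, y in clients])
# global_w now approximates true_w without either client sharing its data
```

In a real deployment only the parameter vectors cross the network, and the plain weighted average above would be replaced by a privacy-preserving aggregation protocol.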

Copyright © 2021 IEEE. All rights reserved.
