Terminus Group's Federated Learning Framework Enables Industries to Train LMs More Efficiently

Terminus Group’s novel federated learning framework introduces two key algorithms, the online Laplace approximation and the multivariate Gaussian product mechanism, which tackle large aggregation errors and severe local forgetting in traditional federated learning from a Bayesian posterior probability standpoint.

 

What are the issues?

 

Traditionally, federated learning allows multiple clients to collaboratively learn a globally shared model through cycles of model aggregation and local model training, without sharing data. However, these approaches generally suffer from large aggregation errors and severe local forgetting, which are particularly problematic in heterogeneous data settings.
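The aggregation/local-training cycle can be sketched with a toy FedAvg-style loop. This is illustrative only: the linear model and the `local_train` and `fed_avg` helpers are hypothetical stand-ins, not Terminus Group's implementation.

```python
import numpy as np

def local_train(weights, data, lr=0.1, steps=10):
    """One round of local training: gradient steps on a toy
    least-squares model using only this client's private data."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server aggregation: average local models weighted by data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

# Two clients with heterogeneous (non-IID) data around different optima.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 2.0):
    X = rng.normal(size=(50, 3))
    y = X @ (np.array([1.0, -1.0, 0.5]) + shift)
    clients.append((X, y))

global_w = np.zeros(3)
for _ in range(5):  # communication rounds: local training, then aggregation
    locals_ = [local_train(global_w, d) for d in clients]
    global_w = fed_avg(locals_, [len(d[1]) for d in clients])
```

Because the two clients pull toward different optima, plain averaging of their weights is exactly where the aggregation error described above arises.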

 

How do we solve it?

 

The online Laplace approximation is used to approximate posteriors on both the client and the server side. On the server side, the multivariate Gaussian product mechanism constructs and maximizes a global posterior, largely reducing the aggregation errors induced by large discrepancies between local models. On the client side, a prior loss built from the global posterior's probabilistic parameters, delivered by the server, guides local training.
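The server-side step can be sketched with the standard product-of-Gaussians identity, assuming diagonal covariances for simplicity; this is the textbook rule, not necessarily the paper's exact mechanism:

```python
import numpy as np

def gaussian_product(means, variances):
    """Multiply client posteriors N(mu_k, sigma_k^2) elementwise.
    The product is again Gaussian: precisions add, and the mean is the
    precision-weighted average, so more confident clients pull harder."""
    precisions = [1.0 / np.asarray(v, dtype=float) for v in variances]
    global_prec = sum(precisions)
    global_mean = sum(p * np.asarray(m, dtype=float)
                      for p, m in zip(precisions, means)) / global_prec
    return global_mean, 1.0 / global_prec

# Fusing N(0, 1) and N(2, 1) gives N(1, 0.5): halfway mean, halved variance.
mu, var = gaussian_product([0.0, 2.0], [1.0, 1.0])
```

Unlike plain weight averaging, this fusion accounts for how certain each client is about each parameter, which is what reduces the aggregation error between discrepant local models.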

 

Imposing such learning constraints derived from other clients enables the method to mitigate local forgetting.
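The client-side prior loss can be pictured as a quadratic penalty pulling local weights toward the global posterior mean, weighted by the global precision. This is a hedged sketch of the Bayesian idea (a Gaussian log-prior term); the paper's exact loss may differ, and `lam` is a hypothetical scaling knob.

```python
import numpy as np

def local_loss(w, X, y, global_mean, global_prec, lam=1.0):
    """Local objective = data-fit term + Gaussian prior term from the server.
    The prior penalizes drift away from the global posterior mean, more
    strongly where the global precision is high, mitigating local forgetting."""
    fit = 0.5 * np.mean((X @ w - y) ** 2)
    prior = 0.5 * lam * np.sum(global_prec * (w - global_mean) ** 2)
    return fit + prior

# Toy check: at the global mean the prior vanishes; drifting away is penalized.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
mu = np.array([1.0, 2.0])          # global posterior mean from the server
prec = np.array([4.0, 4.0])        # global posterior precision
y = X @ mu
at_mean = local_loss(mu, X, y, mu, prec)
drifted = local_loss(mu + 1.0, X, y, mu, prec)
```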

 

Because the new FL framework enhances data privacy protection, boosts model generalization in heterogeneous data settings, and improves algorithmic efficiency by reducing computational costs, industries such as healthcare, finance, and manufacturing can use it to train domain-specific LMs to their advantage.

 

For the original paper, please refer to:

"A Bayesian Federated Learning Framework With Online Laplace Approximation" (IEEE Xplore)

 
