LG-FedAvg
Contributions (Abstract)
- Local representation learning on each device: for learning useful and compact features from raw data (unclear to me; the paper does not seem to demonstrate this point explicitly)
- Reduces the communication overhead between the server and the clients
- For fair representation learning (not examined closely)
- Addresses the non-i.i.d. problem through personalized models (for reference)
Core idea
Each client first learns a local representation (presumably the shallow layers, since the shallow layers do not participate in aggregation); the global model then operates on the higher-level representations (i.e., the deeper layers that are uploaded to the server).
The core idea of our method is to augment federated learning with local representation learning on each device before a global model operating on higher-level representations is trained on the data (now as representations rather than raw data) from all devices.
This maps onto a split of each client's parameters into local (shallow) layers that stay on the device and global (deep) layers that are shared with the server.
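
A minimal PyTorch-style sketch of this split, assuming an MLP client model; the layer sizes, the names `local_layers`/`global_layers`, and the MNIST-like input dimension are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class LGClientModel(nn.Module):
    """Client model split into a local feature extractor and a global head.

    Only the parameters of `global_layers` are ever sent to the server;
    `local_layers` stays on the device (illustrative sketch only).
    """

    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10):
        super().__init__()
        # Shallow layers: learn the local representation, never aggregated.
        self.local_layers = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        # Deeper layers: operate on the representation, shared via FedAvg.
        self.global_layers = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        representation = self.local_layers(x)  # stays on the device
        return self.global_layers(representation)

    def global_state(self):
        # What the client uploads to the server each round.
        return self.global_layers.state_dict()
```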

The article discusses how, in federated learning, learning representations locally on each device (by splitting the model into shallow and deep layers) reduces the communication between server and clients, and how personalized models address the non-i.i.d. problem. The core of the method is to first train the shallow local network on each client and then upload the deep network for the global model update, which provides synchronization and a regularizing effect.
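
A hedged sketch of one communication round under this scheme, using plain FedAvg-style weight averaging over the global layers only; the helper names `average_states`, `communication_round`, and `train_fn` are made up for illustration, and `clients` is assumed to be a list of `LGClientModel` instances from the sketch above:

```python
import copy
import torch


def average_states(state_dicts):
    """Element-wise average of the uploaded global-layer weights."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg


def communication_round(clients, train_fn):
    """One round: local training, then averaging of the global layers only.

    `train_fn(model)` runs a few epochs of local SGD on that client's own
    data (both local and global layers are updated locally; only the global
    layers are synchronized across clients).
    """
    uploaded = []
    for model in clients:
        train_fn(model)                        # local SGD on private data
        uploaded.append(model.global_state())  # upload the deep layers only
    new_global = average_states(uploaded)
    for model in clients:
        model.global_layers.load_state_dict(new_global)  # broadcast back
```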