Please take a look at this line (FedProx/flearn/trainers/fedbase.py, line 17 at commit d2a4501). It seems that all clients use the same ML model for local training. In other words, there is no local model, only a global model that is trained sequentially on each client.
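For context, here is a minimal, hypothetical illustration of the pattern I am describing (the Model and Client classes below are stand-ins, not the repository's actual code): because every client wraps the same model object, one client's trained weights are what the next client sees unless the shared model is explicitly reset.

# Hypothetical illustration of the shared-model pattern (not FedProx's actual classes).
class Model:
    def __init__(self):
        self.params = [0.0]                      # stand-in for the trainable weights

class Client:
    def __init__(self, model):
        self.model = model                       # every client stores the SAME model reference

    def get_params(self):
        return self.model.params

    def solve_inner(self):
        # "local training": mutates the shared model in place
        self.model.params = [p + 1.0 for p in self.model.params]

shared_model = Model()
clients = [Client(shared_model) for _ in range(3)]

clients[0].solve_inner()
# Before any reset, client 1 already sees the weights that client 0 produced:
print(clients[1].get_params())                   # prints [1.0]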
This can be verified at runtime with the following code snippet (I tested it in flearn/trainers/fedavg.py).
csolns = []  # buffer for receiving client solutions
lastc = None
for idx, c in enumerate(active_clients.tolist()):  # simply drop the slow devices
    print(i, idx)  # i is the round index from the surrounding training loop
    if lastc is not None:
        for j in range(len(lastc)):
            print('Are the parameters of the current client (before training) the same as the parameters of the previous client (after training)?: %s' % (c.get_params()[j] == lastc[j]).all())
        from time import sleep
        sleep(1)
    else:
        print('The first client.')
    # communicate the latest model
    c.set_params(self.latest_model)
    # solve minimization locally
    soln, stats = c.solve_inner(num_epochs=self.num_epochs, batch_size=self.batch_size)
    lastc = c.get_params()
    # gather solutions from client
    csolns.append(soln)
    # track communication cost
    self.metrics.update(rnd=i, cid=c.id, stats=stats)

# update models
self.latest_model = self.aggregate(csolns)
In my opinion, this is not the expected behavior for federated learning.
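For comparison, here is a minimal sketch of the behavior I would expect (assuming each client trains on an independent copy of the global weights; the names below are illustrative, not FedProx's API): local updates happen on per-client copies, and only the aggregated result feeds back into the global model.

import copy

# Hedged sketch of per-client local models: every client trains on its OWN copy
# of the global weights, and only the server-side aggregate is shared.
latest_model = [0.0, 0.0]                        # stand-in for the global weights
csolns = []

for c in range(3):                               # three hypothetical clients
    local_model = copy.deepcopy(latest_model)    # independent per-client copy
    local_model = [w + 0.1 * (c + 1) for w in local_model]  # "local training"
    csolns.append(local_model)

# server-side aggregation (plain FedAvg-style averaging)
latest_model = [sum(ws) / len(ws) for ws in zip(*csolns)]
print(latest_model)                              # prints roughly [0.2, 0.2]

Whether the copies live on the client objects or on the server side is an implementation detail; the point is only that one client's local updates should not silently become another client's starting state.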