Investigation of the performance of the Forward-Forward algorithm in a federated learning context.
- LAYER_EPOCHS: training epochs for a single layer
- BIAS_THRESHOLD: goodness threshold used by each Forward-Forward layer to separate positive from negative samples
- LEARN_RATE: initial learning rate for the Adam optimizer
- WEIGHT_DECAY: weight decay value
- MODEL_UNITS: list with the number of units in each layer
- MODEL_EPOCHS: training epochs for the client model
- NUM_ROUNDS: number of communication rounds
- NUM_CLIENTS: number of clients in the network
- C_RATE: fraction of clients selected to take part in each round
- NUM_REPEAT: training dataset repetitions
- BATCH_SIZE: local minibatch size for client update
- SHUFFLE_BUF: buffer size used to shuffle the dataset samples
- PREFETCH_BUF: size of prefetch buffer
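
The snippet below is a minimal sketch, assuming a Python/TensorFlow setup, of how these parameters might be declared and how the dataset-related ones (NUM_REPEAT, SHUFFLE_BUF, BATCH_SIZE, PREFETCH_BUF) could drive a `tf.data` client pipeline. All values and the `preprocess` helper are illustrative placeholders, not the repository's actual configuration.

```python
import tensorflow as tf

# Illustrative values only -- the real configuration lives in the repository.
LAYER_EPOCHS = 10         # training epochs for a single Forward-Forward layer
BIAS_THRESHOLD = 2.0      # goodness threshold for positive/negative samples
LEARN_RATE = 0.03         # initial learning rate for the Adam optimizer
WEIGHT_DECAY = 3e-4       # weight decay value
MODEL_UNITS = [500, 500]  # number of units in each layer
MODEL_EPOCHS = 5          # training epochs for the client model
NUM_ROUNDS = 20           # number of communication rounds
NUM_CLIENTS = 100         # number of clients in the network
C_RATE = 0.1              # fraction of clients selected each round
NUM_REPEAT = 2            # training dataset repetitions
BATCH_SIZE = 64           # local minibatch size for the client update
SHUFFLE_BUF = 1000        # shuffle buffer size
PREFETCH_BUF = 10         # size of the prefetch buffer

# Clients participating in one round, derived from C_RATE and NUM_CLIENTS.
CLIENTS_PER_ROUND = max(1, int(C_RATE * NUM_CLIENTS))

def preprocess(dataset: tf.data.Dataset) -> tf.data.Dataset:
    """Client-side input pipeline driven by the settings above (sketch)."""
    return (dataset
            .repeat(NUM_REPEAT)       # NUM_REPEAT passes over the local data
            .shuffle(SHUFFLE_BUF)     # shuffle with a SHUFFLE_BUF-sized buffer
            .batch(BATCH_SIZE)        # minibatches for the local client update
            .prefetch(PREFETCH_BUF))  # overlap preprocessing with training
```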