Implementation of Custom Algorithms #99
-
Hello there! Just wondering if there are any tutorials/documentation regarding the implementation of custom RL algorithms? I do find the logging of the training metrics in Weights & Biases to be very nice, although I do still need … Thank you very much in advance for your time and help!
Replies: 1 comment
-
Hi! There are a bunch of tutorials out there. If you just want to implement your own training algorithm but still use the readily implemented `tmrl` Gym environment for TrackMania, I recommend looking into the competition tutorial script (it is a working, debugged script in which the comments explain how to customize the TrackMania pipeline using the `tmrl` API). Note: it requires a good mastery of deep RL. If you have never implemented deep RL algorithms before, I would start with simple tutorials about how to implement, e.g., Q-learning in the "frozen lake" environment, DQN in Atari, or REINFORCE/DDPG/SAC/PPO in the classic MuJoCo continuous control tasks.

About the replays, there is an option for saving them in TrackMania, yes :) You can enable that in …
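As a self-contained starting point for the kind of "toy" Q-learning exercise mentioned above (this is not `tmrl` code): a minimal sketch of tabular Q-learning on a tiny corridor world. The environment, constants, and hyperparameters here are all illustrative assumptions, a stand-in for something like "frozen lake" rather than anything from the `tmrl` codebase:

```python
import random

# Hypothetical toy environment: a deterministic corridor with
# states 0..4 and the goal at state 4 (a stand-in for "frozen lake").
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right

def step(state, action):
    """Deterministic transition; reward 1.0 only when the goal is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        for _ in range(100):  # step cap so unlucky episodes still terminate
            # epsilon-greedy action selection (ties broken at random)
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))
            else:
                best = max(q[state])
                a = rng.choice([i for i, v in enumerate(q[state]) if v == best])
            next_state, reward, done = step(state, ACTIONS[a])
            # Q-learning update: bootstrap from the greedy next-state value,
            # with no bootstrapping past a terminal state
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][a] += alpha * (target - q[state][a])
            state = next_state
            if done:
                break
    return q

q = train()
# Greedy action per non-goal state (index 1 = move right toward the goal)
print([max(range(len(ACTIONS)), key=lambda i: q[s][i]) for s in range(GOAL)])
```

Once something like this makes sense, swapping the corridor for a real Gym environment and the table for a neural network is essentially the path toward DQN and the actor-critic methods listed above.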