Reinforcement Learning

Reincarnating Reinforcement Learning (RRL) is based on the premise of reusing prior computational work (e.g., previously learned agents) when training new agents or improving existing agents, even in the same environment. In RRL, new agents need not be trained from scratch, except for initial forays into new problems.

RRL as an alternative research workflow. Imagine a researcher who has trained an agent A1 for some time, but now wants to experiment with better architectures or algorithms. RRL provides the option of transferring the existing agent A1's learned behavior to a new agent and training that agent further, or simply fine-tuning A1 itself.
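To make the workflow concrete, below is a minimal sketch of one simple way to reuse A1 when switching architectures: distill the old agent's action distribution into a new, larger network before continuing standard RL training. This is an illustrative PyTorch example, not the post's exact method; the network sizes, the KL-based distillation loss, and the random tensors standing in for A1's replay data are all assumptions made for the sketch.

```python
# A minimal sketch of "reincarnating" an agent: distill the action distribution
# of an existing policy (A1, the teacher) into a new agent with a different
# architecture (A2, the student), then continue RL training on A2.
# Dimensions, data, and loss are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 8, 4

class SmallPolicy(nn.Module):          # stands in for the old agent A1
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, obs):
        return self.net(obs)           # action logits

class BigPolicy(nn.Module):            # the new architecture A2 we want to try
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, N_ACTIONS))
    def forward(self, obs):
        return self.net(obs)

teacher = SmallPolicy()                # in practice: load A1's trained weights
teacher.eval()
student = BigPolicy()
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

# Distillation phase: match the student's action distribution to the teacher's
# on states drawn from A1's experience (random tensors stand in here).
for step in range(1000):
    obs = torch.randn(128, OBS_DIM)    # placeholder for a replay-buffer batch
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(obs), dim=-1)
    student_log_probs = F.log_softmax(student(obs), dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After this kickstart, A2 would be trained further with a standard RL
# algorithm, which is where RRL saves the cost of learning from scratch.
```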

source: https://ai.googleblog.com/2022/11/beyond-tabula-rasa-reincarnating.html