RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch
Longbo Huang
Institute for Interdisciplinary Information Sciences, Tsinghua University
RL China 2023

Deep Reinforcement Learning (DRL)

High training cost:
- AlphaGo Zero: 4 TPUs, 40 days
- OpenAI Five: 256 GPUs, 180 days
- Rainbow DQN: a single GPU, one week
DRL training is costly, which limits its use in resource-limited scenarios: roughly 10^6 W for large-scale training versus about 20 W (a ~5000x gap).

Prior works on lightweight DL:

Paradigm: Iterative pruning
  Prior works in SL: Han et al. (2015; 2016); Srinivas et al. (2017); Zhu & Gupta (2018); Hu et al. (2016); Guo et al. (2016); Dong et al. (2017); Molchanov et al. (2019b); Louizos et al. (2018); Tartaglione et al. (2018); Molchanov et al. (2017); Schwarz et al. (2021)
  Prior works in RL: Livne & Cohen (2020); Yu et al. (2020); Lee et al. (2021); Vischer et al. (2022)
  Saves inference cost: Yes. Saves training cost: No. Sparsity in RL: 70%-99%.

Paradigm: Dynamic sparse training
  Prior works in SL: Bellec et al. (2017); Mocanu et al. (2018); Mostafa & Wang (2019); Dettmers & Zettlemoyer (2019); Evci et al. (2020)
  Prior works in RL: Sokar et al. (2021); Graesser et al. (2022)
  Saves inference cost: Yes. Saves training cost: Yes. Sparsity in RL: 50%-90%.

Iterative pruning: train a dense network, then prune it. Much fewer parameters with matchable performance!

Dynamic sparse training: the network stays sparse throughout training while its topology evolves, exploring new connections based on gradients.

Can an efficient DRL agent be trained from scratch?
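The two paradigms in the table can be sketched in a few lines of NumPy. This is a toy illustration on a single weight matrix, not the speaker's actual code: `magnitude_prune` is standard magnitude pruning, and `drop_and_grow` is a SET/RigL-style update of the kind dynamic sparse training methods use (drop the smallest active weights, grow new connections where the gradient magnitude is largest).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # dense "layer" weights
sparsity = 0.75               # fraction of weights to zero out

def magnitude_prune(W, sparsity):
    """Iterative-pruning step: zero the smallest-magnitude weights."""
    k = int(W.size * sparsity)                     # number of weights to drop
    thresh = np.sort(np.abs(W), axis=None)[k - 1]  # k-th smallest magnitude
    mask = (np.abs(W) > thresh).astype(W.dtype)
    return W * mask, mask

W_pruned, mask = magnitude_prune(W, sparsity)

def drop_and_grow(W, mask, grad, n_update):
    """Dynamic-sparse-training step: keep sparsity fixed, evolve topology."""
    # Drop: the n_update active weights with the smallest magnitude.
    w_mag = np.where(mask == 1, np.abs(W), np.inf)
    drop_idx = np.argsort(w_mag, axis=None)[:n_update]
    mask.flat[drop_idx] = 0
    W.flat[drop_idx] = 0.0
    # Grow: n_update inactive positions with the largest gradient magnitude.
    g_mag = np.where(mask == 0, np.abs(grad), -np.inf)
    grow_idx = np.argsort(g_mag, axis=None)[::-1][:n_update]
    mask.flat[grow_idx] = 1        # new connections start from weight zero
    return W, mask

grad = rng.normal(size=W.shape)    # stand-in for a real gradient
W2, mask2 = drop_and_grow(W_pruned.copy(), mask.copy(), grad, n_update=4)
# The sparsity level is preserved: 16 of 64 weights stay active.
```

Note the difference the table highlights: pruning only yields a small network after a full dense training run (saving inference cost), while drop-and-grow keeps the network sparse at every step, so training cost is reduced as well.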