Deep Reinforcement Learning with Phasic Policy Gradient with Sample Reuse
By: LI Hailiang, WANG Li
| Format: | Article |
|---|---|
| Published: | Editorial Office of Journal of Taiyuan University of Technology 2024-07-01 |
Description
Purposes The phasic policy gradient with sample reuse (SR-PPG) algorithm is proposed to address the problems of non-reuse of samples and low sample utilization in policy-based deep reinforcement learning algorithms. Methods In the proposed algorithm, offline data are introduced on the basis of the phasic policy gradient (PPG), thus reducing the time cost of training and enabling the model to converge quickly. In this work, SR-PPG combines the stability advantages of theoretically supported on-policy algorithms with the sample efficiency of off-policy algorithms to develop policy improvement guarantees applicable to off-policy settings and to link these bounds to the clipping mechanism used by PPG. Findings A series of theoretical and experimental demonstrations show that this algorithm provides better performance by effectively balancing the competing goals of stability and sample efficiency.