Journal of Internet Computing and Services
    ISSN 2287-1136 (Online) / ISSN 1598-0170 (Print)
    https://jics.or.kr/

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity


Chayoung Kim, Seohee Park, Woosik Lee, Journal of Internet Computing and Services, Vol. 21, No. 5, pp. 1-7, Oct. 2020
DOI: 10.7472/jksii.2020.21.5.1
Keywords: Deep Q-Network, Temporal Difference, Monte Carlo, Reinforcement Learning, Variance and Bias Balance

Abstract

Deep neural networks (DNN), used as function approximators in reinforcement learning (RL), can in theory produce realistic results. In empirical benchmarks, temporal difference (TD) learning generally shows better results than Monte-Carlo (MC) learning. However, some previous works show that MC is better than TD when rewards are very sparse or delayed. Other recent research indicates that when the agent's observation of the environment is partial, as in complex control tasks, MC prediction is superior to TD-based methods. Most of these environments can be handled with 5-step or 20-step Q-learning, where experiments proceed without long roll-outs so as to alleviate performance degradation. In other words, for networks subject to noise, regardless of controlled roll-outs, it is better to learn with MC, which is more robust to noisy rewards than TD, or with a method nearly identical to MC. These studies break with the conventional view that TD is better than MC, and suggest that combining MC and TD can outperform either method alone. Therefore, building on these results, this study exploits a random balance between TD and MC in off-policy RL, without the complicated reward-based formulas used in those previous studies. Comparing a DQN using the random MC/TD mixture against the well-known DQN using only TD-based learning, we demonstrate through experiments in OpenAI Gym that even well-performing TD learning benefits from the mixture of TD and MC.
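The abstract does not spell out the update rule, so the following is only a minimal sketch of how a random MC/TD target mixture could look in a DQN-style learner. The names (mixed_targets, MIX_PROB, GAMMA) and the per-step random choice are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a random MC/TD target mixture for a DQN-style
# learner; names and the per-step mixing rule are assumptions, not the
# paper's actual code.
import numpy as np

GAMMA = 0.99      # discount factor (assumed)
MIX_PROB = 0.5    # probability of choosing the MC target over TD (assumed)

def mixed_targets(rewards, next_q_max, dones):
    """Compute per-step learning targets for one episode.

    rewards    : array of rewards r_t for t = 0..T-1
    next_q_max : array of max_a Q(s_{t+1}, a) from the target network
    dones      : array of episode-termination flags (1.0 at terminal steps)
    Each step's target is, at random, either the one-step TD target
    r_t + gamma * max_a Q(s_{t+1}, a) or the full Monte-Carlo return G_t.
    """
    rewards = np.asarray(rewards, dtype=float)
    next_q_max = np.asarray(next_q_max, dtype=float)
    dones = np.asarray(dones, dtype=float)
    T = len(rewards)

    # Monte-Carlo returns, computed backwards: G_t = r_t + gamma * G_{t+1}
    mc = np.zeros(T)
    g = 0.0
    for t in reversed(range(T)):
        g = rewards[t] + GAMMA * g * (1.0 - dones[t])
        mc[t] = g

    # One-step TD targets, bootstrapped from the target network
    td = rewards + GAMMA * next_q_max * (1.0 - dones)

    # Random balance: per step, pick the MC return with probability
    # MIX_PROB, otherwise the TD target
    choose_mc = np.random.rand(T) < MIX_PROB
    return np.where(choose_mc, mc, td)

In a DQN training loop, these mixed targets would simply replace the usual one-step TD target in the squared-error loss against Q(s_t, a_t); no reward-dependent weighting formula is needed, which matches the abstract's claim of avoiding complicated formulas.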




Cite this article
[APA Style]
Kim, C., Park, S., & Lee, W. (2020). Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity. Journal of Internet Computing and Services, 21(5), 1-7. DOI: 10.7472/jksii.2020.21.5.1.

[IEEE Style]
C. Kim, S. Park, W. Lee, "Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity," Journal of Internet Computing and Services, vol. 21, no. 5, pp. 1-7, 2020. DOI: 10.7472/jksii.2020.21.5.1.

[ACM Style]
Chayoung Kim, Seohee Park, and Woosik Lee. 2020. Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity. Journal of Internet Computing and Services, 21, 5, (2020), 1-7. DOI: 10.7472/jksii.2020.21.5.1.