Publication Type : Conference Paper
Publisher : IEEE
Source : 2024 International Conference on Emerging Smart Computing and Informatics (ESCI)
Url : https://doi.org/10.1109/esci59607.2024.10497405
Campus : Bengaluru
School : School of Computing
Year : 2024
Abstract : This study presents a comparative analysis of the Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) reinforcement learning algorithms in the context of stock trading, using historical price data from the S&P 500 index. The primary objective is to evaluate the effectiveness of these RL algorithms in making trading decisions and to compare their financial performance against a passive buy-and-hold strategy. The assessment centers on key metrics, with particular emphasis on buy, hold, and sell signals as the essential triggers for trading actions. Results indicate that DQN outperforms DDPG, incurring a smaller total loss over the evaluation period. This suggests that DQN may be better at limiting financial losses and making effective trading decisions than DDPG, offering practical guidance on algorithm selection for dynamic trading environments. The findings also underscore the importance of buy, hold, and sell signals as key elements in assessing the performance of RL algorithms in stock trading scenarios. © 2024 IEEE.
Cite this Research Publication : Nithin Kodurupaka, Basavadeepthi H M, Shiva Teja Pecheti, Amudha J, "Deep Reinforcement Learning in Stock Trading: Evaluating DDPG and DQN Strategies," 2024 International Conference on Emerging Smart Computing and Informatics (ESCI), IEEE, 2024, https://doi.org/10.1109/esci59607.2024.10497405