Building Energy Consumption Prediction Using a Deep-Forest-Based DQN Method
by: Qiming Fu, Ke Li, Jianping Chen, Junqi Wang, You Lu, Yunzhe Wang
| Format: | Article |
|---|---|
| Published: | MDPI AG 2022-01-01 |
Description
When deep reinforcement learning (DRL) methods are applied to energy consumption prediction, performance is usually improved at the cost of increased computation time. Specifically, the deep deterministic policy gradient (DDPG) method can achieve higher prediction accuracy than the deep Q-network (DQN), but it requires more computing resources and computation time. In this paper, we propose a deep-forest-based DQN (DF–DQN) method, which can obtain higher prediction accuracy than DDPG while taking less computation time than DQN. Firstly, the original action space is replaced with a shrunken action space to efficiently find the optimal action. Secondly, deep forest (DF) is introduced to map the shrunken action space to a single sub-action space. This process determines the specific meaning of each action in the shrunken action space, ensuring the convergence of DF–DQN. Thirdly, the state class probabilities obtained by DF are employed to construct new states, accounting for the probabilistic nature of shrinking the original action space. The experimental results show that the DF–DQN method with 15 state classes outperforms the other methods and takes less computation time than the DRL methods. Compared to the DDPG method, MAE, MAPE, and RMSE are decreased by 5.5%, 7.3%, and 8.9%, respectively, and R² is increased by 0.3%.
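The three steps summarized in the abstract can be sketched in code. This is a hypothetical illustration, not the authors' implementation: the class counts, the shrunken-action size, and the toy stand-in for the deep forest classifier are all assumptions made here for demonstration; in the paper, deep forest supplies the real class probabilities.

```python
# Hypothetical sketch of the DF-DQN idea described in the abstract.
# Assumptions (not from the paper's code): a toy stand-in classifier replaces
# deep forest, the prediction range [0, 1500] and SHRUNK_ACTIONS are invented.

N_CLASSES = 15        # number of state classes (the paper's best setting)
SHRUNK_ACTIONS = 5    # actions per class after shrinking (assumed value)


def class_probabilities(state):
    """Stand-in for deep forest: return a probability vector over state classes.

    In the paper, deep forest (DF) produces these probabilities; here we use
    a fixed toy distribution keyed on the state features, for illustration only.
    """
    scores = [abs((sum(state) * (k + 1)) % 7) + 1 for k in range(N_CLASSES)]
    total = sum(scores)
    return [s / total for s in scores]


def augment_state(state):
    """Step 3: new state = original features + DF class probabilities."""
    return list(state) + class_probabilities(state)


def shrunken_action_to_value(cls, action, lo=0.0, hi=1500.0):
    """Steps 1-2, simplified: the state class selects a sub-interval of the
    original prediction range, and the shrunken action picks a point in it.
    This makes each shrunken action's concrete meaning depend on the class."""
    width = (hi - lo) / N_CLASSES          # width of one class's sub-interval
    sub_lo = lo + cls * width              # lower bound of the chosen class
    return sub_lo + (action + 0.5) * width / SHRUNK_ACTIONS


# Example flow: classify the state, augment it, decode a shrunken action.
state = [0.3, 0.7, 1.2]
probs = class_probabilities(state)
cls = max(range(N_CLASSES), key=lambda k: probs[k])  # most likely class
aug_state = augment_state(state)                     # fed to the DQN
value = shrunken_action_to_value(cls, action=2)      # concrete prediction
```

The design point the abstract emphasizes is visible here: the DQN only chooses among `SHRUNK_ACTIONS` actions instead of the full original action space, which is what reduces computation time relative to a plain DQN.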