Incorporating Online Learning Into MCTS-Based Intention Progression
by: Chengcheng Song, Yuan Yao, Sixian Chan
Format: Article
Published: IEEE, 2024-01-01
Description
Agents have been applied in a wide variety of fields, including power systems and spacecraft. Belief-Desire-Intention (BDI) agents, one of the most widely used and researched architectures, have the advantage of being able to pursue multiple goals in parallel. Deciding “what to do” next at each deliberation cycle is therefore critical for BDI agents; this is known as the intention progression problem (IPP). Most existing approaches to the IPP overlook runtime historical data, which limits the adaptability and decision-making capability of agents. In this paper, we propose to incorporate online learning into the current state-of-the-art intention progression approach $S_{A}$ to overcome this limitation. Doing so not only prevents $S_{A}$ from spending computational resources on ineffective and inefficient simulations, but also significantly improves the execution efficiency of the agent; the improvement is especially pronounced in large-scale problem domains, where it substantially enhances the planning capability of agents. In particular, we propose the $SA_{Q}$ and $SA_{L}$ schedulers, both of which learn, from historical simulation data at run time, how to generate “reasonable” rollouts during the simulation phase of MCTS. We compare the performance of our approach with the state-of-the-art $S_{A}$ in a range of scenarios of increasing difficulty.
The results demonstrate that our approaches outperform $S_{A}$ in both the number of goals achieved and the computational overhead required.
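The abstract's core idea is to replace purely random MCTS rollouts with rollouts informed by the outcomes of earlier simulations. The sketch below is a hypothetical illustration of that general pattern, not the paper's $SA_{Q}$ or $SA_{L}$ algorithm: a Q-table, updated from the return of each finished simulation, steers later rollouts toward actions that previously led to achieved goals (all names here, such as `QRolloutPolicy`, are invented for illustration).

```python
import random
from collections import defaultdict


class QRolloutPolicy:
    """Learned rollout policy for the MCTS simulation phase.

    Actions are chosen epsilon-greedily from a Q-table that is
    updated online with the returns of past simulations, so later
    rollouts favour historically successful choices.
    """

    def __init__(self, epsilon=0.1, alpha=0.5):
        self.q = defaultdict(float)  # (state, action) -> estimated return
        self.epsilon = epsilon       # exploration rate during rollouts
        self.alpha = alpha           # learning rate for Q-value updates

    def choose(self, state, actions):
        # Mostly exploit what past simulations have learned;
        # occasionally explore to keep estimates from going stale.
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, trajectory, reward):
        # After a simulation finishes, fold its outcome back into the
        # Q-table so the next rollout is biased toward better actions.
        for state, action in trajectory:
            key = (state, action)
            self.q[key] += self.alpha * (reward - self.q[key])
```

For example, after observing one successful and one failed simulation from the same state, a greedy policy (`epsilon=0`) would pick the action that succeeded:

```python
policy = QRolloutPolicy(epsilon=0.0)
policy.update([("s0", "pursue_goal_1")], reward=1.0)  # goal achieved
policy.update([("s0", "pursue_goal_2")], reward=0.0)  # goal failed
policy.choose("s0", ["pursue_goal_1", "pursue_goal_2"])  # "pursue_goal_1"
```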