The article “Reward Criteria Impact on the Performance of Reinforcement Learning Agent for Autonomous Navigation” has been accepted for publication in Elsevier’s Applied Soft Computing journal.

A. Dayal, L. R. Cenkeramaddi, and A. Jha, “Reward Criteria Impact on the Performance of Reinforcement Learning Agent for Autonomous Navigation,” Applied Soft Computing, Elsevier, 2022 (accepted for publication).

Keywords: Deep reinforcement learning, Reward criteria, Autonomous navigation, Machine learning and artificial intelligence

Abstract: In reinforcement learning, an agent takes an action at every time step (following a policy) in an environment to maximize the expected cumulative reward. The shaping of the reward function therefore plays a crucial role in an agent’s learning, and designing an optimal reward function is not a trivial task. In this article, we propose a reward criterion from which we develop different reward functions. The chosen criterion is based on the percentage of positive and negative rewards received by an agent, and it gives rise to three classes: ‘Balanced Class,’ ‘Skewed Positive Class,’ and ‘Skewed Negative Class.’ We train a Deep Q-Network agent on a point-goal navigation task using the different reward classes and compare their performance with a benchmark class. In our experiments, the skewed negative class outperforms the benchmark class by achieving much lower variance; on the other hand, the benchmark class converges faster than the skewed negative class.
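To make the reward criterion concrete, the sketch below shows how three reward classes could differ only in the share of positive versus negative reward available to the agent at each step of a point-goal navigation task. The function names, event flags, scale factors, and specific reward values are illustrative assumptions for this post, not the paper’s actual reward functions.

```python
"""Hypothetical reward-shaping sketch for a point-goal navigation step.

Assumption (not taken from the paper): the per-step reward depends on whether
the agent moved closer to the goal, reached it, or collided, and the three
classes differ only in how reward mass is split between positive and negative
outcomes.
"""

def step_reward(moved_closer: bool, reached_goal: bool, collided: bool,
                pos_scale: float, neg_scale: float) -> float:
    """Generic shaped reward; pos_scale / neg_scale set the positive/negative split."""
    if reached_goal:
        return 10.0 * pos_scale          # terminal success bonus
    if collided:
        return -10.0 * neg_scale         # terminal failure penalty
    # Dense shaping: small reward for progress toward the goal, small penalty otherwise.
    return 0.1 * pos_scale if moved_closer else -0.1 * neg_scale


# Illustrative classes under the reward criterion (share of positive vs. negative reward).
REWARD_CLASSES = {
    "balanced":        dict(pos_scale=1.0, neg_scale=1.0),   # roughly even split
    "skewed_positive": dict(pos_scale=1.5, neg_scale=0.5),   # more positive reward mass
    "skewed_negative": dict(pos_scale=0.5, neg_scale=1.5),   # more negative reward mass
}

if __name__ == "__main__":
    # Compare the reward each class assigns to the same progress step.
    for name, scales in REWARD_CLASSES.items():
        r = step_reward(moved_closer=True, reached_goal=False, collided=False, **scales)
        print(f"{name:>15}: progress-step reward = {r:+.2f}")
```

In a training loop, the chosen class’s scales would simply be plugged into the environment’s reward computation, so that the DQN agent is trained identically across classes except for the reward split.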
