Upper Bounds on the Performance of Discretisation in Reinforcement Learning

Michael Robin Mitchley

Abstract


Reinforcement learning is a machine learning framework in which an agent learns to perform a task by maximising the total reward it receives for selecting actions in each state. The policy mapping states to actions is represented either explicitly, or implicitly through a value function. It is common in reinforcement learning to discretise a continuous state space using tile coding or binary features. We prove an upper bound on the performance of discretisation, whether it is used for direct policy representation or for value function approximation.
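To make the kind of discretisation studied here concrete, the following is a minimal sketch of tile coding over a one-dimensional continuous state, used for linear value function approximation with a TD(0) update. The tiling parameters, state range, and learning rate are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Tile coding for a 1-D state in [0, 1] (illustrative parameters, not from
# the paper): each of N_TILINGS slightly offset grids contributes exactly
# one active tile, so a state maps to a sparse binary feature vector.
N_TILINGS = 8
TILES_PER_TILING = 10

def active_tiles(state):
    """Return the index of the active tile in each offset tiling."""
    indices = []
    for t in range(N_TILINGS):
        offset = t / (N_TILINGS * TILES_PER_TILING)  # shift each grid slightly
        tile = int((state + offset) * TILES_PER_TILING)
        tile = min(tile, TILES_PER_TILING)           # clamp at the upper boundary
        indices.append(t * (TILES_PER_TILING + 1) + tile)
    return indices

# Linear value function approximation: V(s) is the sum of the weights of
# the active tiles, and a TD(0) update touches only those weights.
weights = np.zeros(N_TILINGS * (TILES_PER_TILING + 1))

def value(state):
    return sum(weights[i] for i in active_tiles(state))

def td0_update(state, reward, next_state, alpha=0.1, gamma=0.99):
    """One TD(0) step; the step size is divided across the active tiles."""
    target = reward + gamma * value(next_state)
    error = target - value(state)
    for i in active_tiles(state):
        weights[i] += (alpha / N_TILINGS) * error
```

Because every state activates exactly one tile per tiling, the agent's representation is the kind of binary feature vector the abstract refers to, and the coarseness of the grid is what the performance bound constrains.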

Keywords


Reinforcement learning; Tile coding; Performance bounds; Average case analysis



DOI: http://dx.doi.org/10.18489/sacj.v0i57.284

Copyright (c) 2015 Michael Robin Mitchley

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.