f-IRL: Inverse Reinforcement Learning via State Marginal Matching
Gupta, Ni, Sikchi, Wang, Eysenbach and Lee
Conference on Robot Learning (CoRL), 2020
Imitation learning is well-suited for robotic tasks where it is difficult to directly program the behavior or specify a cost for optimal control. In this work, we propose a method for learning the reward function (and the corresponding policy) to match the expert state density. Our main result is the analytic gradient of any f-divergence between the agent's and the expert's state distributions with respect to the reward parameters. Based on this gradient, we present an algorithm, f-IRL. We show that f-IRL can learn behaviors from a hand-designed target state density or implicitly from expert observations. Our method outperforms adversarial imitation learning methods in terms of sample efficiency and the number of expert trajectories required on IRL benchmarks. Moreover, the recovered reward function can be used to solve downstream tasks efficiently, and we empirically show its utility on hard-to-explore tasks and for transferring behaviors across changes in dynamics.
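Roughly speaking, the analytic gradient can be estimated as a covariance, over agent trajectories, between an f-divergence-dependent function of the expert-to-agent state-density ratio and the gradient of the summed rewards; the exact weighting and scaling depend on the chosen f-divergence and the entropy temperature (see the paper for the precise statement). The sketch below shows one way such a reward update could be implemented. It is not the authors' released code: the reward network, the precomputed weighting values `h_values`, and `lr_scale` are illustrative placeholders assumed to be supplied by the caller.

```python
# A rough sketch (not the authors' code) of a covariance-style reward update
# in the spirit of f-IRL. Density-ratio weights h_values = h_f(rho_E / rho_theta)
# at each visited state are assumed to be estimated elsewhere; their exact form
# depends on the chosen f-divergence. Sign and 1/(alpha*T) scaling are folded
# into lr_scale and depend on convention.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """State-only reward r_theta(s), as used for state-marginal matching."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states).squeeze(-1)


def reward_gradient_step(reward_net: RewardNet,
                         optimizer: torch.optim.Optimizer,
                         agent_states: torch.Tensor,  # (N_traj, T, state_dim)
                         h_values: torch.Tensor,      # (N_traj, T)
                         lr_scale: float = 1.0) -> None:
    """One reward update using a covariance-style surrogate:
    grad ~ cov_traj( sum_t h_f(ratio_t), sum_t grad_theta r_theta(s_t) )."""
    traj_h = h_values.sum(dim=1)                  # (N_traj,) per-trajectory sum of weights
    traj_r = reward_net(agent_states).sum(dim=1)  # (N_traj,) per-trajectory sum of rewards
    centered_h = traj_h - traj_h.mean()
    # Scalar surrogate whose autograd gradient w.r.t. theta equals the
    # empirical covariance estimate above (centering one factor suffices).
    surrogate = lr_scale * (centered_h.detach() * traj_r).mean()
    optimizer.zero_grad()
    surrogate.backward()
    optimizer.step()
```

In the full method, each such reward update would be interleaved with MaxEnt RL policy updates under the current reward and with re-estimating the state-density ratio from fresh agent rollouts.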