Diffusion-Reward Adversarial Imitation Learning

1National Taiwan University, 2National Yang Ming Chiao Tung University, 3NVIDIA

Abstract

Imitation learning aims to learn a policy by observing expert demonstrations without access to reward signals from the environment. Generative adversarial imitation learning (GAIL) formulates imitation learning as adversarial learning, employing a generator policy that learns to imitate expert behaviors and a discriminator that learns to distinguish expert demonstrations from agent trajectories. Despite its encouraging results, GAIL training is often brittle and unstable. Inspired by the recent dominance of diffusion models in generative modeling, we propose Diffusion-Reward Adversarial Imitation Learning (DRAIL), which integrates a diffusion model into GAIL to yield more robust and smoother rewards for policy learning. Specifically, we propose a diffusion discriminative classifier to construct an enhanced discriminator and design diffusion rewards based on the classifier's output for policy learning. Extensive experiments on navigation, manipulation, and locomotion tasks verify DRAIL's effectiveness compared to prior imitation learning methods. Additional experimental results demonstrate the generalizability and data efficiency of DRAIL. Visualizations of the reward functions learned by GAIL and DRAIL suggest that DRAIL produces more robust and smoother rewards.
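For reference, GAIL's adversarial formulation can be written as the following minimax objective (standard form, stated with the convention used on this page that the discriminator outputs values near \(1\) on expert data; the entropy regularizer on the policy is omitted):

\[
\min_{\pi_\theta} \max_{D} \;
\mathbb{E}_{(\mathbf{s}, \mathbf{a}) \sim \tau_E}\big[\log D(\mathbf{s}, \mathbf{a})\big]
+ \mathbb{E}_{(\mathbf{s}, \mathbf{a}) \sim \pi_\theta}\big[\log\big(1 - D(\mathbf{s}, \mathbf{a})\big)\big]
\]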

Framework Overview

drail model framework

Our proposed framework, DRAIL, incorporates a diffusion model into GAIL. (a) Our proposed diffusion discriminative classifier \(D_\phi\) learns to distinguish expert data \((\mathbf{s}_E, \mathbf{a}_E) \sim \tau_E\) from agent data \((\mathbf{s}_\pi, \mathbf{a}_\pi) \sim \tau_i\) using a diffusion model. \(D_\phi\) is trained to predict a value closer to \(1\) when the input state-action pair is sampled from the expert demonstrations and a value closer to \(0\) otherwise. (b) The policy \(\pi_\theta\) learns to maximize the diffusion reward \(r_\phi\), computed from the output of \(D_\phi\) given the policy's state-action pairs as input. The more closely the policy resembles expert behaviors, the higher the reward it obtains.
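As a rough illustration, the sketch below shows how a reward of this kind could be computed from a discriminator's output and fed to a policy-learning algorithm. It is a minimal sketch, not the paper's implementation: `D_phi` is a placeholder for a trained (diffusion) discriminative classifier, and the `-log(1 - D)` transform is one common adversarial-imitation choice; the exact transform DRAIL uses may differ.

```python
import torch


def diffusion_reward(D_phi, states, actions, eps=1e-8):
    """GAIL-style reward from a classifier that outputs ~1 for expert-like pairs.

    D_phi is assumed to map batched (state, action) pairs to probabilities
    in (0, 1), closer to 1 for expert-like behavior.
    """
    with torch.no_grad():
        d = D_phi(states, actions).clamp(eps, 1.0 - eps)  # avoid log(0)
    return -torch.log(1.0 - d)  # larger when D_phi judges the pair expert-like
```

These rewards would then stand in for environment rewards when updating \(\pi_\theta\) with an on-policy RL algorithm such as PPO.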


Environment Overview

(a) Maze: A point-mass agent (green) in a 2D maze is trained to move from its initial position to the goal (red).

(b) FetchPush: This manipulation task is implemented with a 7-DoF Fetch robotic arm, which must push an object to a target location (red).

(c) HandRotate: This dexterous manipulation task requires a Shadow Dexterous Hand to rotate a block in-hand to a target orientation.

(d) AntReach: This task trains a quadruped ant to reach a goal randomly positioned along the perimeter of a half-circle with a radius of 5 m.

(e) Walker: This locomotion task requires training a bipedal walker policy to achieve the highest possible walking speed while maintaining balance.

(f) CarRacing: This image-based racing game task requires driving a car to navigate a track as quickly as possible.

Learning Efficiency

We report success rates (Maze, FetchPush, HandRotate, AntReach) and returns (Walker, CarRacing), evaluated over five random seeds. Our method, DRAIL, learns more stably and quickly, and achieves higher or competitive performance compared to the best-performing baseline on all tasks.

learning efficiency

Generalization Experiments in FetchPush

We present the performance of our proposed DRAIL and the baselines on the FetchPush task under varying levels of noise in the initial states and goal locations. The evaluation spans three random seeds, and the training curves illustrate the success-rate dynamics.

generalization experiment

Data Efficiency

We experiment with learning from varying amounts of expert data in Walker and FetchPush. The results show that our proposed method, DRAIL, is more data-efficient than other methods, i.e., it can learn from less expert data.

data efficiency

Reward Function Visualization

We present visualizations of the reward values learned by GAIL's discriminative classifier and by our DRAIL's diffusion discriminative classifier. The target expert demonstration to imitate, a discontinuous sine function, is depicted in (a). The reward distributions of GAIL and our DRAIL are illustrated in (b) and (c), respectively.

reward visualization
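A rough sketch of how such a reward map could be produced is shown below: sample (x, y) pairs from a discontinuous sine curve as the "expert", then evaluate a trained discriminator on a dense 2D grid and plot its output. The `discriminator` callable, the particular jump discontinuity, and the plotting ranges are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import matplotlib.pyplot as plt


# Hypothetical expert: a sine curve with a jump discontinuity at x = 0.
def expert_y(x):
    return np.sin(2.0 * np.pi * x) + np.where(x > 0.0, 1.0, -1.0)


xs = np.linspace(-1.0, 1.0, 200)
expert_data = np.stack([xs, expert_y(xs)], axis=1)  # treat (x, y) as a state-action pair


def plot_reward_map(discriminator, grid_size=200):
    """Evaluate discriminator(pairs) -> values in [0, 1] on a dense grid and plot it.

    `discriminator` stands in for GAIL's classifier or DRAIL's diffusion
    discriminative classifier after training on expert_data vs. agent data.
    """
    gx, gy = np.meshgrid(np.linspace(-1.0, 1.0, grid_size),
                         np.linspace(-3.0, 3.0, grid_size))
    pairs = np.stack([gx.ravel(), gy.ravel()], axis=1)
    rewards = discriminator(pairs).reshape(grid_size, grid_size)
    plt.pcolormesh(gx, gy, rewards, shading="auto")
    plt.plot(expert_data[:, 0], expert_data[:, 1], "w.", markersize=1)  # overlay expert curve
    plt.xlabel("x")
    plt.ylabel("y")
    plt.colorbar(label="reward")
    plt.show()
```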

BibTeX

@article{lai2024diffusion,
  title     = {Diffusion-Reward Adversarial Imitation Learning},
  author    = {Chun-Mao Lai and Hsiang-Chun Wang and Ping-Chun Hsieh and Yu-Chiang Frank Wang and Min-Hung Chen and Shao-Hua Sun},
  journal   = {arXiv preprint arXiv:2405.16194},
  year      = {2024},
}