Policy Adaptation from Foundation Model Feedback

Yuying Ge1, Annabella Macaluso2, Li Erran Li3, Ping Luo1, Xiaolong Wang2
1University of Hong Kong, 2University of California, San Diego, 3AWS AI, Amazon




We propose Policy Adaptation from Foundation model Feedback (PAFF) to adapt a language-conditioned policy
across object compositions, tasks, and environments, including from simulation to the real world (shown below).

Abstract

Recent progress on vision-language foundation models has brought significant advances to building general-purpose robots. By using the pre-trained models to encode the scene and instructions as inputs for decision making, the instruction-conditioned policy can generalize across different objects and tasks. While this is encouraging, the policy still fails in most cases given an unseen task or environment. In this work, we propose Policy Adaptation from Foundation model Feedback (PAFF). When deploying the trained policy to a new task or a new environment, we first let the policy play with randomly generated instructions and record the demonstrations. While the executions may not match the instructions, we can use the pre-trained foundation models to provide feedback by relabeling the demonstrations. This automatically provides new pairs of demonstration-instruction data for policy fine-tuning. We evaluate our method on a broad range of experiments with a focus on generalization to unseen objects, unseen tasks, unseen environments, and sim-to-real transfer. We show that PAFF improves over baselines by a large margin in all cases.
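To make the play-record-relabel-fine-tune loop concrete, here is a minimal Python sketch of the adaptation procedure described above. Every interface in it (the policy, foundation model, and environment objects, and the rollout parameters) is a hypothetical placeholder for illustration, not the implementation released with the paper.

```python
import random

# Minimal sketch of the PAFF adaptation loop. The policy, foundation model,
# and environment interfaces used here are hypothetical placeholders.
def adapt_with_paff(policy, foundation_model, candidate_instructions,
                    env, num_rollouts=100, horizon=50):
    relabeled_data = []

    for _ in range(num_rollouts):
        # 1) Play: execute the trained policy on a randomly sampled instruction.
        instruction = random.choice(candidate_instructions)
        obs = env.reset()
        trajectory = []
        for _ in range(horizon):
            action = policy.act(obs, instruction)
            trajectory.append((obs, action))
            obs, done = env.step(action)  # simplified environment interface
            if done:
                break

        # 2) Relabel: the foundation model retrieves the instruction that best
        #    matches what the robot actually did, which may differ from the
        #    instruction it was asked to follow.
        achieved = foundation_model.retrieve(
            observations=[o for o, _ in trajectory],
            candidates=candidate_instructions,
        )
        relabeled_data.append((trajectory, achieved))

    # 3) Fine-tune the policy on the automatically collected, accurately
    #    paired demonstration-instruction data.
    policy.finetune(relabeled_data)
    return policy
```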

Video

Method



The pipeline of Policy Adaptation from Foundation model Feedback (PAFF). When we adapt a trained policy to a new task, we first let the robot play: the policy continuously predicts and performs actions given a series of randomly generated language instructions. We record these demonstrations, including the visual observations and the model's actions. We then let the model relabel: the vision-language foundation model relabels the demonstrations by retrieving the language instructions that match the recorded visual observations. Finally, we fine-tune the policy on the accurately paired observations, instructions, and corresponding actions, all of which are collected automatically.
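The relabeling step amounts to retrieving, from a pool of candidate instructions, the one a vision-language model scores highest against the recorded frames. The sketch below illustrates this with the off-the-shelf CLIP model from Hugging Face transformers; the actual foundation model, its fine-tuning, and the scoring details used in PAFF may differ, so treat this as an illustration of the retrieval idea rather than the paper's exact relabeler.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf CLIP used purely to illustrate instruction retrieval.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def relabel(frames: list, candidate_instructions: list) -> str:
    """Return the candidate instruction that best describes the recorded frames."""
    inputs = processor(
        text=candidate_instructions,
        images=frames,               # list of PIL images from the demonstration
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (num_frames, num_instructions).
    # Average over frames so the whole demonstration votes on the label.
    scores = outputs.logits_per_image.mean(dim=0)
    return candidate_instructions[scores.argmax().item()]
```

The retrieved instruction replaces the randomly sampled one in the recorded demonstration, yielding the accurately paired data used for fine-tuning.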

Sim-to-real Transfer

We train a policy on simulation data and adapt it to the real world.




Comparison with Baseline





Compositional Generalization

We train a policy to pack objects of different shapes into the brown box and to put blocks of different colors into bowls of different colors, then adapt it to put objects of different shapes into bowls of different colors.




Recorded Demonstration



Comparison with Baseline





Out-of-distribution (Unseen Objects)

We train a policy to pack certain objects into the brown box and adapt it to pack unseen objects.




Recorded Demonstration



Comparison with Baseline





Out-of-distribution (Unseen Environment)

We train a policy on seen environments and adapt it to a new environment with different textures and differently positioned static elements, such as the sliding door, the drawer, and the light button.




Recorded Demonstration



Comparison with Baseline