Code as Policies:
Language Model Programs for Embodied Control

Abstract

Large language models (LLMs) trained on code completion have been shown to be capable of synthesizing simple Python programs from docstrings [1]. We find that these code-writing LLMs can be re-purposed to write robot policy code, given natural language commands. Specifically, policy code can express functions or feedback loops that process perception outputs (e.g., from object detectors [2], [3]) and parameterize control primitive APIs. Given several example language commands (formatted as comments) paired with corresponding policy code as few-shot prompts, LLMs can take in new commands and autonomously re-compose API calls to generate new policy code. By chaining classic logic structures and referencing third-party libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way can write robot policies that (i) exhibit spatial-geometric reasoning, (ii) generalize to new instructions, and (iii) prescribe precise values (e.g., velocities) to ambiguous descriptions (“faster”) depending on context (i.e., behavioral commonsense). This paper presents code as policies: a robot-centric formalization of language model generated programs (LMPs) that can represent reactive policies (e.g., impedance controllers) as well as waypoint-based policies (vision-based pick and place, trajectory-based control), demonstrated across multiple real robot platforms. Central to our approach is prompting hierarchical code generation (recursively defining undefined functions), which can write more complex code and also improves the state of the art, solving 39.8% of problems on the HumanEval [1] benchmark. Code and videos are available at https://code-as-policies.github.io
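To make the few-shot prompting format concrete, here is a minimal illustrative sketch, assuming hypothetical perception and control primitives (get_obj_pos, put_first_on_second) rather than the paper's exact API: example commands appear as comments paired with policy code, a new command is appended, and the LLM's completion is executed as the policy.

```python
import numpy as np

# Hypothetical perception / control primitives (stand-ins, not the paper's exact API).
def get_obj_pos(name):
    """Perception: return the (x, y) position of a detected object."""
    return np.array([0.0, 0.0])

def put_first_on_second(obj, target):
    """Control primitive: pick up `obj` and place it on `target` (a name or a position)."""
    print(f"pick {obj} -> place on {target}")

# Few-shot prompt: example commands as comments, each followed by policy code,
# with the new command appended at the end for the LLM to complete.
prompt = '''
# put the red block to the left of the blue block.
target_pos = get_obj_pos("blue block") + np.array([-0.1, 0.0])
put_first_on_second("red block", target_pos)

# stack the green block on the yellow block.
put_first_on_second("green block", "yellow block")

# put the blocks in a diagonal line.
'''

# An LLM completes the prompt with new policy code, e.g. (illustrative completion):
completion = '''
block_names = ["red block", "green block", "blue block", "yellow block"]
line_pts = np.linspace([-0.2, -0.2], [0.2, 0.2], num=len(block_names))
for name, pos in zip(block_names, line_pts):
    put_first_on_second(name, pos)
'''

exec(completion)  # executing the generated code drives the robot through the primitives
```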

Experiment Videos and Generated Code

The videos have sound, which showcases the voice- and speech-based robot interface.

Long pauses between commands and responses are mostly caused by OpenAI API query times and rate limiting.

We also provide links to the prompts used in each domain. Different prompts specialize the LLM for different functions, and they are composed together by the generated code via function calls that take natural-language arguments, as sketched below.
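As a rough sketch of this composition (the names below are illustrative, not the released code): the high-level UI LMP can generate code that calls a lower-level LMP such as parse_obj_name with a natural-language string, and that lower-level LMP is simply another prompt-specialized LLM query.

```python
# Minimal sketch of composing language model programs (LMPs). Each LMP pairs a
# specialized prompt with an LLM query; code generated by one LMP can call other
# LMPs with natural-language string arguments. `query_llm` is a placeholder for
# an actual completion API and the prompt strings are illustrative.

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a code-completion LLM and return the generated code."""
    raise NotImplementedError

class LMP:
    def __init__(self, prompt: str, context: dict):
        self.prompt = prompt    # few-shot examples that specialize this LMP
        self.context = context  # APIs and other LMPs visible to the generated code

    def __call__(self, command: str):
        code = query_llm(self.prompt + f"\n# {command}\n")
        scope = dict(self.context)
        exec(code, scope)       # run the generated code against the provided APIs
        return scope.get("ret_val")

# Lower-level LMPs, each specialized by its own prompt (mirroring the links above).
parse_obj_name = LMP(prompt="# examples for parsing object names ...", context={})
parse_position = LMP(prompt="# examples for parsing positions ...", context={})

# The high-level UI LMP exposes the lower-level LMPs to its generated code, so that
# code can call them with natural-language arguments, e.g.:
#   objs = parse_obj_name("the blocks to the left of the bowl")
#   target = parse_position("a point 10cm above the bowl")
# A separate "Function Generation" prompt can likewise be used to recursively define
# functions the generated code references but that do not yet exist.
high_level_ui = LMP(
    prompt="# high-level UI examples ...",
    context={"parse_obj_name": parse_obj_name, "parse_position": parse_position},
)
```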

Tabletop Manipulation: Blocks

Prompts (same for all tabletop manipulation tasks):
High-Level UI | Parse Object Names | Parse Positions | Parse Questions | Function Generation

Tabletop Manipulation: Blocks and Bowls

Prompts (same for all tabletop manipulation tasks):
High-Level UI | Parse Object Names | Parse Positions | Parse Questions | Function Generation

Tabletop Manipulation: Fruits, Bottles, and Plates

Prompts (same for all tabletop manipulation tasks):
High-Level UI | Parse Object Names | Parse Positions | Parse Questions | Function Generation

Whiteboard Drawing

Mobile Robot: Navigation

Prompts (same for all mobile robot tasks):
High-Level UI | Parse Object Names | Parse Positions | Transform Points | Function Generation

Mobile Robot: Manipulation

Prompts (same for all mobile robot tasks):
High-Level UI | Parse Object Names | Parse Positions | Transform Points | Function Generation

Citation

[arxiv version]

Acknowledgements

Special thanks to Vikas Sindhwani and Vincent Vanhoucke for helpful feedback on writing, and to Chad Boodoo for operations and hardware support.