For the simulated Sawyer robotic environments used in the paper, see this codebase.
Dependencies
This code is based on the rllab code repository and can be installed in the same way (see below). Note that this codebase is not necessarily backwards compatible with rllab.
The GMPS code uses the TensorFlow version of rllab, so be sure to install TensorFlow v1.0+.
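After installing, a quick way to confirm the TensorFlow requirement is met (a minimal sketch; it assumes a standard TensorFlow build whose version string starts with the major version number):

```python
# Minimal sanity check that the installed TensorFlow satisfies the
# v1.0+ requirement; assumes a standard build with a numeric version string.
import tensorflow as tf

major = int(tf.__version__.split(".")[0])
assert major >= 1, "GMPS requires TensorFlow v1.0 or later"
```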
Usage
Sample training and testing scripts can be found in launchers/.
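The individual launchers are not documented here, but rllab-based launchers typically follow the `run_experiment_lite` pattern. The skeleton below is a hypothetical illustration of that pattern, not one of the actual scripts in launchers/; `run_task` and all argument values are placeholders:

```python
# Hypothetical launcher skeleton following rllab's run_experiment_lite
# pattern; the actual GMPS launchers in launchers/ may differ.
from rllab.misc.instrument import run_experiment_lite

def run_task(*_):
    # Build the environment, policy, and algorithm here, then train.
    # (See the rllab TRPO example further below for a concrete construction.)
    pass

run_experiment_lite(
    run_task,
    n_parallel=1,              # number of parallel sampling workers
    seed=1,
    exp_prefix="gmps_example", # placeholder experiment name
    mode="local",              # run on this machine
)
```

Passing `mode="ec2"` instead of `"local"` dispatches the run to an EC2 cluster, as described in the rllab section below.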
Contact
To ask questions or report problems, please open an issue on the issue tracker.
rllab
rllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of the following algorithms:

- REINFORCE
- Truncated Natural Policy Gradient (TNPG)
- Reward-Weighted Regression (RWR)
- Relative Entropy Policy Search (REPS)
- Trust Region Policy Optimization (TRPO)
- Cross Entropy Method (CEM)
- Covariance Matrix Adaption Evolution Strategy (CMA-ES)
- Deep Deterministic Policy Gradient (DDPG)
rllab is fully compatible with OpenAI Gym. See here for instructions and examples.
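For example, a Gym environment can be wrapped for use in rllab via the GymEnv wrapper (a minimal sketch; "CartPole-v0" is just an example environment id, and video/log recording is disabled for simplicity):

```python
# Minimal sketch of driving an OpenAI Gym environment through rllab's
# GymEnv wrapper; "CartPole-v0" is only an example environment id.
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize

env = normalize(GymEnv("CartPole-v0", record_video=False, record_log=False))
obs = env.reset()
# Step once with a random action; rllab's Step unpacks like a Gym tuple.
next_obs, reward, done, info = env.step(env.action_space.sample())
```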
rllab only officially supports Python 3.5+. For an older snapshot of rllab on Python 2, please use the py2 branch.
rllab comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.
The main modules use Theano as the underlying framework, and we have support for TensorFlow under sandbox/rocky/tf.
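A sketch of the TensorFlow path, adapted from rllab's bundled examples (module paths follow the sandbox/rocky/tf layout and may vary between versions):

```python
# Sketch of training TRPO on cartpole through the TensorFlow sandbox
# (sandbox/rocky/tf), adapted from rllab's example scripts.
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from sandbox.rocky.tf.algos.trpo import TRPO
from sandbox.rocky.tf.envs.base import TfEnv
from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy

env = TfEnv(normalize(CartpoleEnv()))
policy = GaussianMLPPolicy(name="policy", env_spec=env.spec,
                           hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=4000,      # timesteps collected per iteration
    max_path_length=100,  # cap on episode length
    n_itr=40,             # number of TRPO iterations
    discount=0.99,
)
algo.train()
```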
rllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). The library continues to be jointly developed by people at OpenAI and UC Berkeley.