Codebase for the Planning-Transformer advanced project.
- Make sure python3 is installed
- conda env create -f conda_env-cuda.yml
- Install MuJoCo by following these steps:
  - Download the MuJoCo version 2.1 binaries for Linux or OSX
  - Extract the downloaded mujoco210 directory into ~/.mujoco/mujoco210 (extract all and rename the parent folder to .mujoco)
  - Add MuJoCo to your environment variables by running the following in the terminal (a sanity check for this setup is sketched after this list):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco210/bin
- If on WSL/Ubuntu run
sudo apt-get update && sudo apt install libosmesa6-dev libgl1-mesa-glx libglfw3 patchelf libsm6 qt6-base-dev
- Make sure gcc is installed. Otherwise, install with
sudo apt update && sudo apt install build-essential -y
- If you get any errors about Qt, try
pip uninstall PyQt5 opencv-python && pip install opencv-python==4.9.0.80
- conda activate planning-transformer
- cd to the Planning-Transformer directory, then run the following in the terminal:
export PYTHONPATH="$(pwd):$PYTHONPATH"
in the terminal - (optional) install CALVIN
6. Download
calvin.gz
(dataset) following the instructions at https://github.com/clvrai/skimo and place it in theenvs
directory. 7. Convertinstall.sh
to a unix file withsudo apt-get install dos2unix && dos2unix install.sh
8.cd envs/calvin && bash install.sh
- (optional) Install block_pushing by following the instructions at https://github.com/real-stanford/diffusion_policy?tab=readme-ov-file#%EF%B8%8F-reproducing-simulation-benchmark-results
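Once the environment is created and the MuJoCo variables are set, a quick sanity check can save debugging time later. This is a minimal sketch, assuming the conda environment provides mujoco-py and d4rl (which the Kitchen and AntMaze steps below rely on); the first mujoco_py import compiles its bindings, so it can take a minute.

```bash
# Run inside the activated planning-transformer environment.
conda activate planning-transformer
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco210/bin

# The first import of mujoco_py builds its Cython bindings against ~/.mujoco/mujoco210.
python3 -c "import mujoco_py; print('mujoco_py OK')"

# Check that a D4RL AntMaze environment can be constructed.
python3 -c "import gym, d4rl; env = gym.make('antmaze-umaze-v2'); env.reset(); print('antmaze-umaze-v2 OK')"
```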
If using CUDA, run the following:
- pip3 uninstall torch torchvision -y && pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
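To confirm that the CUDA build of PyTorch was installed correctly, a quick check (inside the activated environment):

```bash
# Should print a version ending in +cu118 and "True" for CUDA availability.
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```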
If you get an error about botocore, do:
- pip uninstall botocore boto3 s3fs aiobotocore
- pip install boto3 botocore s3fs aiobotocore
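As a quick smoke test that the reinstalled packages now resolve to mutually compatible versions (a sketch; the exact error you hit may differ):

```bash
# If these imports succeed without a version-mismatch error, the botocore stack is consistent.
python3 -c "import botocore, boto3, s3fs; print(botocore.__version__, boto3.__version__, s3fs.__version__)"
```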
Unfortunately, GPU rendering doesn't work on WSL2, but it should work on Linux.
To learn how to set it up properly, see https://pytorch.org/rl/main/reference/generated/knowledge_base/MUJOCO_INSTALLATION.html for more help.
There is a bug that prevents mujoco-py from building the GPU rendering environment by default, which you can fix by following openai/mujoco-py#493.
Add these to your bash startup file (e.g. ~/.bashrc):
export MUJOCO_PY_FORCE_CPU=1
export LIBGL_ALWAYS_SOFTWARE=1
I found this to be much faster than GPU rendering for some reason.
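A minimal sketch of making those two variables persistent, assuming bash and ~/.bashrc as your startup file:

```bash
# Append the rendering overrides to ~/.bashrc so every new shell picks them up.
echo 'export MUJOCO_PY_FORCE_CPU=1' >> ~/.bashrc
echo 'export LIBGL_ALWAYS_SOFTWARE=1' >> ~/.bashrc
source ~/.bashrc

# Importing mujoco_py with the flag set uses (and if needed builds) the CPU backend.
python3 -c "import mujoco_py; print('mujoco_py CPU build OK')"
```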
The Kitchen env needs to be manually edited to make it render (see the sketch below for locating the files):
- In "site-packages/d4rl/kitchen/kitchen_envs.py", comment out the render function (lines 89-91) so that it actually renders video.
- Then in "site-packages/d4rl/kitchen/adept_envs/franka/kitchen_multitask_v0.py", comment out line 114 so it doesn't double render.
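If you are not sure where site-packages lives for your conda environment, you can ask Python where d4rl is installed; the two files above sit under that directory (a small sketch, run inside the activated environment):

```bash
# Prints the d4rl install directory, e.g. .../site-packages/d4rl
python3 -c "import d4rl, os; print(os.path.dirname(d4rl.__file__))"
```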
- To test the Planning-Transformer on the AntMaze environment, run:
- (if using cpu)
python3 models/PDT.py --config configs/umaze_v2.yaml
- (if using cuda)
python3 models/PDT.py --config configs/umaze_v2_cuda.yaml
- wandb will ask you to create a W&B account or use an existing one; follow its instructions to link the run to your account.
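If you would rather not be prompted during training, you can authenticate once up front or disable syncing entirely; this is a sketch using standard wandb options, assuming the training script does not override them:

```bash
# Log in once so runs don't prompt interactively (paste your API key when asked).
wandb login

# Or keep a quick test fully local; offline runs can be uploaded later with `wandb sync`.
WANDB_MODE=offline python3 models/PDT.py --config configs/umaze_v2.yaml
```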