Add digit locomotion examples #1892
base: main
Conversation
This commit introduces comprehensive configuration files for the Agility Digit robot in locomotion environments, including:

- Robot articulation configuration in `agility.py`
- Rough terrain locomotion environment configuration
- Locomotion and manipulation environment configurations
- Reward and observation definitions
- PPO agent configurations for different scenarios (rough terrain, flat ground, locomotion-manipulation)

The new configurations support various locomotion tasks with the Digit robot, including velocity tracking, terrain navigation, and manipulation scenarios.
This commit refines the Digit robot configuration for flat ground locomotion:

- Modified registration to use a new `DigitFlatPPORunnerCfg`
- Updated the flat environment configuration to remove the height scanner and terrain curriculum
- Replaced the custom feet air time reward with a standard biped air time reward
- Adjusted reward weights and policy network architecture for the flat ground scenario
source/isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/__init__.py
...tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/agents/rsl_rl_ppo_cfg.py
…velocity/config/digit/__init__.py Signed-off-by: lgulich <[email protected]>
…velocity/config/digit/agents/rsl_rl_ppo_cfg.py Signed-off-by: lgulich <[email protected]>
...ab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/loco_manip_env_cfg.py
...isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/flat_env_cfg.py
...saaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/rough_env_cfg.py
The review thread quotes this diff hunk (gutter residue stripped):

```python
return torch.sum(torch.abs(offset), dim=1) * (torch.norm(command[:, :2], dim=1) < 0.06)

def no_jumps(env, sensor_cfg: SceneEntityCfg) -> torch.Tensor:
```
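The quoted line gates a joint-offset penalty by the commanded planar velocity: the boolean comparison zeroes the penalty whenever the command magnitude is at or above 0.06, so the robot is only penalized for deviating from its default pose while asked to stand still. A minimal self-contained sketch of that gating, with assumed tensor shapes (the function name and shapes are illustrative, not the PR's implementation):

```python
# Hypothetical minimal sketch of the stand-still gating pattern quoted above.
# Shapes and the function name are assumptions for illustration only.
import torch

def stand_still_penalty(offset: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
    # offset: (num_envs, num_joints) joint deviations from the default pose
    # command: (num_envs, >=2) velocity command; columns 0-1 are vx, vy
    is_standing = torch.norm(command[:, :2], dim=1) < 0.06  # bool mask per env
    # Multiplying by the bool mask zeroes the penalty for moving envs.
    return torch.sum(torch.abs(offset), dim=1) * is_standing

offset = torch.tensor([[0.1, -0.2], [0.3, 0.1]])
command = torch.tensor([[0.0, 0.01], [0.5, 0.0]])
penalty = stand_still_penalty(offset, command)
print(penalty)  # env 0 is standing (penalized), env 1 is moving (zeroed)
```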
Don't we already have an MDP term called `undesired_contacts` that does the same operation?
Isn't this the opposite? The `no_jumps` reward should encourage that at least one foot is on the floor. The `undesired_contacts` reward would encourage that no foot is on the floor, no?
Ah okay. Then I'd say we should make another term in `mdp.rewards` called `desired_contacts`? That might be useful for some other tasks in the future too.
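The `desired_contacts` term suggested here could be sketched as follows. This is a hedged illustration of the idea discussed in the thread, not the merged implementation; the signature, threshold, and tensor shapes are assumptions:

```python
# Hypothetical sketch of a `desired_contacts`-style reward: give reward 1.0
# whenever at least one of the selected bodies (e.g. the feet) is in contact.
# Signature, shapes, and threshold are assumptions, not the PR's code.
import torch

def desired_contacts(contact_forces: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    # contact_forces: (num_envs, num_bodies) net contact-force magnitudes
    in_contact = contact_forces > threshold  # (num_envs, num_bodies) bool
    any_contact = in_contact.any(dim=1)      # True if at least one body is down
    return any_contact.float()               # 1.0 if grounded, else 0.0

forces = torch.tensor([[50.0, 0.0], [0.0, 0.0]])
print(desired_contacts(forces))  # env 0 has a foot down, env 1 is airborne
```

This is the mirror image of an `undesired_contacts`-style penalty: instead of counting contacts on bodies that should stay clear of the ground, it rewards the presence of at least one contact on bodies that should stay grounded.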
source/isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/mdp/rewards.py
source/isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/mdp/rewards.py
...saaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/rough_env_cfg.py
...saaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/rough_env_cfg.py
Added a few comments. It would also be good to supplement the docs with images from each task :)
Force-pushed from 3159e03 to d6554c5.
Thanks for the review. Addressed your comments @Mayankm96, but I still have to add images to the docs.
...isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/flat_env_cfg.py
...ab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/loco_manip_env_cfg.py
...saaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/rough_env_cfg.py
...saaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/rough_env_cfg.py
...saaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config/digit/rough_env_cfg.py
source/isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/mdp/rewards.py
source/isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/mdp/rewards.py
Description
Add an example to train a locomotion and loco-manipulation controller for Digit. This also serves as an example of how to train a robot with closed loops.
Type of change
Screenshots
Checklist
- I have run the `pre-commit` checks with `./isaaclab.sh --format`
- I have updated the `config/extension.toml` file
- I have added my name to `CONTRIBUTORS.md` or my name already exists there