Feat/robot fixes #1075

Open
wants to merge 41 commits into og-develop from feat/robot-fixes
Changes from all commits (41 commits)
adf7999
add ability to specify sensors to include / exclude, remove enforcing…
cremebrule Dec 20, 2024
08d811d
update configs
cremebrule Dec 20, 2024
341a5a2
add reverse_preprocess_command to all curobo joint pos execution
ChengshuLi Dec 20, 2024
c1f5de6
fix minor bug for is_grasping in manipulation_robot
ChengshuLi Dec 20, 2024
7e495bf
Merge branch 'og-develop' into feat/robot-fixes
cremebrule Jan 8, 2025
bc46733
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 8, 2025
f18f33e
Merge branch 'og-develop' into feat/robot-fixes
cremebrule Jan 16, 2025
cfddd5b
add updated compute function for joint controller
cremebrule Jan 18, 2025
73d7f79
fix data wrapper bugs
cremebrule Jan 18, 2025
33f4574
fix deprecated API bug
cremebrule Jan 18, 2025
97903ac
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 18, 2025
d49b5f9
add ability to process obs in data wrapper env, fix bug where objects…
cremebrule Jan 21, 2025
efe7808
update data wrapper for recording videos
cremebrule Jan 22, 2025
f7aa37a
add ability to wrap additional envs for data wrapper; standardize wra…
cremebrule Jan 23, 2025
a1ae1b9
programmatically infer grasping points
cremebrule Jan 24, 2025
91d4233
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 24, 2025
2acd530
prune outdated code
cremebrule Jan 27, 2025
86aa45b
use_impedances
cremebrule Jan 27, 2025
b1dea6e
maintain original default arm link convention during curobo config ge…
cremebrule Jan 27, 2025
d7810ff
Merge remote-tracking branch 'origin/feat/robot-fixes' into feat/robo…
cremebrule Jan 27, 2025
4f22e22
Merge branch 'og-develop' into feat/robot-fixes
cremebrule Jan 27, 2025
af67597
fix grasping type inference for MultiFingerGripperController
cremebrule Jan 28, 2025
08fc270
retune test_controllers.py
cremebrule Jan 28, 2025
85d4fb1
retune ground truth camera pose for test robot states
cremebrule Jan 28, 2025
d13f10a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 28, 2025
fb91b98
make sure exclude and include sensor names are mutually exclusive
cremebrule Jan 28, 2025
d05a1b3
update test_controllers to improve determinacy
cremebrule Jan 29, 2025
99f9174
make waypoint tolerance more strict, always run every waypoint as an …
cremebrule Jan 29, 2025
6301552
fix leaking tensors in control filter
cremebrule Jan 29, 2025
a679de3
update grasping_planning_utils.py with new robot finger properties
cremebrule Jan 29, 2025
bbff9d9
Merge remote-tracking branch 'origin/feat/robot-fixes' into feat/robo…
cremebrule Jan 29, 2025
62c1a64
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 29, 2025
40e9a27
update docs and prune outdated robot properties
cremebrule Jan 29, 2025
595ab3f
make AG default grasping point proportions a macro, prune outdated, h…
cremebrule Jan 29, 2025
fc85ae6
Merge remote-tracking branch 'origin/feat/robot-fixes' into feat/robo…
cremebrule Jan 29, 2025
455298c
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 29, 2025
29428bc
primitive sticky grasping top down move hand use stop_on_ag, instead …
cremebrule Jan 30, 2025
b017515
Merge branch 'feat/robot-fixes' of github.com:StanfordVL/OmniGibson i…
cremebrule Jan 30, 2025
b3bc5c6
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 30, 2025
7f57a3e
fix outdated franka eef name, tune ik / osc keyboard control gains
cremebrule Jan 30, 2025
97a1353
Merge remote-tracking branch 'origin/feat/robot-fixes' into feat/robo…
cremebrule Jan 30, 2025
22 changes: 0 additions & 22 deletions docs/tutorials/custom_robot_import.md
@@ -494,28 +494,6 @@ Now that we have the USD file for the robot, let's write our own robot class. Fo
def wheel_axle_length(self):
return 0.330

@property
def finger_lengths(self):
return {self.default_arm: 0.04}

@property
def assisted_grasp_start_points(self):
return {
self.default_arm: [
GraspingPoint(link_name="link_gripper_finger_right", position=th.tensor([0.013, 0.0, 0.01])),
GraspingPoint(link_name="link_gripper_finger_right", position=th.tensor([-0.01, 0.0, 0.009])),
]
}

@property
def assisted_grasp_end_points(self):
return {
self.default_arm: [
GraspingPoint(link_name="link_gripper_finger_left", position=th.tensor([0.013, 0.0, 0.01])),
GraspingPoint(link_name="link_gripper_finger_left", position=th.tensor([-0.01, 0.0, 0.009])),
]
}

@property
def disabled_collision_pairs(self):
return [
2 changes: 1 addition & 1 deletion docs/tutorials/demo_collection.md
@@ -101,7 +101,7 @@ playback_env = DataPlaybackWrapper.create_from_hdf5(
)

# Playback the entire dataset and record observations
playback_env.playback_dataset(record=True)
playback_env.playback_dataset(record_data=True)

# Save the recorded playback data
playback_env.save_data()
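A minimal sketch of the renamed call, assuming `playback_env` was created via `DataPlaybackWrapper.create_from_hdf5(...)` as in the tutorial snippet above; the only change in this PR is the keyword rename from `record` to `record_data`.

```python
# Sketch only: assumes `playback_env` is a DataPlaybackWrapper built from an HDF5 demo
# file as shown in the tutorial above. The keyword was renamed from `record` to `record_data`.
playback_env.playback_dataset(record_data=True)  # replay the dataset and record observations
playback_env.save_data()                         # write the recorded playback data to disk
```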
43 changes: 29 additions & 14 deletions omnigibson/action_primitives/starter_semantic_action_primitives.py
@@ -102,7 +102,7 @@
m.LOW_PRECISION_DIST_THRESHOLD = 0.1
m.LOW_PRECISION_ANGLE_THRESHOLD = 0.2

m.JOINT_POS_DIFF_THRESHOLD = 0.01
m.JOINT_POS_DIFF_THRESHOLD = 0.005
m.LOW_PRECISION_JOINT_POS_DIFF_THRESHOLD = 0.05
m.JOINT_CONTROL_MIN_ACTION = 0.0
m.MAX_ALLOWED_JOINT_ERROR_FOR_LINEAR_MOTION = math.radians(45)
@@ -506,7 +506,10 @@ def _sample_grasp_pose(self, obj):
# Identity quaternion for top-down grasping (x-forward, y-right, z-down)
approach_dir = T.quat2mat(grasp_quat) @ th.tensor([0.0, 0.0, -1.0])

pregrasp_offset = self.robot.finger_lengths[self.arm] / 2.0 + m.GRASP_APPROACH_DISTANCE
avg_finger_offset = th.mean(
th.tensor([length for length in self.robot.eef_to_fingertip_lengths[self.arm].values()])
)
pregrasp_offset = avg_finger_offset + m.GRASP_APPROACH_DISTANCE

pregrasp_pos = grasp_pos - approach_dir * pregrasp_offset
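A small illustrative sketch of the new offset computation: the old scalar `finger_lengths[arm]` is replaced by `eef_to_fingertip_lengths[arm]`, a per-finger mapping that is averaged. Finger names and lengths below are hypothetical, and 0.1 m stands in for `m.GRASP_APPROACH_DISTANCE`.

```python
import torch as th

# Illustrative values only -- the point is the shape of the new property: a dict of
# per-finger distances from the EEF frame to each fingertip, averaged into one offset.
eef_to_fingertip_lengths = {"finger_left": 0.04, "finger_right": 0.05}
avg_finger_offset = th.mean(th.tensor(list(eef_to_fingertip_lengths.values())))
pregrasp_offset = avg_finger_offset + 0.1  # tensor(0.1450): distance to back off along the approach direction
```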

@@ -581,7 +584,7 @@ def _grasp(self, obj):
# By default, it's NOT the z-axis of the world frame unless `project_pose_to_goal_frame=False` is set in curobo.
# For sticky grasping, we also need to ignore the object during motion planning because the fingers are already closed.
yield from self._move_hand(
grasp_pose, motion_constraint=[1, 1, 1, 1, 1, 0], stop_on_contact=True, ignore_objects=[obj]
grasp_pose, motion_constraint=[1, 1, 1, 1, 1, 0], stop_on_ag=True, ignore_objects=[obj]
)
elif self.robot.grasping_mode == "assisted":
indented_print("Assisted grasping: approach")
@@ -853,6 +856,7 @@ def _move_hand(
self,
target_pose,
stop_on_contact=False,
stop_on_ag=False,
motion_constraint=None,
low_precision=False,
lock_auxiliary_arm=False,
@@ -864,6 +868,7 @@
Args:
target_pose (Tuple[th.tensor, th.tensor]): target pose to reach for the default end-effector in the world frame
stop_on_contact (bool): Whether to stop executing motion plan if contact is detected
stop_on_ag (bool): Whether to stop executing motion plan if assisted grasping is activated
motion_constraint (MotionConstraint): Motion constraint for the motion
low_precision (bool): Whether to use low precision for the motion
lock_auxiliary_arm (bool): Whether to lock the other arm in place
@@ -906,7 +911,9 @@
)

indented_print(f"Plan has {len(q_traj)} steps")
yield from self._execute_motion_plan(q_traj, stop_on_contact=stop_on_contact, low_precision=low_precision)
yield from self._execute_motion_plan(
q_traj, stop_on_contact=stop_on_contact, stop_on_ag=stop_on_ag, low_precision=low_precision
)

def _plan_joint_motion(
self,
@@ -972,7 +979,13 @@ def _plan_joint_motion(
return q_traj

def _execute_motion_plan(
self, q_traj, stop_on_contact=False, ignore_failure=False, low_precision=False, ignore_physics=False
self,
q_traj,
stop_on_contact=False,
stop_on_ag=False,
ignore_failure=False,
low_precision=False,
ignore_physics=False,
):
for i, joint_pos in enumerate(q_traj):
# indented_print(f"Executing motion plan step {i + 1}/{len(q_traj)}")
@@ -992,6 +1005,12 @@ def _execute_motion_plan(
articulation_control_idx = th.cat(articulation_control_idx)
for j in range(m.MAX_STEPS_FOR_JOINT_MOTION):
# indented_print(f"Step {j + 1}/{m.MAX_STEPS_FOR_JOINT_MOTION}")

# We need to call @q_to_action for every step because as the robot base moves, the same base joint_pos will be
# converted to different actions, since HolonomicBaseJointController accepts an action in the robot local frame.
action = self.robot.q_to_action(joint_pos)
yield self._postprocess_action(action)

current_joint_pos = self.robot.get_joint_positions()
joint_pos_diff = joint_pos - current_joint_pos
base_joint_diff = joint_pos_diff[self.robot.base_control_idx]
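A toy sketch (not OmniGibson API) of the reasoning in the comment above: because `HolonomicBaseJointController` takes commands in the robot's local frame, the same world-frame base target maps to a different command every step as the base yaw changes, which is why `q_to_action` is now called inside the stepping loop.

```python
import torch as th

def world_delta_to_local(delta_xy_world: th.Tensor, base_yaw: th.Tensor) -> th.Tensor:
    # Rotate a world-frame planar offset into the robot's local frame.
    c, s = th.cos(base_yaw), th.sin(base_yaw)
    rot_world_to_local = th.stack([th.stack([c, s]), th.stack([-s, c])])
    return rot_world_to_local @ delta_xy_world

target_delta_world = th.tensor([1.0, 0.0])  # fixed world-frame offset to the target
print(world_delta_to_local(target_delta_world, th.tensor(0.0)))        # tensor([1., 0.])
print(world_delta_to_local(target_delta_world, th.tensor(th.pi / 2)))  # ~tensor([0., -1.])
```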
@@ -1014,14 +1033,10 @@

collision_detected = detect_robot_collision_in_sim(self.robot)
if stop_on_contact and collision_detected:
indented_print(f"Collision detected at step {i + 1}/{len(q_traj)}")
return

# We need to call @q_to_action for every step because as the robot base moves, the same base joint_pos will be
# converted to different actions, since HolonomicBaseJointController accepts an action in the robot local frame.
action = self.robot.q_to_action(joint_pos)

yield self._postprocess_action(action)
if stop_on_ag and self._get_obj_in_hand() is not None:
return

if not ignore_failure:
if not base_target_reached:
@@ -1334,13 +1349,13 @@ def _move_fingers_to_limit(self, limit_type):
target_joint_positions = self._get_joint_position_with_fingers_at_limit(limit_type)
action = self.robot.q_to_action(target_joint_positions)
for _ in range(m.MAX_STEPS_FOR_GRASP_OR_RELEASE):
current_joint_positinos = self.robot.get_joint_positions()
if th.allclose(current_joint_positinos, target_joint_positions, atol=0.005):
yield self._postprocess_action(action)
current_joint_positions = self.robot.get_joint_positions()
if th.allclose(current_joint_positions, target_joint_positions, atol=m.JOINT_POS_DIFF_THRESHOLD):
break
elif limit_type == "lower" and self._get_obj_in_hand() is not None:
# If we are grasping an object, we should stop when object is detected in hand
break
yield self._postprocess_action(action)

def _execute_grasp(self):
"""
2 changes: 2 additions & 0 deletions omnigibson/configs/fetch_behavior.yaml
@@ -40,6 +40,8 @@ scene:
robots:
- type: Fetch
obs_modalities: [scan, rgb, depth]
include_sensor_names: null
exclude_sensor_names: null
scale: 1.0
self_collision: false
action_normalize: true
2 changes: 2 additions & 0 deletions omnigibson/configs/r1_primitives.yaml
@@ -32,6 +32,8 @@ scene:
robots:
- type: R1
obs_modalities: [rgb]
include_sensor_names: null
exclude_sensor_names: null
scale: 1.0
self_collisions: true
action_normalize: false
2 changes: 2 additions & 0 deletions omnigibson/configs/robots/fetch.yaml
@@ -17,6 +17,8 @@ robot:
self_collision: true
grasping_mode: physical
default_arm_pose: vertical
include_sensor_names: null
exclude_sensor_names: null
sensor_config:
VisionSensor:
sensor_kwargs:
2 changes: 2 additions & 0 deletions omnigibson/configs/robots/freight.yaml
@@ -10,6 +10,8 @@ robot:
base_name: null
scale: 1.0
self_collision: false
include_sensor_names: null
exclude_sensor_names: null
sensor_config:
VisionSensor:
sensor_kwargs:
2 changes: 2 additions & 0 deletions omnigibson/configs/robots/husky.yaml
@@ -10,6 +10,8 @@ robot:
base_name: null
scale: 1.0
self_collision: false
include_sensor_names: null
exclude_sensor_names: null
sensor_config:
VisionSensor:
sensor_kwargs:
2 changes: 2 additions & 0 deletions omnigibson/configs/robots/locobot.yaml
@@ -10,6 +10,8 @@ robot:
base_name: null
scale: 1.0
self_collision: false
include_sensor_names: null
exclude_sensor_names: null
sensor_config:
VisionSensor:
sensor_kwargs:
2 changes: 2 additions & 0 deletions omnigibson/configs/robots/turtlebot.yaml
@@ -10,6 +10,8 @@ robot:
base_name: null
scale: 1.0
self_collision: false
include_sensor_names: null
exclude_sensor_names: null
sensor_config:
VisionSensor:
sensor_kwargs:
2 changes: 2 additions & 0 deletions omnigibson/configs/tiago_primitives.yaml
@@ -33,6 +33,8 @@ scene:
robots:
- type: Tiago
obs_modalities: [rgb]
include_sensor_names: null
exclude_sensor_names: null
scale: 1.0
self_collisions: true
action_normalize: false
2 changes: 2 additions & 0 deletions omnigibson/configs/turtlebot_nav.yaml
@@ -30,6 +30,8 @@ scene:
robots:
- type: Turtlebot
obs_modalities: [scan, rgb, depth]
include_sensor_names: null
exclude_sensor_names: null
scale: 1.0
self_collision: false
action_normalize: true
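The same two keys are added to every robot config in this PR. Below is a sketch of the pattern as a Python config dict mirroring the YAML entries above; the example sensor names are hypothetical, and per commit fb91b98 the two filters are mutually exclusive, so at most one of them should be set.

```python
# Sketch of the new sensor-name filters as a Python robot-config dict (keys mirror the YAML).
# `include_sensor_names` and `exclude_sensor_names` are mutually exclusive; example names
# are hypothetical.
robot_cfg = {
    "type": "Turtlebot",
    "obs_modalities": ["scan", "rgb", "depth"],
    "include_sensor_names": None,   # or e.g. ["robot0:eyes:Camera:0"] to keep only these sensors
    "exclude_sensor_names": None,   # or e.g. ["robot0:laser_link:Lidar:0"] to drop these sensors
    "scale": 1.0,
    "self_collision": False,
    "action_normalize": True,
}
```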
2 changes: 1 addition & 1 deletion omnigibson/controllers/ik_controller.py
@@ -53,7 +53,7 @@ def __init__(
pos_kp=None,
pos_damping_ratio=None,
vel_kp=None,
use_impedances=True,
use_impedances=False,
mode="pose_delta_ori",
smoothing_filter_size=None,
workspace_pose_limiter=None,
10 changes: 5 additions & 5 deletions omnigibson/controllers/joint_controller.py
@@ -13,7 +13,7 @@
)
from omnigibson.macros import create_module_macros
from omnigibson.utils.backend_utils import _compute_backend as cb
from omnigibson.utils.backend_utils import _ComputeBackend, _ComputeNumpyBackend, _ComputeTorchBackend
from omnigibson.utils.backend_utils import add_compute_function
from omnigibson.utils.python_utils import assert_valid_key
from omnigibson.utils.ui_utils import create_module_logger

@@ -223,7 +223,7 @@ def compute_control(self, goal_dict, control_dict):
else: # effort
u = target

u = cb.compute_joint_torques(u, control_dict["mass_matrix"], self.dof_idx)
u = cb.get_custom_method("compute_joint_torques")(u, control_dict["mass_matrix"], self.dof_idx)

# Add gravity compensation
if self._use_gravity_compensation:
@@ -341,6 +341,6 @@ def _compute_joint_torques_numpy(


# Set these as part of the backend values
setattr(_ComputeBackend, "compute_joint_torques", None)
setattr(_ComputeTorchBackend, "compute_joint_torques", _compute_joint_torques_torch)
setattr(_ComputeNumpyBackend, "compute_joint_torques", _compute_joint_torques_numpy)
add_compute_function(
name="compute_joint_torques", np_function=_compute_joint_torques_numpy, th_function=_compute_joint_torques_torch
)
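The diff replaces direct `setattr` calls on the backend classes with `add_compute_function`, and the call site now resolves the implementation through `cb.get_custom_method(...)`. The sketch below shows that registration/lookup pattern for a hypothetical function; the `add_compute_function(name=..., np_function=..., th_function=...)` and `cb.get_custom_method(name)` signatures come from this diff, everything else is illustrative.

```python
import numpy as np
import torch as th

from omnigibson.utils.backend_utils import _compute_backend as cb
from omnigibson.utils.backend_utils import add_compute_function

# Hypothetical per-backend implementations of the same operation.
def _scale_efforts_torch(u, scale):
    return u * th.as_tensor(scale)

def _scale_efforts_numpy(u, scale):
    return u * np.asarray(scale)

# Register once at module import time, mirroring the compute_joint_torques registration above.
add_compute_function(
    name="scale_efforts", np_function=_scale_efforts_numpy, th_function=_scale_efforts_torch
)

# At the call site, the active backend's implementation is resolved by name,
# just like `cb.get_custom_method("compute_joint_torques")(...)` in the diff above.
scaled = cb.get_custom_method("scale_efforts")(th.ones(3), 0.5)
```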
11 changes: 9 additions & 2 deletions omnigibson/controllers/multi_finger_gripper_controller.py
@@ -126,8 +126,15 @@ def _generate_default_command_output_limits(self):
if self._mode == "binary":
command_output_limits = (-1.0, 1.0)
# If we're in smoothing mode, output limits should be the average of the independent limits
elif self._mode == "smoothing":
command_output_limits = cb.mean(command_output_limits[0]), cb.mean(command_output_limits[1])
elif self._mode == "smooth":
command_output_limits = (
cb.mean(command_output_limits[0]),
cb.mean(command_output_limits[1]),
)
elif self._mode == "independent":
pass
else:
raise ValueError(f"Invalid mode {self._mode}")

return command_output_limits
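With the mode string corrected to "smooth" and unrecognized modes now raising a `ValueError`, here is a toy example (made-up limits) of what averaging the per-joint limits does in that mode.

```python
import torch as th

# Illustrative only: per-joint command output limits for a two-finger gripper.
per_joint_lower = th.tensor([0.00, 0.00])
per_joint_upper = th.tensor([0.04, 0.05])

# "smooth" mode collapses the per-joint limits to their means, so a single scalar command
# drives all finger joints against one shared limit, as in the branch above.
command_output_limits = (th.mean(per_joint_lower), th.mean(per_joint_upper))
print(command_output_limits)  # (tensor(0.), tensor(0.0450))
```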
