There are basically two methods I tried to replay (as of June 14).
Method 1
BEHAVIOR: master branch.
iGibson: master branch.
BDDL: master branch.
Dataset: version 2.0.6.
Command: python -m igibson.examples.learning.demo_replaying_examples
Results: I can see the demo, but the motion of the robot is different from the ground-truth environment. I believe I hit a similar (or even worse) issue as StanfordVL/iGibson#161 (comment). I know it may be because I am not running the demos on a Windows machine, but the replay also prints the messages below and I don't know what is going on. I simply cannot find the vr-demo-collection branch at all. Also, the dataset version 2.0.6 is suspicious; I don't know which version is correct for reproducing the demo.
********************************************************************************
WARNING:igibson.render.mesh_renderer.mesh_renderer_settings:WARN: Darwin does not support optimized renderer, automatically disabling
Warning, difference in git commits for repo: iGibson. This may impact deterministic replay
Logged git info:
{ 'branch_name': 'vr-demo-collection',
'code_diff': 'diff --git a/igibson/objects/multi_object_wrappers.py '
'b/igibson/objects/multi_object_wrappers.py\n'
'index 4827dad5..a3aa6610 100644\n'
'--- a/igibson/objects/multi_object_wrappers.py\n'
'+++ b/igibson/objects/multi_object_wrappers.py\n'
'@@ -129,7 +129,7 @@ class ObjectGrouper(BaseObject):\n'
' \n'
' # These attributes are used during object import '
'and should return\n'
' # the concatenation results of all objects in '
'self.objects\n'
'- if item in ["visual_mesh_to_material", '
'"link_name_to_vm", "body_ids", "is_fixed"]:\n'
'+ if item in ["visual_mesh_to_material", '
'"link_name_to_vm", "body_ids", "is_fixed", '
'"renderer_instances"]:\n'
' return '
'list(itertools.chain.from_iterable(attrs))\n'
' \n'
" # Otherwise, check that it's the same for everyone "
'and then just return the value.\n'
'@@ -188,7 +188,6 @@ class ObjectGrouper(BaseObject):\n'
' if issubclass(state_type, '
'AbsoluteObjectState):\n'
' '
'state_instance.load(dump[get_state_name(state_type)])\n'
' \n'
'-\n'
' class ObjectMultiplexer(BaseObject):\n'
' """A multi-object wrapper that acts as a proxy for the '
'selected one between the set of objects it contains."""\n'
' \n'
'diff --git '
'a/igibson/render/mesh_renderer/mesh_renderer_vr.py '
'b/igibson/render/mesh_renderer/mesh_renderer_vr.py\n'
'index a768ad31..f766f0ac 100644\n'
'--- a/igibson/render/mesh_renderer/mesh_renderer_vr.py\n'
'+++ b/igibson/render/mesh_renderer/mesh_renderer_vr.py\n'
'@@ -138,7 +138,7 @@ class VrSettings(object):\n'
' self.use_tracked_body = '
'shared_settings["use_tracked_body"]\n'
' self.torso_tracker_serial = '
'shared_settings["torso_tracker_serial"]\n'
' # Both body-related values need to be set in order '
'to use the torso-tracked body\n'
'- self.using_tracked_body = self.use_tracked_body and '
'self.torso_tracker_serial\n'
'+ self.using_tracked_body = self.use_tracked_body and '
'bool(self.torso_tracker_serial)\n'
' if self.torso_tracker_serial == "":\n'
' self.torso_tracker_serial = None\n'
' \n'
'diff --git a/igibson/robots/behavior_robot.py '
'b/igibson/robots/behavior_robot.py\n'
'index bc984df9..be99fa9f 100644\n'
'--- a/igibson/robots/behavior_robot.py\n'
'+++ b/igibson/robots/behavior_robot.py\n'
'@@ -195,6 +195,8 @@ class BehaviorRobot(ManipulationRobot, '
'LocomotionRobot, ActiveCameraRobot):\n'
' \n'
' # TODO: Remove hacky fix - constructor/config '
'should contain this data.\n'
' if self.simulator.mode == SimulatorMode.VR:\n'
'+ print("robot:", self.use_tracked_body)\n'
'+ print("sim:", '
'self.simulator.vr_settings.using_tracked_body)\n'
' assert (\n'
' self.use_tracked_body == '
'self.simulator.vr_settings.using_tracked_body\n'
' ), "Robot and VR config do not match in terms '
'of whether to use tracked body. Please update either '
'config."\n'
'diff --git a/igibson/vr_config.yaml '
'b/igibson/vr_config.yaml\n'
'index b6051117..13b7007b 100644\n'
'--- a/igibson/vr_config.yaml\n'
'+++ b/igibson/vr_config.yaml\n'
'@@ -36,7 +36,7 @@ shared_settings:\n'
' # Serial number of VR torso tracker - this can be found '
'by connecting/pairing the tracker,\n'
' # then going into Steam VR settings -> controllers -> '
'manage vive trackers\n'
' # Note: replace this with your own tracker serial number '
'or leave blank to not use one\n'
'- torso_tracker_serial: "LHR-DF82C682"\n'
'+ torso_tracker_serial: "LHR-BDE12AB6"\n'
' # Settings that are specific to different VR devices (eg. '
'eye tracking, button mapping)\n'
' device_settings:\n'
' HTC_VIVE_PRO_EYE:',
'code_diff_staged': '',
'commit_hash': 'bc2520de66025c486cff11e30d881b0f29cd1384'}
Current git info:
{ 'branch_name': 'master',
'code_diff': 'diff --git a/igibson/tasks/behavior_task.py '
'b/igibson/tasks/behavior_task.py\n'
'index 3c1b9868..d38b763f 100644\n'
'--- a/igibson/tasks/behavior_task.py\n'
'+++ b/igibson/tasks/behavior_task.py\n'
'@@ -127,6 +127,12 @@ class BehaviorTask(BaseTask):\n'
' self.conds, self.backend, self.object_scope, '
'self.goal_conditions\n'
' )\n'
' \n'
"+ # print('[DEBUG]][self.obj_scope]', "
'self.object_scope)\n'
'+ # print(self.initial_conditions)\n'
'+ # print(self.goal_conditions)\n'
"+ # print('[DEBUG][self.ground_goal_state_options]', "
'self.ground_goal_state_options)\n'
'+ # exit()\n'
'+\n'
' # Demo attributes\n'
' self.instruction_order = '
'np.arange(len(self.conds.parsed_goal_conditions))\n'
' np.random.shuffle(self.instruction_order)',
'code_diff_staged': '',
'commit_hash': '58ac14cf62949008b6851a5a95602cd5084edffd'}
Creating environment and resetting it
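Since the warning points at a commit mismatch, I assume the replay wants the exact logged commit checked out before running. Below is a rough sketch of the check I mean: the commit hash is the one from the "Logged git info" block above, the repo path is just a placeholder for my local checkout, and the checkout may simply fail because that commit comes from the vr-demo-collection branch I cannot find, but at least it makes the mismatch explicit.

import subprocess

# Commit hash recorded in the demo's "Logged git info" block above.
LOGGED_COMMIT = "bc2520de66025c486cff11e30d881b0f29cd1384"

# Path to the local iGibson checkout -- placeholder, adjust to your setup.
IGIBSON_REPO = "/path/to/iGibson"

def current_commit(repo_path):
    """Return the HEAD commit hash of the given git repository."""
    return subprocess.check_output(
        ["git", "-C", repo_path, "rev-parse", "HEAD"], text=True
    ).strip()

if current_commit(IGIBSON_REPO) != LOGGED_COMMIT:
    # Replaying on a different commit is what triggers the
    # "difference in git commits" warning and may break determinism.
    subprocess.check_call(["git", "-C", IGIBSON_REPO, "checkout", LOGGED_COMMIT])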
Method 2
BEHAVIOR: master branch.
iGibson: behavior-replay branch.
BDDL: master branch / behavior-refactored-verified-problems branch.
Dataset: version 2.0.6.
Command: python -m igibson.examples.behavior.behavior_demo_replay
Results: Error, no module named 'bddl.activity_base'. / Error, no module named 'igibson.task'.
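To figure out which API layout my installed packages actually expose, I used a quick probe like the one below. The module names are just my guesses, taken from the two failed imports above plus the names I would expect in the newer package layouts.

import importlib.util

# Modules the replay script failed to import, plus their likely newer counterparts (guesses).
candidates = ("bddl.activity_base", "bddl.activity", "igibson.task", "igibson.tasks")

for mod in candidates:
    try:
        found = importlib.util.find_spec(mod) is not None
    except ModuleNotFoundError:
        # The parent package itself is not installed.
        found = False
    print(f"{mod}: {'found' if found else 'missing'}")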
Other issues
The action primitive baselines are missing. The behavior cloning baseline trains okay, but testing fails. The BDDL branches are confusing: the documentation is unfinished, and I cannot run the code by following the default readme.md (I guess it describes an older version?). pytest fails on the BDDL behavior-refactored branch.
Summary
In short, I cannot replay the demos at all. Can anyone write detailed instructions for this? If you want the benchmark to get attention, people should at least be able to reproduce the basics easily, right? Many of the instructions are out of date and inconsistent with each other. Can anyone tell me which one is the correct way to reproduce the VR demos? This https://github.com/StanfordVL/behavior/blob/main/docs/vr_demos.md does not work.