Running inference on the model in Realtime #522
GandharvMahajan started this conversation in General
Replies: 2 comments · 1 reply
-
Hi @GandharvMahajan - running a jitted version of the inference function on its own should be ludicrously fast. If you can make a repro in a Colab, we can help you investigate.
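As a sanity check, one way to confirm this is to time the jitted function in isolation, separating the first (compiling) call from steady-state calls. A minimal sketch with a placeholder policy (the network shapes and names here are illustrative, not the tutorial's actual ones):

```python
import time
import jax
import jax.numpy as jnp

# Placeholder for the trained policy; shapes are illustrative only.
def policy(params, obs):
    return jnp.tanh(obs @ params)

params = jnp.ones((48, 12))
obs = jnp.zeros(48)

inference = jax.jit(policy)
inference(params, obs).block_until_ready()  # first call traces and compiles

start = time.perf_counter()
for _ in range(1000):
    out = inference(params, obs)
out.block_until_ready()  # flush JAX's async dispatch before stopping the clock
elapsed = time.perf_counter() - start
print(f"1000 jitted calls: {elapsed:.4f}s")
```

Note the warm-up call before the timer starts: without it, the measured time includes XLA compilation, which can dominate a short benchmark.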
-
Here's the inference code if someone is interested.
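The snippet originally posted here was not preserved, but a realtime control loop in this setting might look roughly like the sketch below. Everything in it is an assumption, not the poster's actual code: `inference_fn` stands in for the trained PPO policy, and the key-to-command mapping mimics the joystick tutorial's (vx, vy, yaw-rate) commands.

```python
import jax
import jax.numpy as jnp

# Hypothetical stand-in for the trained PPO policy: obs + command -> action.
def inference_fn(obs, command):
    return jnp.tanh(obs[:12] + command[0])

# JIT once, outside the loop, so the compiled executable is reused.
inference_jit = jax.jit(inference_fn)

# Map user keys to (vx, vy, yaw-rate) command vectors (layout assumed).
KEY_TO_COMMAND = {
    "up":    jnp.array([ 1.0, 0.0,  0.0]),
    "down":  jnp.array([-1.0, 0.0,  0.0]),
    "left":  jnp.array([ 0.0, 0.0,  1.0]),
    "right": jnp.array([ 0.0, 0.0, -1.0]),
}

obs = jnp.zeros(48)
for key_name in ["up", "up", "left"]:  # stand-in for realtime key events
    action = inference_jit(obs, KEY_TO_COMMAND[key_name])
```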
-
Hi, I followed the quadruped joystick controller Colab tutorial and trained a PPO policy to walk. I made some changes at the end to take user input in realtime (up, down, left, and right). I was wondering whether it's possible to run the trained model for realtime inference. Currently, stepping the jitted model for even 1000 steps takes about 20 seconds. Can someone point me to a solution?
Thanks.