Hello everyone,
I’m currently working on a research project using Reachy2, where I’ve built a real-time AR/VR digital twin that mirrors the robot’s movements. This part is already working: the digital twin reflects Reachy’s joint and base motion correctly.
Now, I want to move beyond mirroring and implement prediction — meaning the AR digital twin should visualize what the robot is about to do slightly before the physical motion happens (e.g., a short-horizon predictive display).
To do this, I need access not just to the robot’s current state, but ideally to the teleoperation command stream or target setpoints that the operator is sending (e.g., base velocity commands, joint targets, trajectories, etc.).
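For context, here's roughly the predictor I'd feed those commands into once I can get at them. This is just a plain-Python sketch; `Setpoint` and `CommandBuffer` are my own placeholder names, not anything from the Reachy SDK or ROS2:

```python
import bisect
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Setpoint:
    t: float          # time the command was issued (seconds)
    q: List[float]    # target joint positions (rad)

class CommandBuffer:
    """Keeps recent teleop setpoints so the AR twin can query the
    newest target at or before a given time (e.g. now + 0.3 s)."""

    def __init__(self) -> None:
        self._times: List[float] = []
        self._points: List[Setpoint] = []

    def push(self, sp: Setpoint) -> None:
        # Assumes setpoints arrive in time order, as a live command stream would.
        self._times.append(sp.t)
        self._points.append(sp)

    def latest_before(self, t: float) -> Optional[Setpoint]:
        # Binary search for the last setpoint issued at or before time t.
        i = bisect.bisect_right(self._times, t)
        return self._points[i - 1] if i > 0 else None
```

The AR side would then render `buf.latest_before(now + horizon)` as the predicted pose instead of the measured state.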
My main questions:
- When Reachy2 is teleoperated (e.g., via VR/joystick/UI), where do the actual control commands exist?
  - Are they published as ROS2 topics (e.g., `/cmd_vel`, joint trajectories, etc.)?
  - Or are they only internal to the Reachy SDK / control stack?
- Is there a way to access:
  - Desired joint positions/velocities (targets)?
  - Base motion commands (before execution)?
  - Any kind of "intent" signal rather than a measured state?
- If no explicit command topics exist, is there a way to access:
  - Controller setpoints?
  - Target states before they are applied?
- Has anyone implemented or explored a predictive display/intent visualization with Reachy before?
My goal is to use this information to visualize the robot’s future pose in AR (e.g., 200–500 ms ahead), not just its current pose.
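As a fallback, if only measured state turns out to be accessible, I'm considering simple constant-velocity extrapolation over that horizon (again just a sketch, with illustrative signatures):

```python
from typing import List

def predict_pose(q: List[float], qdot: List[float], horizon_s: float) -> List[float]:
    """Constant-velocity extrapolation of joint positions.

    q: measured joint positions (rad)
    qdot: measured joint velocities (rad/s)
    horizon_s: look-ahead, e.g. 0.2-0.5 s for the predictive display
    """
    return [qi + vi * horizon_s for qi, vi in zip(q, qdot)]
```

But extrapolating measured state will always lag behind the operator's true intent, which is exactly why I'd rather tap the command stream directly.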
Any guidance on where to tap into the teleoperation pipeline, relevant topics, SDK hooks, or best practices would be hugely appreciated.
Thanks a lot!