Reachy Grasping

Hi, I was trying to make Reachy perform a grasping action using the ready-made simulator that you, Pollen Robotics, shared.

In particular, my goal is to:

  1. Take the last frame acquired from Reachy’s right eye;
  2. Run object detection to find the cup on the table in front of Reachy;
  3. Run object segmentation to obtain the segmentation mask of the detected object (the cup);
  4. Compute the 3D position of this object (the cup) in space;
  5. Have Reachy move to the coordinates computed in the previous step, grasp the object with its gripper, and then release it.
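A small helper that bridges steps 3 and 4 is extracting a representative pixel from the segmentation mask, for example its centroid, which can then be combined with depth to get a 3D position. This is only an illustrative sketch, assuming the mask is a 2D boolean NumPy array (the function name `mask_centroid` is mine, not from pollen-vision):

```python
import numpy as np

def mask_centroid(mask: np.ndarray) -> tuple[int, int]:
    """Return the (u, v) pixel coordinates of the centroid of a binary mask."""
    ys, xs = np.nonzero(mask)  # rows (v) and columns (u) of the object pixels
    if len(xs) == 0:
        raise ValueError("Empty mask: no object pixels found")
    return int(xs.mean()), int(ys.mean())

# Toy example: a 5x5 mask with a 2x2 "cup" in the lower-right corner
mask = np.zeros((5, 5), dtype=bool)
mask[3:5, 3:5] = True
u, v = mask_centroid(mask)  # centroid pixel of the object region
```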

For the first four points I had no problems: the code runs correctly and does what I want.

The problems start at point 5.

To carry out this grasping action I relied on the following file shared on GitHub: pollen-vision/scripts/grasp_demo.py at develop · pollen-robotics/pollen-vision · GitHub,
which should serve precisely this purpose.

Could you tell me what I should fix in my code, or share a working Jupyter notebook that does roughly the same thing, so that I can understand where I’m going wrong and get this grasping action working on Reachy?

I leave my code attached to the following link: https://drive.google.com/file/d/1paQZ5ROyZMIj7YwXRKzhKEmlh4AoRqIX/view?usp=drive_link

Thanks in advance.

Hi @stefano_caramagno !

I could not look at your code on Google Drive (I requested access), but in order to compute the position of the object in space, you need a depthmap. As is, you can’t get a depthmap from the Reachy in Unity package through our SDK.
In any case, I don’t think you will be able to grasp anything with the collision detection in Unity; you can mainly use it as a means of visualizing the motion of the robot.
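To make the depthmap requirement concrete: once you have a pixel (u, v) for the object and its depth z, the standard pinhole camera model gives the 3D point in the camera frame. A minimal sketch (the intrinsics fx, fy, cx, cy here are made-up illustrative values; with a real camera you would read them from its calibration):

```python
import numpy as np

def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth (meters) into camera-frame
    3D coordinates using the pinhole model: x = (u - cx) * z / fx, etc."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel at the optical center maps straight ahead along the optical axis.
p = deproject_pixel(u=320, v=240, depth_m=0.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# p -> [0.0, 0.0, 0.5]
```

Without the depth value `depth_m`, this back-projection is underdetermined, which is why the RGB-only stream from the simulator is not enough.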

Hi @apirrone, sorry, I didn’t have public access set up on Google Drive; I have now accepted your request.
If you could take a look at my code you would do me a huge favor. (grasping.ipynb - Google Drive)

So, in the simulator’s current state of development, is it only possible to perform a grasping action on an object whose coordinates are already known a priori, and not possible to compute an object’s coordinates and then grasp it?

Do the microphone and speaker also not work with the simulator at the moment?

Are all the activities listed above possible with the physical robot?

Thanks in advance

Hi @stefano_caramagno

The grasping demo script you took as an example (pollen-vision/scripts/grasp_demo.py at develop · pollen-robotics/pollen-vision · GitHub) requires access to depth information in order to estimate the 3D position of the object in space. This information is not available through the SDK of Reachy 1, so you won’t be able to get the 3D position of the object.

Moreover, as I said before, even if you were able to estimate the 3D position of the object, you could only execute the movement in Unity to see what it does; I don’t think you would be able to really grasp the object in this simulation, as the collision engine is not tuned for this.
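For context, to even attempt the reach motion (in simulation or on a real robot), the object’s 3D position has to be packaged as a 4x4 homogeneous pose matrix, which is what the Reachy Python SDK’s `inverse_kinematics()` takes as input. A sketch of building such a matrix, where the cup position and the identity orientation are purely illustrative assumptions:

```python
import numpy as np

def target_pose(position_xyz, rotation=None):
    """Build a 4x4 homogeneous pose matrix from a 3D position (meters) and an
    optional 3x3 rotation matrix (identity if omitted)."""
    pose = np.eye(4)
    if rotation is not None:
        pose[:3, :3] = rotation
    pose[:3, 3] = position_xyz
    return pose

# Hypothetical cup position in the torso frame: 30 cm forward, 20 cm to the right, below shoulder level
cup_pose = target_pose([0.3, -0.2, -0.3])

# With a real robot, you would then pass this to the SDK (not runnable here):
# joint_positions = reachy.r_arm.inverse_kinematics(cup_pose)
```

Note that the gripper orientation encoded in the rotation part of the matrix matters a lot for a real grasp; the identity used here is just a placeholder.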

The microphone and speaker will not work in the simulation.

If you have access to a real physical Reachy and a depth camera, however, you should be able to reproduce the demo.