WIP using Reachy for mask detection and distribution

Hello there

We are working on a new application with Reachy so he can detect whether people are wearing a mask and distribute masks.

Here is a teaser:

The most difficult part is definitely the distribution, but we have managed to achieve efficient and robust grasping of each mask.

We’ve 3D-printed a new left hand to store masks and added rubber material to the right thumb. Then, like a human does, Reachy slides his thumb to pull out one mask and simply closes the gripper to grab it. It works surprisingly well!

And you know what? The exact same code and hardware work for flyer distribution as well!

We will soon publish the app and hardware files.

Special thanks to @Gaelle, @Simon and @Augustin, who worked hard to make it real.


Any progress on this release or even a WIP version? We’d like to possibly pursue something similar.

Hello @DanAtCircuitLaunch,

In fact, we have put this project aside for a while… We will take a little time this month to release the first version so you can have a look!


Can you guys give him thermal vision… Not to be confused with laser eyes but thermal vision like… Predator
Raspberry Pi: Turn the popular single-board computer into an inexpensive thermal camera - NotebookCheck.net News

We have published a version of the mask distribution app on GitHub so you can have a look!
The repository is not very clean, as we are busy working on the new version of Reachy, but you can get an idea of how it was made :wink:


Thanks. We are excited to see it. Unfortunately, it looks like your repo is still private. I am getting a 404 error from GitHub with that link.

That should be ok now :wink:


Can you include some test data images in your Github for validation of your model?
I am trying out your model but not sure where to find test data. Please advise. Thanks.

Hi Charles,
The training data of the game piece classification model can be found on the tutorial repository.
For the valid/invalid board classifier, we don’t have the training data, but it’s just images of the board where the robot can play or not. We consider the board invalid in situations such as when someone’s hand is above the board, when there are multiple pieces in the same square, or when there are unexpected objects on the board.

Simon: I mean the validation test images for the mask detection model.

My bad, I thought you were asking this in the tic-tac-toe thread, everything I used for the mask classifier can be found here.
I don’t know which images were used for the test set because Google’s imprinting_learning.py script randomly splits the image set into training and test sets.
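If you want the split to be reproducible, one option is to do it yourself with a fixed seed before training. This is just a minimal sketch of the idea, not what imprinting_learning.py actually does; the filenames and the 80/20 ratio are arbitrary assumptions:

```python
import random

def split_dataset(filenames, test_ratio=0.2, seed=42):
    """Deterministically split image filenames into train/test lists."""
    files = sorted(filenames)      # fixed order before shuffling
    rng = random.Random(seed)      # seeded RNG -> same split on every run
    rng.shuffle(files)
    n_test = int(len(files) * test_ratio)
    return files[n_test:], files[:n_test]

# Hypothetical 60-image dataset, matching the dataset size mentioned below
images = [f"img_{i:03d}.jpg" for i in range(60)]
train, test = split_dataset(images)
print(len(train), len(test))  # 48 12
```

With the same seed you always get the same test set, so you know exactly which images were held out.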

I used 60 images from this Kaggle dataset.

To perform the mask detection, I use the face detection model from Google Coral and then apply the mask classifier to each detected face to determine whether or not the person is wearing a mask.
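The two-stage pipeline looks roughly like this. Note that `detect_faces` and `classify_mask` below are stand-ins for the actual Coral Edge TPU models, and the frame is represented as a plain nested list; the real code runs the TFLite interpreters instead:

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def detect_faces(frame) -> List[Box]:
    """Stand-in for the Coral face detection model: returns face boxes."""
    # A real implementation would run the Edge TPU detector on `frame`.
    return [(10, 10, 50, 50)]

def classify_mask(face_crop) -> str:
    """Stand-in for the mask classifier: returns 'ok_mask' or 'no_mask'."""
    return "ok_mask"

def detect_masks(frame) -> List[Tuple[Box, str]]:
    """Stage 1: detect faces. Stage 2: classify each cropped face region."""
    results = []
    for (x, y, w, h) in detect_faces(frame):
        crop = [row[x:x + w] for row in frame[y:y + h]]  # crop the face box
        results.append(((x, y, w, h), classify_mask(crop)))
    return results

frame = [[0] * 100 for _ in range(100)]  # dummy 100x100 image
print(detect_masks(frame))
```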

Thanks for the clarification, Simon.

I was confused because the GitHub repo referenced above (GitHub - pollen-robotics/reachy-masks) does not appear to contain any mask detection code. Instead, it uses a faceNet model to track people.

Using code similar to TicTacToe’s, I am able to use the classifier model to detect the mask after cropping out the bounding box from the face detector model. However, the accuracy is not very good, with a success probability around 0.5, so the result flips between no_mask and ok_mask from one camera image to the next… Did you experience a similar problem?

I also observed that the face detection model does not detect faces as well when the face has a mask on…
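One common way to mitigate that frame-to-frame flipping (not necessarily what the Reachy app does) is to smooth predictions over a short window instead of trusting a single frame. A minimal majority-vote sketch, where the window size is an arbitrary choice:

```python
from collections import Counter, deque

class LabelSmoother:
    """Majority vote over the last `window` per-frame labels to suppress flicker."""

    def __init__(self, window=7):
        self.history = deque(maxlen=window)  # rolling window of recent labels

    def update(self, label: str) -> str:
        self.history.append(label)
        # The most common label in the recent window wins
        return Counter(self.history).most_common(1)[0][0]

smoother = LabelSmoother(window=5)
# Hypothetical noisy per-frame predictions from the classifier
stream = ["no_mask", "ok_mask", "no_mask", "no_mask", "ok_mask", "no_mask"]
results = [smoother.update(label) for label in stream]
print(results)
```

Even though the raw stream flips back and forth, the smoothed output stays stable once a few frames have accumulated.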