We are working on a new application with Reachy so he can detect whether people are wearing a mask and distribute masks to them.
Here is a teaser:
The most difficult part is definitely the distribution, but we have managed to achieve an efficient and robust grasp of each mask.
We’ve 3D printed a new left hand to hold the stack of masks and added rubber material to the right thumb. Then, like a human does, Reachy slides his thumb to pull one mask out and simply closes the gripper to grab it. It works surprisingly well!
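In code, the grasp boils down to two motions. The sketch below uses the goto-style API of the reachy Python package, but the joint names, angles, and durations are placeholders for illustration, not the values from the demo:

```python
from reachy import Reachy, parts

reachy = Reachy(right_arm=parts.RightArm(io='/dev/ttyUSB*', hand='force_gripper'))

# 1. Slide the rubber-coated thumb across the stack to peel one mask away
#    (placeholder joint and angle).
reachy.goto(goal_positions={'right_arm.hand.wrist_roll': 30},
            duration=1, wait=True)

# 2. Close the gripper on the mask that was pulled out (placeholder angle).
reachy.goto(goal_positions={'right_arm.hand.gripper': -30},
            duration=0.5, wait=True)
```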
And you know what? The exact same code and hardware work for flyer distribution as well!
We have published a version of the mask distribution application on GitHub so you can have a look!
The repository is not very clean, as we are busy working on the new version of Reachy, but it should give you an idea of how it was made.
Can you include some test images in your GitHub repository for validating your model?
I am trying out your model but I'm not sure where to find test data. Please advise. Thanks.
Charles
Hi Charles,
The training data for the game piece classification model can be found in the tutorial repository.
For the valid/invalid board classifier, we don’t have the training data, but it was just images of the board labeled by whether the robot can play or not. We consider the board invalid in situations such as when someone’s hand is above the board, when there are multiple pieces in the same square, when there are stray objects on the board, etc.
My bad, I thought you were asking this in the tic-tac-toe thread. Everything I used for the mask classifier can be found here.
I don’t know which images were used for the test set, because Google’s imprinting_learning.py script randomly splits the image set into training and test sets.
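If you need a fixed test set on your side, one workaround (not part of the original script) is to split the images yourself with a fixed seed before training, along these lines:

```python
import glob
import random

image_paths = sorted(glob.glob('data/*.jpg'))  # hypothetical data directory
random.seed(0)                                 # fixed seed -> reproducible split
random.shuffle(image_paths)
cut = int(0.8 * len(image_paths))              # 80/20 train/test split
train_paths, test_paths = image_paths[:cut], image_paths[cut:]
```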
To perform mask detection, I use the face detection model from Google Coral and then apply the mask classifier to each detected face to determine whether or not the person is wearing a mask.
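In outline, the two-stage pipeline looks like the sketch below. It is written against the current pycoral API rather than the library used at the time, and the mask classifier file name and score threshold are assumptions:

```python
from PIL import Image
from pycoral.adapters import classify, common, detect
from pycoral.utils.edgetpu import make_interpreter

# Stage 1 model: the public Coral face detector.
face_interpreter = make_interpreter(
    'ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite')
# Stage 2 model: the retrained mask classifier (file name is hypothetical).
mask_interpreter = make_interpreter('mask_classifier_edgetpu.tflite')
face_interpreter.allocate_tensors()
mask_interpreter.allocate_tensors()

image = Image.open('frame.jpg').convert('RGB')

# Detect faces on the full frame.
_, scale = common.set_resized_input(
    face_interpreter, image.size,
    lambda size: image.resize(size, Image.LANCZOS))
face_interpreter.invoke()
faces = detect.get_objects(face_interpreter, score_threshold=0.5,
                           image_scale=scale)

# Classify each detected face crop as mask / no-mask.
for face in faces:
    box = tuple(map(int, (face.bbox.xmin, face.bbox.ymin,
                          face.bbox.xmax, face.bbox.ymax)))
    crop = image.crop(box).resize(common.input_size(mask_interpreter),
                                  Image.LANCZOS)
    common.set_input(mask_interpreter, crop)
    mask_interpreter.invoke()
    top = classify.get_classes(mask_interpreter, top_k=1)[0]
    print(box, top.id, top.score)
```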
I was confused because the GitHub repo referenced above (GitHub - pollen-robotics/reachy-masks) does not appear to contain any mask detection code. Instead, it uses a FaceNet model to track people.
Using code similar to the TicTacToe example, I am able to use the classifier model to detect masks after cropping out the bounding box produced by the face detector model. However, the accuracy is not very good, with a success probability around 0.5, so the result flips between no_mask and ok_mask from one camera frame to the next… Did you experience a similar problem?
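One common mitigation for this kind of frame-to-frame flicker, independent of this thread's code, is to vote over the last few predictions instead of trusting each frame individually. A minimal sketch:

```python
from collections import Counter, deque

history = deque(maxlen=5)  # labels from the last 5 frames

def smoothed_label(frame_label):
    """Majority vote over recent frames to suppress flicker."""
    history.append(frame_label)
    return Counter(history).most_common(1)[0][0]
```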