How to make your Reachy play tic-tac-toe

Make your own Reachy play tic-tac-toe (WIP!)

Required: Reachy right arm, Head

This tutorial will guide you through the building of a new environment: making your Reachy autonomously play a real game of tic-tac-toe against a human player. You can see what it will look like once finished in the video below:

Pretty cool, right? But let’s dive into the project, as we have quite a few areas to cover. We’ll need to:

  • build the hardware setup

  • record the motions for grasping a pawn and placing it on one of the squares

  • use vision to analyse the board

  • train an AI to play tic-tac-toe

  • and finally integrate everything in a game loop so we can try to beat Reachy

The whole code for this tutorial is built upon Reachy’s Python API. We assume that you’re already familiar with its basic usage and especially that you know how to make it move.

TODO: ICON TRAJ

TODO: ICON VISION

TODO: ICON AI

Build the hardware setup

First things first, we’ll start by building the hardware setup. To create this environment you will need:

  • a wood plank for the board (we used a 60x60 cm one)

  • 5 pawns for the robot: cylinders (radius 5 cm, height 7 cm)

  • 5 pawns for the human player: cubes (5x5x5 cm)

First of all, let’s draw the tic-tac-toe grid: use black tape to mark out 9 squares of 12.5 cm each.

Now, Reachy has to be set on the board. To do that, two solutions are possible:

  • Fix the metallic structure to the board with clamps. This works in the short term but is not very reliable.
  • Drill four 8.2 mm diameter holes in the board, so the metallic structure can be fastened with screws and nuts.

When that’s done, we can fasten the metallic triangle of Reachy’s structure by putting a screw through it and through the board, then adding a washer and a nut on the other side of the board.

To allow Reachy to find its pawns, we place small stickers on the board marking where Reachy’s cylinders will be put. These stickers can be placed freely on Reachy’s right side.

Grab and place pawns on the board

To keep things simple, we use fixed grasping positions:

  • We could hardcode every motion by defining key points (in joint or cartesian space), but that is complex and tedious.

  • There are 5 positions to grasp a pawn and 9 positions to place it, so 45 trajectories in total. All of them can be recorded, or we can record half-trajectories (grasp and place) and reconstruct the full movements from them.

  • Be careful if there is already a pawn in the way.

  • Apply smoothing to make the movements more fluid, and use a curve editor to make them nicer (see the recording sketch below).

  • Many improvements can be imagined: inverse kinematics, splines, handling the positions of the various pawns.

  • Record two versions of a movement depending on whether the path is clear or not.
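
To give a more concrete idea, here is a minimal sketch of how a grasp or place motion could be recorded and saved for replay. It is not the exact code used in the demo: read_joint_positions is a hypothetical callable standing in for however your Reachy SDK version exposes the right arm’s present joint angles, and the smoothing is a simple moving average.

import time
import numpy as np

def record_trajectory(read_joint_positions, duration_s=6.0, freq_hz=100):
    """Sample joint positions at a fixed rate while you guide the compliant arm by hand."""
    samples, dt = [], 1.0 / freq_hz
    t_end = time.time() + duration_s
    while time.time() < t_end:
        samples.append(read_joint_positions())
        time.sleep(dt)
    return np.array(samples)  # shape: (n_samples, n_joints)

def smooth(traj, window=15):
    """Moving-average smoothing to make the replayed movement more fluid."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(traj[:, j], kernel, mode='same')
                            for j in range(traj.shape[1])])

# Example: record the half-trajectory that grabs pawn 1 and save it for later replay.
# traj = smooth(record_trajectory(read_joint_positions))
# np.savez('moves/grab_1.npz', traj=traj)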

Board analysis using vision

Ok, so we built our setup and our Reachy is able to play pawns on the board. As we want our robot to play autonomously, it needs to be able to detect the game state, meaning look at the board, analyse it and detect where its pawns are and where the human’s are located.

This is a prerequisite before actually reacting to what the human player is doing and playing autonomously when Reachy’s turn comes.

There are a lot of ways we could have chosen to do this. For instance, one could have used RFID tags on the board to detect where the pawns are. We decided to go with vision for two main reasons.

First, we are designing a game between a human player and Reachy. The interaction between them is at the heart of the project, so it’s important that Reachy “shows” the player what it is doing and what its intentions are. For example, the robot needs to look at the board, pause for a while as if it were thinking, and then actually play. So, as our robot will look at the board anyway, why not really look and use its camera to infer what’s happening?

Second, our setup is rather simple from a vision perspective. We have a known background (the board) and two different objects to recognise (Reachy’s pawn and the human’s pawn). Today, we can use some really effective tools for solving this task as we will show below.

Board analysis: a classification problem

Extract thumbnails for each square

As stated above, our task is quite constrained and we can easily make it even more constrained. We can fix the head position when we look at the board so we always see it from the exact same point of view.

Thus, we can identify the exact area of the image corresponding to each square of the board. Once we have done this, we can extract 9 smaller images: one per square. Here’s an example of what the process will look like:

The code for extracting the images is shown here: → TODO: link to notebook?
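
As an illustration, here is a minimal sketch of the cropping step. The box coordinates below are made up: with the head pose fixed, each square maps to a constant region of the camera image, but you will have to measure the values for your own setup.

import cv2

X0, Y0 = 120, 80   # top-left corner of the grid in the image (pixels, to measure yourself)
BOX = 100          # side length of one square in the image (pixels, to measure yourself)

def extract_thumbnails(board_image):
    """Crop one thumbnail per square (row-major order) from a full view of the board."""
    thumbs = []
    for row in range(3):
        for col in range(3):
            x, y = X0 + col * BOX, Y0 + row * BOX
            thumbs.append(board_image[y:y + BOX, x:x + BOX])
    return thumbs

# Example usage on an image captured from Reachy's camera:
# cases = extract_thumbnails(cv2.imread('board_images/board_000.jpg'))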

Cube, Cylinder or nothing?

Now, we need to determine, for each image, whether there is a cylinder (robot’s pawn), a cube (human’s pawn) or nothing. For simplicity, we assume here that there will be at most one pawn on each square. This assumption may break, especially if one of the players decides to do something unusual.

In this context, our vision task becomes a classification task: for each image, we need to decide which of three classes it belongs to: cube, cylinder or nothing.

Collect and label data

Thanks to recent developments in deep learning, this image classification task has become rather straightforward (at least when the categories are simple). The only thing needed is labelled data to train our classifier. Here, this means gathering 3 sets of images, one folder per class.

First, let’s collect images of the board. Keep in mind that for robustness it’s better to collect images in different conditions: lighting, slightly different positions and orientations. Here are a few examples of different board configurations:

A good practice is actually to build the database incrementally: 10 images in the morning, 10 more 2 hours later when there is a cloud in front of the window, etc. Having different people provide different examples may also be a good idea!

We recommend having 3 cylinders, 3 cubes and 3 empty squares on the board each time you take a picture; that way you will have the same number of images for each category. As our task is actually quite simple, collecting 10 images of the board should already give you satisfying results. The more images you add to the process, the more robust the classification will be.

You can find the code for recording images of the board here: TODO: link notebook
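
If you want a rough idea of what such a recording loop looks like, here is a sketch. It assumes the camera is reachable through OpenCV (device index 0); the actual notebook accesses Reachy’s camera through the Reachy software stack, so adapt the capture call to your setup.

import os
import cv2

os.makedirs('board_images', exist_ok=True)
cap = cv2.VideoCapture(0)  # adapt to however you access Reachy's camera

for i in range(10):
    input(f'Rearrange the pawns, then press Enter to capture image {i}... ')
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f'board_images/board_{i:03d}.jpg', frame)

cap.release()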

Thanks to the boxes you defined previously, the notebook leaves you with a folder full of images of single squares.

Now, it’s time for the not-so-entertaining part of the work: labelling the data. Here, it simply means looking at the images you obtained and moving each one to the corresponding folder (cube, cylinder, or nothing) depending on what you see in the image.
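
A small helper can make this sorting step less tedious. The sketch below is just one possible approach (folder names and keys are arbitrary): it displays each thumbnail and moves it to the class folder matching the key you press.

import os
import shutil
import cv2

SRC = 'cases'  # folder produced by the extraction step
CLASSES = {'c': 'cube', 'y': 'cylinder', 'n': 'nothing'}

for name in CLASSES.values():
    os.makedirs(os.path.join('labelled', name), exist_ok=True)

for fname in sorted(os.listdir(SRC)):
    path = os.path.join(SRC, fname)
    cv2.imshow('label me (c / y / n)', cv2.imread(path))
    key = chr(cv2.waitKey(0) & 0xFF)
    if key in CLASSES:
        shutil.move(path, os.path.join('labelled', CLASSES[key], fname))

cv2.destroyAllWindows()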

Train a classifier

Deep learning especially shines when it comes to classifying images. That’s the approach we are going to follow here. We will use Tensorflow to build our classifier and train it.

Note: The model we will use is pretty computation-intensive. So, to get real-time detection and to speed up training, we recommend using a machine with a GPU. For more details on how to set up your machine and install the needed software, read the install section of the TensorFlow website. The Reachy from the video is actually running on a Jetson TX2 board.

Actually, we will use a network pre-trained on millions of images and slightly adapt it (fine-tune it) to our task.
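
The demo in the video runs a model compiled for the Edge TPU, but to give an idea of the fine-tuning step, here is a minimal Keras sketch. It assumes your labelled thumbnails are stored in labelled/cube, labelled/cylinder and labelled/nothing, as in the labelling helper above.

import tensorflow as tf

IMG_SIZE = (128, 128)

# One sub-folder per class: labelled/cube, labelled/cylinder, labelled/nothing.
train_ds = tf.keras.utils.image_dataset_from_directory(
    'labelled', image_size=IMG_SIZE, batch_size=16)
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))

# Pre-trained feature extractor (frozen); only the small classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights='imagenet', pooling='avg')
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation='softmax'),  # cube / cylinder / nothing
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_ds, epochs=10)
model.save('case_classifier.h5')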

The code can be found here: TODO: notebook

If you want to go further and better understand what we did, you can find a lot of good introductions on how image classification works with deep learning:

TODO: links on deep learning + classification

Real time evaluation of our model

It’s time to test our model in real conditions. To do that, the notebook below will guide you through this simple process:

  • Connect to Reachy and make it look at the board

  • Load our trained model and start predicting what’s inside each square of the board (this actually runs 9 classifications, one per square)

  • Display what the model predicted directly on the image

  • Start placing pawns on the board, move them around, and see how the model reacts

TODO: link to notebook
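
As a rough sketch of what that loop can look like (reusing the hypothetical extract_thumbnails helper and the case_classifier.h5 model from the previous sections, and assuming an OpenCV-accessible camera):

import cv2
import numpy as np
import tensorflow as tf

CLASSES = ['cube', 'cylinder', 'nothing']
model = tf.keras.models.load_model('case_classifier.h5')
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input

cap = cv2.VideoCapture(0)  # adapt to however you grab Reachy's camera frames
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    thumbs = [cv2.resize(t, (128, 128)) for t in extract_thumbnails(rgb)]
    preds = model.predict(preprocess(np.array(thumbs, dtype=np.float32)), verbose=0)
    labels = [CLASSES[i] for i in preds.argmax(axis=1)]  # 9 labels, one per square
    print(labels[0:3], labels[3:6], labels[6:9])
    cv2.imshow('board', frame)
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()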

If your model performs poorly, it most likely means that you need to use more (and more diverse) training data.

You may notice that when you move your hand in front of the board, the model predictions become less robust. This is normal, as we did not show it any hands during training. We will actually tackle this issue in the next part.

Is the board valid?

Something that we did not take into account so far is whether the board can be analysed or not. During a game, many configurations may occur where the board cannot be robustly analysed. It could be that there is a hand or an arm in front of the board hiding some squares. It could also be that the human is still holding a piece, hesitating over where to put it. A few examples of what we mean are shown below:

TODO: images invalid

It’s important that our system is able to recognise these configurations and wait for a better one. We will use the exact same approach as before: we will train another classifier to recognise two categories: valid board and invalid board.

We will follow the same process: gather board images, both valid and invalid, label them and train a new classifier. The whole process is done in TODO: notebook.

Putting everything together

We can run our live detection once again, but with the valid/invalid detection running first, so that we only run the board analysis when we are sure the board is okay. You can try again with this notebook.
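
In pseudo-code form, the combined logic is simply the following (capture_board_image, is_board_valid and analyse_board are hypothetical stand-ins for the camera grab, the valid/invalid classifier and the 9 per-square classifications):

import time

def read_game_state(max_tries=50):
    """Return the analysed board only once the view is judged valid."""
    for _ in range(max_tries):
        image = capture_board_image()
        if is_board_valid(image):
            return analyse_board(image)  # e.g. 9 labels: cube / cylinder / nothing
        time.sleep(0.5)  # a hand or arm is probably in the way: wait and retry
    raise RuntimeError('Could not get a valid view of the board')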

Time to play

Reachy needs some training first

Game loop

First fully functional implementation

Make it fun

Add non-verbal communication

Victory, Defeat and other celebrations


Thanks for sharing your great knowledge. I have an issue with Tic Tac Toe multiplayer: when I hit the centre (like zero), the corner spaces automatically fill out… Do you have a better solution?

Hi @tictactoe,

I’m not sure I understand your question? If you are wondering how we trained our AI for playing tic-tac-toe, we used Q-learning and self-play. You will find tons of resources online on the details of how to do that.
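
For the curious, here is a minimal sketch of tabular Q-learning with self-play for tic-tac-toe. It is only an illustration of the idea, not the code running on Reachy:

import random
from collections import defaultdict

# States are (board, player): board is a 9-tuple of 0/1/2, player is whose turn it is.
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == 0]

def best_value(board, player):
    moves = legal_moves(board)
    return max(Q[(board, player, m)] for m in moves) if moves else 0.0

def train(episodes=100000):
    for _ in range(episodes):
        board, player = (0,) * 9, 1
        while True:
            moves = legal_moves(board)
            move = (random.choice(moves) if random.random() < EPS
                    else max(moves, key=lambda m: Q[(board, player, m)]))
            nxt = board[:move] + (player,) + board[move + 1:]
            if winner(nxt):                    # the current player just won
                target = 1.0
            elif not legal_moves(nxt):         # draw
                target = 0.0
            else:                              # opponent moves next: negate their best value
                target = -GAMMA * best_value(nxt, 3 - player)
            Q[(board, player, move)] += ALPHA * (target - Q[(board, player, move)])
            if winner(nxt) or not legal_moves(nxt):
                break
            board, player = nxt, 3 - player

def best_move(board, player):
    """Greedy move from the learned table (what the robot plays at game time)."""
    return max(legal_moves(board), key=lambda m: Q[(board, player, m)])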

Hi, Pierre. Can you update the notebooks in the reachy_tictactoe package on GitHub? I am working with a student group to demo the Reachy robot at Circuit Launch in California. I found the notebooks incomplete because some data are missing. For example, 'board_images/*.jpg' in Collect_training_images.ipynb are not included. Thus it is difficult for us to run the notebooks to test some concepts. Could you post your detailed documentation and training data so we could replicate your work on our Reachy?

Re my previous message, I understand we are supposed to collect our own images in the notebook. However, it would help to see what your collected images look like to make sure our images have sufficient quality for subsequent processing.

Hi,
The notebooks “Collect_training_images.ipynb” and “Check_boxes.ipynb” are part of a Notion page we made to explain how to set up the TicTacToe application; you can find it here.
This should be a bit clearer with it. If you have other questions don’t hesitate to ask :slight_smile:


RE ttt-boxes.tflite model error

I am testing your classifier model ttt-boxes.tflite in the reachy-tictactoe package but I encountered this strange error: RuntimeError: Model provided has model identifier 'ion ', should be 'TFL3'
Here is the complete output trace:

(base) test@z420:~$ /home/test/miniconda3/bin/python /home/test/reachy-tictactoe/reachy_tictactoe/testCV1.py
Traceback (most recent call last):
File "/home/test/reachy-tictactoe/reachy_tictactoe/testCV1.py", line 17, in <module>
boxes_classifier = ClassificationEngine(os.path.join(model_path, 'ttt-boxes.tflite'))
File "/home/test/miniconda3/lib/python3.7/site-packages/edgetpu/classification/engine.py", line 48, in __init__
super().__init__(model_path)
File "/home/test/miniconda3/lib/python3.7/site-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Model provided has model identifier 'ion ', should be 'TFL3'

Any suggestion? Please help.

BTW, I notice the TTT demo still uses the deprecated edgetpu package
https://coral.ai/docs/edgetpu/api-intro/

Can you update the demo with the current PyCoral package?

Thanks.

Charles

Perhaps the demo uses a different version of edgetpu package? Here is what I see via pip3 show:

Name: edgetpu
Version: 2.14.1
Summary: Edge TPU Python API
Home-page: https://coral.googlesource.com/edgetpu
Author: Coral
Author-email: coral-support@google.com
License: Apache 2
Location: /home/test/miniconda3/lib/python3.7/site-packages
Requires: numpy, Pillow
Required-by:

Hi Charles,

Usually this error occurs because you didn’t actually fetch the model 'ttt-boxes.tflite' itself but only its Git LFS pointer file.
Did you do this?

cd ~/dev/reachy-tictactoe
git lfs pull

Thank you, Simon. You are right. The model file ttt-boxes.tflite that I cloned from GitHub was just a placeholder. I did not know about the Git LFS package. After I installed git-lfs, I got the actual 4.5 MB model file and the error message went away.

RE edgetpu install on Pi ???

How did you install edgetpu package on the Pi controller on Reachy?

I followed the instructions here:

coral.ai/software/#edgetpu-python-api

and downloaded the generic Linux wheel package:

edgetpu-2.14.1-py3-none-any.whl

However, after I did 'pip3 wheel' to install it, without any error messages, the package is still not accessible to Python. I got a module not found error:

ModuleNotFoundError: No module named 'edgetpu'

Strangely, I was able to install edgetpu on my Ubuntu 18.04 computer with the same method. However, I got this error message when I tried to load the model ttt-boxes.tflite:

File "/home/test/miniconda3/lib/python3.7/site-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Internal: Unsupported data type in custom op handler: -1473687232Node number 0 (edgetpu-custom-op) failed to prepare.
Failed to allocate tensors.

Did I miss something?
Please advise. Thank you.

Charles

Hi Charles,
To install the edgetpu library on the RPi, check out the ‘Setup Coral toolkit’ section of this page of our documentation.

Thank you, Simon. That helps. We have the demo somewhat working now. However, our arm is flexing and shaking when it moves. This may be a quality issue with our 3D printed parts. Here is our test video:


I think we also removed or never implemented the trajectory smoothing, from what I saw. We should play with that.

We started running the tic-tac-toe game_launcher.py and saw lots of JPG corrupt image messages. Is this going to cause a problem with the game, and if so, how do we fix it?

Hi @annag5555 ,
The JPG corrupt image messages come from the Logitech camera driver, but they should not cause any issue in the game; the images are still captured correctly (it is just annoying to see in the terminal, unfortunately).
We are using better cameras in the new version of Reachy, which have higher quality and do not produce these corrupted-data messages!


Hello,

I couldn’t find any documentation explaining the starting layout of the board (a cylinder in the middle? nothing?).
The same goes for the starting position of the pawns; I gathered that they sit on the right-hand side of the board?

Hello @Thea,
The pawn positions are detailed in the Notion page we made for the application. The first cylinder Reachy grabs will be number 1, then 2, and so on up to 5.

@Simon We are trying to use tic-tac-toe with the new Reachy 2021 Python SDK:
I guess that all the *.npz files in the move/ directory must be rebuilt to take into account the new names of the parts of Reachy’s arms?
The way to do this is to record each needed movement and np.savez the trajectories as *.npz files?

Hi @Jean_Luc,
Indeed, the name of each of Reachy’s parts should be changed to the new name. I updated the npz files; you can find them here. They should work with Reachy 2021. Make sure to test them on their own before using them in the TicTacToe app.

2 Likes