
Keep your robot busy

What you do with your robot is limited only by your imagination and attention span. But for now, here are a few suggestions to inspire you.


“If your robot has a camera attached, then you can stream the video for a delightful first-person video driving experience.”

Having constructed our Diddy, we wanted to know more about what it could do. Not content with making glorious hardware, PiBorg also provides some great code examples to get you started. These will need some tweaking to work on other hardware, but should give you some idea of how to talk to the hardware in Python.

Robot kits will provide scripts to deal with all the low-level communication. In our case this is all done through the ThunderBorg.py file (see www.piborg.org/blog/build/thunderborg-build/thunderborg-examples). This handles all the raw I2C coding, so you don’t need to worry about that, and provides much more human-friendly functions such as SetMotor1(), which sets the speed of the left-hand wheels.
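For example, assuming ThunderBorg.py sits alongside your script, a first motor test could look something like the following sketch. SetMotor1() and SetMotor2() take a value from -1 (full reverse) to 1 (full speed ahead); check your kit’s documentation, as other boards use different names.

import time
import ThunderBorg

TB = ThunderBorg.ThunderBorg()
TB.Init()                 # find and connect to the board over I2C
TB.SetMotor1(0.5)         # left-hand wheels: half speed forwards
TB.SetMotor2(0.5)         # right-hand wheels: half speed forwards
time.sleep(2)             # trundle forwards for two seconds
TB.SetMotor2(-0.5)        # reverse one side to spin on the spot
time.sleep(1)
TB.MotorsOff()            # always leave the motors stopped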

WEB CONTROL

Assuming your Pi is connected to a wireless network, one slightly roundabout way to control it is to have it run a small webserver that serves an HTML form of controls. If your robot has a camera attached, then you can stream the video to this webpage, for a delightful first-person video driving experience.

Creating a streaming video processor in Python is less complicated than you’d think, but more complicated than we’d like to get into in this summary. So study the DiddyBorg web UI example at www.piborg.org/blog/build/diddyborg-v2-build/diddyborg-v2-examples-web-ui to see how the magic happens. If you’re lucky enough to own a DiddyBorg, then copy that script to it and run it.
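The official example rolls its own webserver and handles the camera stream too, so it’s the one to study. Just to illustrate the principle of driving motors from a web form, though, here’s a bare-bones sketch using the Flask framework (our choice for brevity, not what PiBorg’s example uses), again assuming the ThunderBorg library is to hand:

from flask import Flask
import ThunderBorg

TB = ThunderBorg.ThunderBorg()
TB.Init()

app = Flask(__name__)

# A spartan control page: two buttons that POST back to us
PAGE = '''<form action="/go" method="post"><button>Forward</button></form>
<form action="/stop" method="post"><button>Stop</button></form>'''

@app.route('/')
def index():
    return PAGE

@app.route('/go', methods=['POST'])
def go():
    TB.SetMotor1(0.5)   # left-hand wheels forwards
    TB.SetMotor2(0.5)   # right-hand wheels forwards
    return PAGE

@app.route('/stop', methods=['POST'])
def stop():
    TB.MotorsOff()
    return PAGE

# Visit http://<robot-ip>:8000 from another machine on the network
app.run(host='0.0.0.0', port=8000)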

GAMEPAD CONTROL

Controlling your robot with a gamepad is a little easier to get along with. However, as we discovered, a controller might need a little persuasion to work when a GUI isn’t running (such as when your Pi isn’t connected to a monitor). In theory, you could set this up beforehand, or by removing the SD card from the robot and booting it in another Pi; the settings should be remembered. If not, we can set things up by SSHing into our robot.

There are two challenges to overcome: the actual Bluetooth pairing and the subsequent setting up of the device nodes. The latter is handled by the joystick package (or evdev, which it depends on) and the former by the bluetoothctl command (this will be installed as standard). After installing the joystick package, run bluetoothctl. This will start a console where we can scan, pair and connect our controller. First, put the device in pairing mode and then initiate a scan with scan on. You should see a list of all nearby Bluetooth devices and their MAC addresses. Hopefully your controller is in there, in which case copy the address. Deactivate the scan with scan off. Then pair with pair <MAC address>, connect with connect <MAC address> and take your leave with exit.

Now run evtest, which will greet you with a list of detected input devices. Select your desired controller and mash the buttons; you should see a different cryptic response for each button. The DiddyBorg examples include a script for joypad control, which uses the PyGame library to listen for the relevant button events.
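That official script is the one to adapt, but the core of reading a gamepad with PyGame when no display is attached boils down to something like the sketch below. The SDL_VIDEODRIVER line is the ‘persuasion’ we mentioned; axis and button numbers vary between controllers, so run it once and note which numbers appear when you waggle the sticks.

import os
import time
import pygame

# Tell SDL there's no display, so PyGame is happy to run over SSH
os.environ['SDL_VIDEODRIVER'] = 'dummy'

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)   # the first controller found
joystick.init()

while True:
    for event in pygame.event.get():
        if event.type == pygame.JOYAXISMOTION:
            # event.value runs from -1.0 to 1.0, ideal for SetMotor1()/SetMotor2()
            print('axis', event.axis, round(event.value, 2))
        elif event.type == pygame.JOYBUTTONDOWN:
            print('button', event.button, 'pressed')
    time.sleep(0.05)   # don't hog the CPU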

IMAGE RECOGNITION

Tensorflow on the Pi is possible thanks to the work of Sam Abrahams, who’s provided precompiled wheel files for Python 2.7 and 3.4. This is good news if you’re running the second-to-last (Jessie) version of Raspbian, since that includes Python 3.4. If you’re running the latest version (Stretch, which uses Python 3.5), however, then you’ll need to use the Python 2.7 wheel.

Having two different major versions of Python like this is fine (Raspbian ships them both as standard), but you can’t have 3.4 and 3.5 installed concurrently, and the 3.4 wheel won’t work with Python 3.5. Before we begin, be advised that the Tensorflow models repository is large, and more than once we ran out of space using an 8GB SD card. This can be worked around by removing larger packages such as LibreOffice and Wolfram Alpha, but using a 16GB card is recommended. The following will set up everything you need:

$ wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v1.1.0/tensorflow-1.1.0-cp27-none-linux_armv7l.whl

$ sudo apt install python-pip python-dev python-pil python-matplotlib python-lxml

$ sudo pip install tensorflow-1.1.0-cp27-none-linux_armv7l.whl

$ git clone https://github.com/tensorflow/models.git

This last stage will start a roughly 1GB download, so beware. If you run out of space, the process can be resumed, once you’ve cleared some clutter, by running git checkout -f HEAD from the models/ directory. Once it completes successfully, test it:

$ cd models/tutorials/image/imagenet

$ python2 classify_image.py

This should identify the bundled panda image (which was extracted to /tmp/imagenet/cropped_panda.jpg). The script can also take an --image_file parameter to identify user-supplied images, so we could adapt things to take a photo and then attempt to identify it. Since the whole process takes about 10 seconds on a Pi 3 (although this could be sped up by keeping the program running in a loop, using C++ or a more slimline Tensorflow model), we don’t really have any hope of classifying things in real time. Furthermore, it’s likely that a photo taken at ground level with the Pi Cam will be tricky for the script to identify. But that’s OK, it just adds to the fun. All we need to do is tweak classify_image.py a little. Copy this file to classify_photo.py, then edit the new file. You’ll need to import the picamera module early on, then in the main(_) function replace the line that begins with “image =” with something like:

cam = picamera.PiCamera()
cam.capture('/tmp/picam.jpg')

And finally, change the run_inference_on_image() call to run on our freshly captured picam.jpg. If you’re feeling adventurous, why not bind the new script to some button press event? The controller script for the DiddyBorg could easily be adapted to do this.
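Put together, the interesting part of our classify_photo.py ends up looking something like the sketch below. The surrounding code in classify_image.py differs between Tensorflow releases, so treat this as an illustration of the change rather than a drop-in patch; run_inference_on_image() is the function that already exists in the script.

import picamera

def main(_):
    # Take a fresh photo with the Pi Camera instead of using the bundled panda
    cam = picamera.PiCamera()
    cam.capture('/tmp/picam.jpg')
    cam.close()
    # Hand our snap to the classifier routine already defined in the script
    run_inference_on_image('/tmp/picam.jpg')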

BALL FOLLOWING

OpenCV is a powerful computer vision framework that includes Python bindings. It’s actually used to draw the image in the earlier web UI example, but we can use it for more advanced purposes. It’s capable of detecting objects within an image, which means we can make our robot drive towards them. If those objects move, then it will follow them. For this to work, said objects need to be fairly distinct, such as a brightly coloured ball. You’ll find just such an example at www.piborg.org/blog/diddyborg-v2-examples-ball-following.
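The PiBorg example is the one to run on a real DiddyBorg, but the essence of colour-based following is short enough to sketch here. The set_speeds() helper below is a hypothetical stand-in for your kit’s motor calls (on a ThunderBorg that would be SetMotor1() and SetMotor2()), the HSV thresholds will need tuning for your ball and lighting, and we assume the camera appears as /dev/video0.

import cv2
import numpy as np

def set_speeds(left, right):
    # Hypothetical stand-in: swap in your kit's motor calls here,
    # e.g. TB.SetMotor1(left) and TB.SetMotor2(right) on a ThunderBorg
    print('motors:', left, right)

def ball_position(frame):
    # Hue-based colour thresholding is far easier in HSV than in raw BGR
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    m = cv2.moments(mask)
    if m['m00'] < 1000:           # hardly any red pixels: no ball in view
        return None
    # Horizontal centroid of the red pixels, as a fraction of frame width
    return (m['m10'] / m['m00']) / frame.shape[1]

cap = cv2.VideoCapture(0)         # the camera at /dev/video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos = ball_position(frame)
    if pos is None:
        set_speeds(0.0, 0.0)      # lost the ball, so stop
    else:
        turn = (pos - 0.5) * 0.6  # positive means the ball is to our right
        set_speeds(0.4 + turn, 0.4 - turn)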

The MonsterBorg is capable of great speed and not afraid of going off-road!

Tensorflow figured that this was a rabbit, but then thought Jonni was a sweet potato, Dan a bow tie and Paul a punch bag...

We need to convert the image to HSV so that OpenCV can better detect any red ball-like objects to follow.
