PC Pro

CAT OF THE MONTH

Ginny


We wouldn’t normally condone experiments on animals for the sake of improved gesture control technology, but Ginny’s owner Christopher Clarke – a PhD student at Lancaster University who’s working on a clever new system – tells us she didn’t mind. For miaow info, turn to p126.

GESTURE CONTROLS FOR computers, televisions and VR aren’t new – although they’re often hand-flailing failures that are less convenient to use than a mouse or remote. Christopher Clarke, a PhD student at Lancaster University, is hoping to change that with a simple control system dubbed MatchPoint. This uses a webcam to interpret basic gestures, letting you wave a hand to turn up the volume or nod to change the channel – or flip to a new song using your cat.

MatchPoint works by showing a set of controls on the screen: small targets that are activated by mimicking their assigned motions, such as pausing playback with a wave of the hand. Users can then interact with the system through gestures alone, which is handy for following a cooking tutorial without setting down your whisk. MatchPoint also lets you link an action to an object in view, so moving your cup of tea from left to right could flip to a new music track, and waving your cat in the air could switch the channel – yes, it really works with cats. We spoke to Clarke to find out more about how the MatchPoint system operates and how his feline friend reacted to being part of his research.

Where did the idea for this research come from? Our main aim in developing MatchPoint was to provide users with a technique that doesn’t constrain how they interact with the system. Current gesture technology places limitations on how the user can interact: they mustn’t be holding an object, must be clearly visible to the camera, must stand two metres away, and so on. With MatchPoint, users are free to use any body part, or object, to interact with the system, and it doesn’t require the user to be in a specific position.

How does the system work? Rather than focus on identification of body parts or objects, MatchPoint focuses on the detection of motion in the scene, irrespective of what has generated the motion. Moving targets are displayed to the user (one target selects the volume, one selects the channels). The user then “activates” a control by mimicking the motion of the target, with any part of their body or whilst holding an object. Once they have activated the control, the system “tracks” the item – body part or object – that triggered the control, which now acts as a pointing device.

Once the user has finished their interaction, such as changing the volume or channel, they can exit the interaction and the item – the body part or object – is no longer tracked by the system. This means that the user can “pick up” a pointing device (figuratively) as and when they need to, but they can also assign controls on a semi-permanent basis by coupling a control with an object.
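
To make the motion-matching idea concrete, here’s a minimal sketch of how such a system could correlate webcam motion with an oscillating on-screen target. This is our own illustrative Python/OpenCV reconstruction, not Clarke’s published code, and names such as TARGET_PERIOD and MATCH_THRESHOLD are assumptions:

```python
# Illustrative sketch only: correlate overall webcam motion with the motion
# of a hypothetical on-screen target that sweeps left and right. Not the
# actual MatchPoint implementation; thresholds and names are assumptions.
import time
import numpy as np
import cv2

TARGET_PERIOD = 2.0    # seconds per sweep of the hypothetical target
MATCH_WINDOW = 60      # recent frames to correlate over
MATCH_THRESHOLD = 0.8  # correlation above this "activates" the control

def target_velocity(t):
    """Horizontal velocity of a target oscillating left and right."""
    return np.cos(2 * np.pi * t / TARGET_PERIOD)

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
user_vx, target_vx = [], []
start = time.time()

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: per-pixel motion, irrespective of what generated it
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Average horizontal velocity of the fastest-moving 1% of pixels
    moving = mag > np.percentile(mag, 99)
    user_vx.append(float(flow[..., 0][moving].mean()) if moving.any() else 0.0)
    target_vx.append(target_velocity(time.time() - start))
    if len(user_vx) > MATCH_WINDOW:
        user_vx.pop(0)
        target_vx.pop(0)
        if np.std(user_vx) > 0:
            # Pearson correlation between user motion and target motion
            score = np.corrcoef(user_vx, target_vx)[0, 1]
            if score > MATCH_THRESHOLD:
                print("Control activated - start tracking the matched region")
    prev_gray = gray
cap.release()
```

Because the match is on motion alone, anything that moves in the right rhythm – a hand, a whisk, or indeed a cat – can activate the control, which is exactly the property Clarke describes.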

How do you see this technology being used? The applications we find most interesting are “sterile” ones, such as surgery or working in the kitchen, where it’s desirable to have a system that works with any type of object and doesn’t involve touching anything (and cross-contaminating objects). Multi-user environments are also interesting, because with MatchPoint everyone has a remote control, and support for multiple pointers at the same time opens up interesting possibilities.

What was the toughest bit of tech to figure out? The computer vision was the most challenging aspect of the project. Ideally, the system should detect the whole object that generated the motion, so it can keep tracking it during the pointing interaction. However, when we match the motion to the on-screen control, only part of an object may be matched, so we had to devise a novel way of capturing the whole object to improve the tracking.
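
As a rough illustration of that problem, one plausible approach – an assumption on our part, not Clarke’s published method – is to grow the handful of matched pixels into the full connected moving region before handing it to a tracker:

```python
# Hedged sketch: expand a partial motion match into a whole-object region
# using connected components on a motion mask. The function name and
# thresholds are illustrative assumptions, not the published method.
import numpy as np
import cv2

def whole_object_region(flow, seed_mask, motion_thresh=1.0):
    """Grow matched pixels (seed_mask) to the full connected moving region.

    flow: dense optical flow, shape (H, W, 2)
    seed_mask: boolean mask of pixels whose motion matched the target
    Returns an (x, y, w, h) bounding box, or None if nothing moved.
    """
    moving = (np.linalg.norm(flow, axis=2) > motion_thresh).astype(np.uint8)
    _, labels = cv2.connectedComponents(moving)
    # Keep every moving blob that contains at least one matched pixel
    seed_labels = set(np.unique(labels[seed_mask])) - {0}
    region = np.isin(labels, list(seed_labels))
    ys, xs = np.nonzero(region)
    if len(xs) == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```

The resulting bounding box could then seed a stock object tracker (OpenCV’s CSRT, for example), so the whole cup – or cat – is followed rather than just the few pixels that happened to match.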

Does it actually work with a cat? I have a cat called Ginny and I can confirm that I have used her to change the TV channel. Luckily, she likes being held, so I was uninjured. We have used many inanimate objects with the system, such as toys, books and cups. Due to the computer vision involved, some objects can work better than hands!

“The MatchPoint system focuses on the detection of motion in the scene, irrespective of what has generated the motion”

How long until this is in our homes? MatchPoint is still a very early-stage prototype and we are looking into commercialisation opportunities, but it’s a little too early to say at the moment.

Christopher Clarke is a PhD student at Lancaster University
BELOW Users can use any object to interact with the MatchPoint system – even a cup of tea
