
The robot surgeon will see you now

Research shows automated robot can match a human in dexterity, precision and speed

By Cade Metz

Hartford Courant | Section 4 | Sunday, May 16, 2021

Sitting on a stool several feet from a long-armed robot, Dr. Danyal Fer wrapped his fingers around two metal handles near his chest.

As he moved the handles — up and down, left and right — the robot mimicked each small motion with its own two arms. Then, when he pinched his thumb and forefinger together, one of the robot’s tiny claws did much the same. This is how surgeons like Fer have long used robots when operating on patients. They can remove a prostate from a patient while sitting at a computer console across the room.

But after this brief demonstration, Fer and his fellow researchers at the University of California, Berkeley, showed how they hope to advance the state of the art. Fer let go of the handles, and a new kind of computer software took over. As he and the other researchers looked on, the robot started to move entirely on its own.

With one claw, the machine lifted a tiny plastic ring from an equally tiny peg on the table, passed the ring from one claw to the other, moved it across the table and gingerly hooked it onto a new peg. Then the robot did the same with several more rings, completing the task as quickly as it had when guided by Fer.

The training exercise was originally designed for humans; moving the rings from peg to peg is how surgeons learn to operate robots like the one in Berkeley. Now, an automated robot performing the test can match or even exceed a human in dexterity, precision and speed, according to a new research paper from the Berkeley team.

The project is a part of a much wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots too. These methods are still a long way from everyday use, but progress is accelerating.

“It is an exciting time,” said Russell Taylor, a professor at Johns Hopkins University and former IBM researcher known in the academic world as the father of robotic surgery. “It is where I hoped we would be 20 years ago.”

Greg Hager, a computer scientist at Johns Hopkins, said that surgical automation would progress much like the Autopilot software that guides his Tesla down the New Jersey Turnpike. The car drives on its own, he said, but his wife still has her hands on the wheel, should anything go wrong. And she takes over when it’s time to exit the highway.

“We can’t automate the whole process, at least not without human oversight,” he said. “But we can start to build automation tools that make the life of a surgeon a little bit easier.”

Five years ago, researchers with the Children’s National Health System in Washington, D.C., designed a robot that could automatically suture the intestines of a pig during surgery. It was a notable step toward the kind of future envisioned by Hager. But it came with an asterisk: The researchers had implanted tiny markers in the pig’s intestines that emitted a near-infrared light and helped guide the robot’s movements.

The method is far from practical, as the markers are not easily implanted or removed. But in recent years, artificial intelligence researchers have significantly improved the power of computer vision, which could allow robots to perform surgical tasks on their own, without such markers.

The change is driven by what are called neural networks, mathematical systems that can learn skills by analyzing vast amounts of data. By analyzing thousands of cat photos, for instance, a neural network can learn to recognize a cat. In much the same way, a neural network can learn from images captured by surgical robots.
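
To make the idea concrete, a minimal sketch of this kind of supervised image learning follows. It is illustrative only, not the researchers’ code; the network, data shapes and labels are assumptions, and it uses the PyTorch library.

    import torch
    import torch.nn as nn

    # A tiny image classifier: it learns "cat" vs. "not cat" only by seeing
    # many labeled photos and nudging its weights to reduce its errors.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 2),                       # two outputs: cat / not cat
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(images, labels):
        """images: a batch of photos; labels: 1 for cat, 0 for not cat."""
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)   # how wrong were the guesses?
        loss.backward()                         # trace the error back through the network
        optimizer.step()                        # adjust the weights slightly
        return loss.item()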

Surgical robots are equipped with cameras that record three-dimensional video of each operation. The video streams into a viewfinder that surgeons peer into while guiding the operation, watching from the robot’s point of view.

But afterward, these images also provide a detailed road map showing how surgeries are performed. They can help new surgeons understand how to use these robots, and they can help train robots to handle tasks on their own. By analyzing images that show how a surgeon guides the robot, a neural network can learn the same skills.

This is how the Berkeley researchers have been working to automate their robot, which is based on the da Vinci Surgical System, a two-armed machine that helps surgeons perform more than 1 million procedures a year. Fer and his colleagues collect images of the robot moving the plastic rings while under human control. Then their system learns from these images, pinpointing the best ways of grabbing the rings, passing them between claws and moving them to new pegs.
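
A rough sketch of that training step is below. It is a hedged illustration of learning from demonstrations under assumed names and shapes, not the Berkeley team’s actual pipeline: a network looks at a recorded camera frame and is trained to predict where the human operator moved the claw.

    import torch
    import torch.nn as nn

    # Maps a camera frame to a predicted claw target (x, y, z).
    policy = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 3),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

    def imitation_step(frames, operator_targets):
        """frames: recorded images; operator_targets: where the surgeon sent the claw."""
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(policy(frames), operator_targets)
        loss.backward()
        optimizer.step()
        return loss.item()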

But this process came with its own asterisk.

When the system told the robot where to move, the robot often missed the spot by millimeters. Over months and years of use, the many metal cables inside the robot’s twin arms had stretched and bent in small ways, so its movements were not as precise as they needed to be.

Human operators could compensate for this shift unconsciously. But the automated system could not. This is often the problem with automated technology: It struggles to deal with change and uncertainty.

The Berkeley team decided to build a new neural network that analyzed the robot’s mistakes and learned how much precision it was losing with each passing day. “It learns how the robot’s joints evolve over time,” said Brijen Thananjeyan, a doctoral student on the team. Once the automated system could account for this change, the robot could grab and move the plastic rings, matching the performance of human operators.
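
A minimal sketch of that correction idea, under assumed names and shapes rather than the team’s published method: a small network is fit to the gap between where the robot is told to go and where it actually ends up, and future commands are pre-compensated by the predicted error.

    import torch
    import torch.nn as nn

    # Predicts the expected positioning error for a commanded (x, y, z) target.
    drift_model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
    optimizer = torch.optim.Adam(drift_model.parameters(), lr=1e-3)

    def calibration_step(commanded, measured):
        """Fit the model to recent pairs of commanded and actually reached positions."""
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(drift_model(commanded), measured - commanded)
        loss.backward()
        optimizer.step()
        return loss.item()

    def corrected_command(target):
        """Aim off-target by the predicted error so the arm lands where intended."""
        with torch.no_grad():
            return target - drift_model(target)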

Other labs are trying different approaches. Axel Krieger, a Johns Hopkins researcher, is working to automate a new kind of robotic arm, one with fewer moving parts. Researchers at the Worcester Polytechnic Institute are developing ways for machines to carefully guide surgeons’ hands as they perform particular tasks.

“It is like a car where the lane-following is autonomous, but you still control the gas and the brake,” said Greg Fischer, one of the Worcester researcher­s.

Photo: SARAHBETH MANEY/THE NEW YORK TIMES. The da Vinci Research Kit conducts a peg transfer at a California lab in April.
