The eyes have it
Scientists build cameras for tiny robots based on spider vision
“THE METALENS SPLITS THE LIGHT AND FORMS TWO DIFFERENTLY DEFOCUSED IMAGES SIDE-BY-SIDE”
If we want to build robots that can interact with the real world, they have to be able to take the 2D images recorded by cameras and turn them into 3D maps. The tech to do this exists – facial recognition in smartphones relies on depth perception to chart your features – but engineers want to make it smaller and more efficient for use in microrobotics, augmented reality and wearable tech.
Of course, evolution already has a solution: the eyes and brain of the tiny but formidable jumping spider. A team at Harvard University studied the arachnids to understand how, with their relatively small brains, they manage to accurately and rapidly pounce on unsuspecting flies.
Each principal eye of a jumping spider hosts not one but several retinas, arranged in layers. Each has its own focal length, so a fly in the spider’s vision will appear sharper in one retina but blurrier in the others. This information is sent to the brain, which makes a quick calculation about the difference in acuity between the images, and this tells the spider how far away the fly is.

The Harvard researchers have replicated this system with a ‘metalens’. This new material can produce multiple images with several focal points from just one surface. “The metalens splits the light and forms two differently defocused images side-by-side,” explains Zhujun Shi, who co-authored the paper. An algorithm then quickly interprets the differences and creates a depth map of the scene. In this way the researchers are able to mimic the spider’s efficiency and speed, and one day the camera could be fitted to small robots or smartphones.
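The paper's actual algorithm is more sophisticated, but the core depth-from-defocus idea can be roughly illustrated in a few lines of Python. This is a minimal sketch under my own assumptions (the function names, the window size and the Laplacian-based sharpness measure are illustrative, not taken from the Harvard work): each image is scored for local sharpness, and regions where the near-focused image wins are judged closer than regions where the far-focused image wins.

```python
import numpy as np

def local_sharpness(img, win=7):
    """Local sharpness score: the squared Laplacian, box-averaged over
    a win x win neighbourhood. Sharper (in-focus) regions score higher."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    box = np.ones((win, win)) / win**2
    # Circular FFT convolution keeps the sketch short; edge handling
    # would matter in a real implementation.
    return np.real(np.fft.ifft2(np.fft.fft2(lap**2)
                                * np.fft.fft2(box, s=img.shape)))

def depth_cue(img_near, img_far, win=7):
    """Signed depth cue per pixel: positive where the near-focused image
    is sharper (object is close), negative where the far-focused image
    is sharper (object is far)."""
    s_near = local_sharpness(img_near, win)
    s_far = local_sharpness(img_far, win)
    return (s_near - s_far) / (s_near + s_far + 1e-9)
```

Comparing the two defocused images pixel-by-pixel like this, rather than searching for correspondences between two viewpoints as stereo cameras do, is what makes the approach cheap enough for tiny, low-power hardware.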