Your eyes can reveal more than you might think, as researchers can now use computer vision technology to reconstruct 3D images of a scene from the reflections on a person’s eyeballs.
Jia-Bin Huang and his colleagues at the University of Maryland, College Park, developed a computer vision model that takes as input between five and 15 digital photographs of an individual’s face, shot from different angles while they look at a scene, and reconstructs that scene from the reflections in their eyes.
The method adapts a technique called neural radiance fields (NeRF), which uses neural networks to determine the density and colour of objects the computer “sees”. NeRF usually operates by directly looking at a scene, rather than viewing one reflected in a person’s eyeballs.
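NeRF’s core idea of estimating density and colour along viewing rays can be illustrated with a toy volume-rendering sketch. This is not the authors’ model: the `field` function below is a hypothetical stand-in (a hard-coded opaque red sphere) for the neural network a real NeRF would train, and the compositing step shows only the standard alpha-blending that NeRF-style methods use.

```python
import numpy as np

# Hypothetical stand-in for a trained NeRF network: maps 3D points to
# (density, RGB colour). A real NeRF uses a learned MLP; this hard-coded
# red sphere centred at (0, 0, 2) is purely illustrative.
def field(points):
    dist = np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1)
    density = np.where(dist < 0.5, 10.0, 0.0)                 # opaque inside the sphere
    colour = np.broadcast_to([1.0, 0.2, 0.2], points.shape)   # uniform red
    return density, colour

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    # Sample points along the ray and query the field at each one.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, colour = field(points)

    # Standard volume-rendering compositing: alpha from density and step
    # size, transmittance as a running product of (1 - alpha).
    delta = t[1] - t[0]
    alpha = 1.0 - np.exp(-density * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colour).sum(axis=0)

# A ray straight through the sphere returns (nearly) the sphere's colour;
# a ray that misses it returns black.
hit = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
miss = render_ray(np.zeros(3), np.array([0.0, 1.0, 0.0]))
```

Huang’s version applies this kind of rendering to rays bounced off the eye’s surface rather than rays aimed directly at the scene, which is what makes the cornea’s geometry (discussed below) so important.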
Huang’s version builds the scene by extrapolating from a square of, on average, 20 by 20 pixels in each eye. The method can produce what the researchers call “reasonable” results when replicating real-life objects, though the reconstructions are blurry because of the difficulty of estimating the shape of the cornea – the clear outer layer at the front of the eye.
When tested on clips from Miley Cyrus and Lady Gaga music videos, the technique was able to pick out the rough shape of objects in the singers’ eyes, but struggled to reconstruct details.
Huang and his colleagues declined to comment for this story, citing the policy of a conference to which the paper has been submitted.
The work builds on research done by Ko Nishino and Shree K. Nayar at Columbia University in New York in the mid-2000s. “That work made a splash in showing how the surface of the cornea could be used as an approximation of a curved mirror to create panoramic images,” says Serge Belongie at the University of Copenhagen, Denmark.
“The new work extends this concept to the task of 3D reconstruction,” says Belongie. “The results are quite impressive and will make people – once again – think twice about what they’re revealing when they are photographed by cameras with ever-increasing resolution.”