Show me how you look at something and I will tell you what you are thinking of
TU researchers “read” involuntary eye movements
When we recall a particular image or situation, our eyes make completely involuntary movements. Very often these mirror the eye movements we made when viewing the image itself. This has been known for some time. It is precisely these eye movements that scientists from Dr. Marc Alexa’s research group are now exploring further. Dr. Alexa is professor of computer graphics at TU Berlin. Together with fellow researchers from Universität Regensburg and the Georgia Institute of Technology, his group is investigating to what extent the image a person is thinking of can be recognized by observing their involuntary eye movements. The researchers will be presenting their findings at the CHI 2019 conference in Glasgow.
If you have ever tried to find a particular vacation photo among the thousands of images stored on your computer, then you will know just how long this can take. “Imagine how much easier it would be if you just looked at a blank screen, recalled the image, and the computer found it for you in its database,” says Marc Alexa, outlining a possible application of his research. “Our overriding goal is to use involuntary eye movements in the future as a new form of human-computer interaction.”
We already know that eye movements while recalling an image function as a kind of spatial index, or even a form of muscle memory. “We know that our gaze when viewing an image fixates on specific features, resulting in a particular pattern of eye movements,” Professor Alexa explains. The researchers used a randomly selected set of 100 images for the experiment. These images were shown in a random sequence and in a standardized procedure to 30 test participants. The participants’ eye movements were recorded using a special camera. The researchers then showed the participants a blank, neutral screen and asked them to recall a particular image. Their eye movements were also recorded while they recalled the image.
“It showed us that involuntary eye movements essentially follow similar patterns and assume similar points of fixation when recalling an image as when actually viewing that image. The spatial spread of the fixation points, the points on which the eye fixates, is, however, smaller during recall than during viewing,” Professor Alexa observes.
With the help of machine learning methods, unique signatures were assigned to the gaze patterns recorded while viewing an image. Using these gaze pattern signatures alone, the researchers were able to identify which image from a database a participant was looking at. “But not only this: We were also able to demonstrate that recall gaze patterns enable us to draw conclusions about the actual image with a probability significantly beyond chance. To our knowledge, this represents the first quantitative evaluation of the extent to which information provided by involuntary recall eye movements can be used to identify the original image,” says Marc Alexa.
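The core idea described above, matching a gaze pattern against stored per-image signatures, can be illustrated with a minimal sketch. The article does not describe the actual model used by the researchers, so everything here is an assumption for illustration: fixation points are binned into a coarse spatial histogram as the "signature", and a recall pattern is matched to the most similar viewing signature by cosine similarity. The function and variable names (`fixation_signature`, `identify`) are hypothetical.

```python
import math
from collections import Counter

def fixation_signature(fixations, grid=4):
    """Hypothetical signature: a normalized histogram of fixation
    points (x, y in [0, 1]) binned onto a grid x grid layout."""
    counts = Counter()
    for x, y in fixations:
        cell = (min(int(x * grid), grid - 1), min(int(y * grid), grid - 1))
        counts[cell] += 1
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse histograms."""
    dot = sum(v * b.get(cell, 0.0) for cell, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def identify(recall_fixations, viewing_signatures, grid=4):
    """Return the id of the image whose viewing signature best
    matches the recall gaze pattern."""
    sig = fixation_signature(recall_fixations, grid)
    return max(viewing_signatures,
               key=lambda k: cosine(sig, viewing_signatures[k]))
```

A coarse grid also accommodates the observation quoted above: even if the recall fixations are more tightly clustered than the viewing fixations, they still fall into roughly the same regions of the image, so the histograms remain similar.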
The system still needs perfecting. The researchers are currently working with a very limited number of images, and it is not yet clear to what extent the findings can be scaled to enable recognition from an arbitrarily large set of possible images. There is still some way to go to achieve the actual goal of a new form of human-computer interaction. “We will, however, be continuing our research to exhaust the full potential of such methods,” states Marc Alexa.
Further information available from:
Professor Dr. Marc Alexa
Faculty IV Electrical Engineering and Computer Science
Tel.: +49 30 314-73100