Optical Illusion Combines Phi Phenomenon and Oscillating High Contrast Patterns
A new optical illusion is making the rounds for the enjoyment of
netizens. The original version, which first appeared in 2020 and
is attributed to Japanese digital artist Jagarikin, displays a pair of rotating blue-and-yellow circles,
each encompassing one or four arrows that change direction, with the
direction the arrows point influencing the perceived
motion of the circles. Other variations of the illusion have
been created since then, including a black and white version and a version in rainbow colors. (A variation using Necker cubes
also seems to be related.) The latest version in rainbow colors has
been dissected by viewers to demonstrate, for example, that the illusion
persists even when the arrows are removed and that the circles are indeed stationary.
For cognitive—in addition to visual—entertainment, curious observers
have also investigated the underlying properties that give the illusion
its effect. The first is the phi phenomenon,
which most of us are familiar with from animated films. In
its simplest instantiation, spots flashed in succession around a
circle create the illusion of continuous motion. (In a related phenomenon,
called the reverse phi phenomenon,
if the second spot is light rather than dark, that is, its contrast is inverted, then we perceive
the motion as moving in the opposite, or reverse, direction.) Other
elements of the optical illusion perhaps include the Müller-Lyer illusion (as seen in a star formation
here), wherein varying the direction of arrowheads influences the
perception of length. (Additionally, the version in black and white
seems to make use of the barberpole illusion.)
And finally, it has been noted (in a still frame) that each circle is
flanked by inner and outer edges with colors that contrast with the body
of the circle. The high contrast suggests that the subtleties of the
illusion also rely on oscillating positive-negative patterns, for
example as seen in two-stroke or four-stroke apparent motion.
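For readers who want to see the phi effect for themselves, here is a minimal, self-contained Python sketch (using numpy and matplotlib; it is not taken from the original post). It flashes dots in succession around a circle, which produces apparent motion; the hypothetical REVERSE_PHI flag inverts the contrast of every other flash to approximate the reverse phi case.

    # Minimal sketch of phi and reverse phi: dots arranged on a circle are
    # flashed in succession; inverting the contrast of alternate flashes
    # tends to make the apparent motion look reversed.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    N_DOTS = 12                       # dots evenly spaced on a circle
    angles = np.linspace(0, 2 * np.pi, N_DOTS, endpoint=False)
    x, y = np.cos(angles), np.sin(angles)

    REVERSE_PHI = False               # True: flip flash polarity on alternate frames

    fig, ax = plt.subplots(figsize=(4, 4), facecolor="gray")
    ax.set_facecolor("gray")
    ax.set_xlim(-1.3, 1.3); ax.set_ylim(-1.3, 1.3)
    ax.set_aspect("equal"); ax.axis("off")
    dots = ax.scatter(x, y, s=200, c=["gray"] * N_DOTS)

    def update(frame):
        colors = ["gray"] * N_DOTS
        polarity = "white"
        if REVERSE_PHI and frame % 2:
            polarity = "black"        # contrast-inverted flash (reverse phi)
        colors[frame % N_DOTS] = polarity   # flash the next dot along the circle
        dots.set_color(colors)
        return dots,

    anim = FuncAnimation(fig, update, frames=200, interval=80, blit=True)
    plt.show()

Running the script with REVERSE_PHI set to True makes the flashed ring appear to step in the opposite direction for many viewers, which is the contrast-inversion effect described above.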
For both visual and cognitive reasons, optical illusions provide a
perplexing but fun reminder of the complex, and sometimes inaccurate,
ways in which our visual systems represent the world we see.
Eye-Tracking Software Developed for fMRI
Viewing behavior can provide meaningful information about neurological
health. As such, eye-tracking technology can be clinically relevant in
the diagnosis and management of neurological injury. Typically, this
eye-tracking comes in the form of sensor technology, in which infrared
light is projected onto the eye, reflected, and then measured by a
sensor. Although functional magnetic resonance imaging (fMRI) is the
gold standard of functional brain imaging, MRI scanners use strong magnets,
and integrating MRI-compatible camera systems often comes at a high cost.
This has thus far prevented the widespread use of eye-tracking in MRI
exams. Researchers at the Max Planck Institute in Germany sought to
make eye-tracking more widely available by taking a software-only approach to
fMRI. They developed DeepMReye, a
convolutional neural network (CNN) that decodes
gaze position from the magnetic resonance signal of the eyeballs.
Notably, the technology performs camera-less eye-tracking during an fMRI
scan and works even on existing datasets and when the eyes are closed.
The first author of the study explains, "The neural network we use detects
specific patterns in the MRI signal
from the eyes. This allows us to predict where the person is looking."
The software was trained on both publicly available data and data from
study participants, and it can now perform eye-tracking on data it was
not trained on, such as existing MRI scans that were previously
acquired without eye-tracking. Because the software can predict eye
movements even when the eyes are closed, it can facilitate studies of
individuals in a sleeping state or of individuals who are blind. In the
latter case, the researchers remark that whereas traditional
eye-tracking has suffered from calibration difficulties in blind
patients, "Here too, studies can be carried out more easily with
DeepMReye, as the
artificial intelligence can be calibrated with the help of healthy
subjects and then be applied in examinations of blind patients." They
have released DeepMReye as open-source software for other researchers to use, in the hopes of making eye-tracking more widespread in MRI examinations.
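To make the idea concrete, below is a small conceptual sketch in Python/PyTorch of camera-less gaze decoding. It is not the published DeepMReye architecture or code; the network, layer sizes, and the 16x16x16 "eyeball patch" input are illustrative assumptions. The point is simply that a convolutional network can be trained to regress gaze coordinates from the MR signal around the eyes, using camera-based gaze labels from a training cohort.

    # Conceptual sketch (not the actual DeepMReye code): a small 3D CNN
    # regresses gaze coordinates from the voxels covering the eyeballs in
    # each fMRI volume. Shapes and names are illustrative assumptions.
    import torch
    import torch.nn as nn

    class GazeDecoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(16, 2)      # predict horizontal/vertical gaze

        def forward(self, x):                 # x: (batch, 1, depth, height, width)
            return self.head(self.features(x).flatten(1))

    # Toy training loop on random data standing in for eyeball voxel patches
    # labelled with camera-based gaze positions from a training cohort.
    model = GazeDecoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    voxels = torch.randn(32, 1, 16, 16, 16)   # 32 volumes, 16^3 eye patch each
    gaze = torch.randn(32, 2)                 # known gaze (x, y) labels
    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(voxels), gaze)
        loss.backward()
        optimizer.step()

Once trained on scans with known gaze labels, such a model could in principle be applied to archival scans acquired without any eye-tracker, which is the use case the researchers highlight.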
Drusen Formation Linked to Extracellular Vesicle Release by RPE Cells
Drusen are deposits that accumulate under the retina, between the
retinal pigmented epithelium (RPE) and Bruch's membrane of the
underlying choriocapillaris (of the choroid). The accumulation of drusen
signals an often age-related decline in the RPE's function of recycling
and maintaining photoreceptor health, leading to retinal diseases such
as age-related macular degeneration (AMD). Researchers have for the
first time observed evidence of RPE cells releasing extracellular vesicles (EVs) that carry drusen-associated proteins. The authors conclude, "Collectively, our results strongly support an active role of RPE-derived
EVs as a key source of drusen proteins and important contributors to
drusen development and growth."
Retinoids Explored as Treatment for Usher Syndrome
Usher syndrome type 1F (USH1F) is characterized by deafness, progressive
retinal degeneration, and vestibular areflexia. Its prevalence is
highest among Ashkenazi Jews, in whom the causative gene variant accounts for roughly
60% of Usher syndrome type 1 cases. Thus far, there is no
treatment for the disease. In the 2000s, a few scientists began
collecting data about the natural history of USH1F disease progression,
enrolling 13 participants with USH1F to follow the
progression of their accompanying blindness over 20 or more
years. This longitudinal phenotyping revealed progressive retinal
degeneration leading to severe vision loss with macular atrophy by the
sixth decade, with half of the individuals being legally blind by their
mid-50s. Simultaneously, other scientists were working on a mouse model
of an Usher syndrome variant found in 13 of the patients in the natural
history project. The most recent work combined the research findings
that had been independently collected in the human subjects and the
mouse models. The collaboration led to new discoveries, such as
identifying the function of the previously identified gene PCDH15, whose mutation (Pcdh15R250X) leads to a shortened version of the protein protocadherin-15.
They found that protocadherin-15 helps light-dark cycle proteins move
back and forth between the different compartments of the eye's
photoreceptors, and is required for the recycling of retinoids by the retinal
pigmented epithelium (RPE). Reduced levels of retinoid cycle proteins
(RPE65 and CRALBP) were found in mice with the USH1F mutation. Next, the
researchers explored whether supplementing retinoids would improve
vision in these mice. They report that "[e]xogenous 9-cis retinal improved ERG amplitudes in Pcdh15R250X
mice." One of the researchers remarks, "There are currently
FDA-approved relatives of these retinoid drugs that
are available and have passed clinical trials for safety, along with
others that are in Phase II clinical trials to treat other types of
vision loss disorders." They hope to test these drugs in clinical
trials. Although the drugs will not recover lost vision, they might help
Usher syndrome patients preserve the function of the retinal tissue that they
still have.
A Shared Neural Code for Recognizing Familiar Faces
The ability to recognize familiar faces is important in shaping social
interaction. Scientists wondered whether there is a shared neural code
for recognition of visually and personally familiar faces across the
brains of individuals who know each other. The study recruited 14
graduate students from the same PhD program (who had known each other
for at least two years) and obtained fMRI data of their brain activity
in three sessions. The researchers used two methods to study face and
identity perception: hyperalignment and between-subject classifiers.
Hyperalignment aligns participants' brain activity to a common
representational space, allowing similarities to be compared across
participants. Between-subject multivariate decoding uses machine
learning to predict what stimuli a
participant is looking at based on the brain activity of other
participants, here serving as a direct test for the presence of shared
information across the brains of different participants. In two of the
fMRI tasks, participants were presented with images of four other
personally familiar graduate students and of four visually familiar people
who were personally unknown to them. In a third task, participants watched parts of a movie.
Hyperalignment and between-subject classifiers were applied to this
data.
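As a rough illustration of these two methods (not the study's actual analysis code), the following Python sketch hyperaligns simulated response patterns using an orthogonal Procrustes rotation and then runs a leave-one-participant-out, between-subject identity classifier. The data, dimensions, and classifier choice are stand-in assumptions.

    # (1) Hyperalignment via orthogonal Procrustes: map each participant's
    #     response patterns into a shared space.
    # (2) Between-subject classifier: train on aligned data from all but one
    #     participant and test on the held-out participant.
    # Data here are random stand-ins: n_subjects x n_stimuli x n_voxels.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_subjects, n_identities, n_reps, n_voxels = 14, 4, 10, 50
    labels = np.tile(np.arange(n_identities), n_reps)        # identity labels
    data = rng.normal(size=(n_subjects, labels.size, n_voxels))

    # Hyperalignment (simplified): use the first participant as the template
    # and rotate every other participant's patterns into that space.
    template = data[0]
    aligned = [template]
    for subj in data[1:]:
        R, _ = orthogonal_procrustes(subj, template)         # subj @ R ~= template
        aligned.append(subj @ R)
    aligned = np.stack(aligned)

    # Between-subject decoding: leave one participant out, train on the rest.
    accuracies = []
    for test_subj in range(n_subjects):
        train = np.concatenate([aligned[s] for s in range(n_subjects) if s != test_subj])
        train_y = np.tile(labels, n_subjects - 1)
        clf = LogisticRegression(max_iter=1000).fit(train, train_y)
        accuracies.append(clf.score(aligned[test_subj], labels))
    print("mean between-subject accuracy:", np.mean(accuracies))

With random stand-in data the accuracy hovers around chance (0.25 here); the study's point is that with real fMRI responses this kind of between-subject decoding succeeds, and, as described below, succeeds in social-cognition areas only for personally familiar faces.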
The results showed that the identity of visually familiar
faces could be decoded accurately in brain areas involved in the visual
processing of faces (e.g., the occipital face area and the fusiform face
area). The identity of personally familiar faces, however, could be decoded
accurately in brain areas involved in both visual processing and
social cognition; these additional brain areas include the dorsal medial prefrontal cortex (processing of other people's intentions), the precuneus (personally familiar faces), the insula (emotions), and the temporal parietal junction (social cognition, theory of mind).
Stated differently, the identity of both visually and personally
familiar faces could be
decoded across participants from brain activity in visual areas, but
only the identity of personally familiar faces could be decoded
in areas involved in social cognition. One of the authors of the study
remarks, “It would
have been quite possible that everybody has their own private code for
what people are like, but this is not the case. Our
research shows that processing familiar faces really has to do with
general knowledge about people.” In other words, individually distinct
information about faces is encoded in brain activity that is shared
across brains. The researchers next plan to investigate how
shared person knowledge maps onto psychological dimensions and the
role of individual differences in mapping shared
representational space. The first author of the study states, “Our findings and methodological approach might help elucidate impairments in social interactions for some classes of disorders.”
In the News
(1) Distinctive Voices Lecture: Seeing what isn't there (optical illusions)
(2) Optical illusions: colors and context
(3) Jays found to be sensitive to cognitive illusions
Saturday, November 27, 2021
Week in Review: Number 42