Face blindness, communication, and neurodiversity in digital social life.
“It’s so nice to see your face” is a phrase I’ve heard more in the past three weeks than I had in the last several years combined. When our lab shut down, we were, along with millions of others, plunged into a panicked state of suspended animation. Each morning we would check the news for newly diagnosed cases of Covid-19, new restrictions on our movement, and ad hoc guidelines for our behaviour, all of which seemed to arrive in an unfiltered, contradictory flood.
During this time, we worked hastily to determine who would be permitted to return to campus, and to select, from dozens of ongoing projects representing years of work, only those “essential” enough to keep moving forward. We communicated sporadically via e-mail or over the phone, when it was necessary or urgent, as daily updates informed us of how we would need to change the operation of the lab in the immediate term. Coming together the way we usually do in our weekly lab meetings, gathered around data figures and discussing business as usual, felt like an impossible parody at a time when anything more than a few weeks into the future was so uncertain.
So after the dust began to settle, and a few concrete guidelines were set out as waypoints in the sudden, uncharted expanse of free time we now occupied, we started to coalesce around our old routines in virtual space.
I’ve often used this phrase without really thinking about it when reconvening with old friends or roommates after vacations, work, or simple inattention has kept us apart. But the sight of a familiar face is a unique kind of social comfort that’s easy to take for granted until it’s abruptly removed.
Like many people, I find that simply picking up the phone when I crave someone’s company is a source of anxiety rather than comfort. Without the subtle, nonverbal cues I rely on in person, I feel much more exposed, and much more likely to misspeak or ramble. It’s easier to see how someone really feels about what you’re saying if you can monitor the things they aren’t consciously monitoring themselves. Half-smiles, raised eyebrows, or an uncomfortable shift in a chair are all subtle gestures that chart the course of a conversation, prompting pivots that let you connect more deeply.
So while it may be an imperfect facsimile of the real thing, I appreciated being able to see the faces of my lab mates once we worked out a way to come together again online.
Faces are so central to many of our social lives that it might be hard to conceive of anchoring our interactions any other way than by putting faces to names. For people with a condition called prosopagnosia, more commonly known as face blindness, faces might appear as disparate assemblies of face-like structures that can’t readily be associated with the people to whom they belong.
Oliver Sacks, the beloved chronicler of a fascinating array of divergent human experiences, himself lived with this condition. Beyond simply calling someone by the wrong name, he described brief accidental interactions with his own reflection, or being unable to place colleagues despite having worked closely with them for many years. Prosopagnosia isn’t the inability to recognize faces in and of themselves, but the inability to match the unique geography of an individual’s face with the emotional memory of them; together, those two components make up the composite of a person’s identity. As such, other landmarks, like an idiosyncratic way of walking or a distinctive haircut, might be necessary to build up a strong impression of a person.
Prosopagnosia is just one of many different facial recognition disorders. In some, which appear more frequently in Parkinson’s disease and related conditions, the image of someone’s face becomes uncoupled from emotional memories, causing loved ones to appear as uncanny copies of their real counterparts. In others, people may seem to change their appearances fluidly.
These conditions occasionally arise from very specific kinds of brain damage, but often there is no such obvious underlying physical cause, so historically they’ve been studied from a psychological perspective. It’s become increasingly apparent in recent years, though, that while certain parts of the brain do have distinct functions that can be studied on their own, how these parts communicate with each other is equally important.
For many of us, transitioning to a redefined workplace where video conferencing is our only means of real-time collaboration has been a clumsy and improvisational process.
For those with prosopagnosia, common workplace interactions like group meetings or conferences are fraught with the potential for embarrassing misidentification of long-time friends or colleagues. Interacting in these spaces in the real world may require the creative use of other mnemonics or clues to someone’s identity. In this respect, a virtual conference might actually be an easier social landscape to navigate: people’s images appear on the screen conveniently labeled with their names, everyone’s gaze is directed toward you, and the ordinary din of a large crowd is calmed when we all need to speak in turn to be heard and understood.
And it isn’t only prosopagnosics: anyone who lives with a condition that heightens their sensitivity to sensory stimuli might find new challenges, but also unique slivers of respite, in a novel, virtual social world.
The autism spectrum is made up of a broad range of symptoms that can vary widely from person to person. These symptoms might manifest as indifference to unspoken social cues or difficulty matching emotions to facial expressions. Ultimately, these factors combine to make social situations an uncomfortable prospect for many people with autism. Other sensory factors are at play too, but since faces are such cornerstones of socialization and intimacy for many of us, it’s been suggested that differences in facial processing at the neurological level might explain some of this anxiety as well. Functional MRI studies of autism have shown reduced activity levels not only in the fusiform face area, which is thought to be the seat of face recognition in our brains, but also in other areas with more general functions. The amygdala, which directs our fight-or-flight responses, and an area called the ventrolateral prefrontal cortex, which drives complex decision-making, also seem to play a role in shaping differences in how a face is interpreted.
Part of the broader network of structures that helps us make sense of faces is a set of deeper brain regions known collectively as the subcortical pathway. Because the amygdala plays a starring role in fear and stress, its involvement in this pathway suggests that we rely on it to help us recognize people who might pose a threat. Some studies have actually shown activity in the amygdala to go down in autistic participants, but these studies might not have taken eye movement and gaze direction into account. This turns out to be a key detail, since people with autism often have difficulty keeping steady eye contact.
When this detail is examined more closely, the opposite actually seems to be true. Individuals with autism and neurotypical controls were again asked to read different emotional states in faces while their neural activity was recorded in an MRI scanner. This time, however, their gazes were directed toward the eyes of the faces they were evaluating. This adjustment revealed much higher activation of the threat-sensitive subcortical pathway, especially in the amygdala. The avoidance of direct eye contact, the authors suggested, might not in fact be due to disengagement or inattention, but could be an unconscious strategy to help alleviate some of the tension this pathway ignites.
Many of us who have recently been tossed into the Zoom deep end have probably experienced what’s known as gaze parallax. This occurs when we attempt to look into the eyes of whoever is speaking, but because we aren’t looking into our cameras, they don’t perceive our attempted eye contact.
We appear instead to be staring intently at the speaker’s nose, or some distant point above their head.
So for the time being, none of us can rely on eye contact as the social cue it would be face-to-face. For many of us, this might feel like a jarring distraction until we’ve fallen into enough of a rhythm that it feels more natural. For people on the autism spectrum, though, the gaze parallax problem might be an unexpected relief from a social protocol that usually requires active self-monitoring and adjustment to maintain.
A group of researchers at Microsoft recently interviewed people with autism to better understand how neurodiverse users interact in virtual work environments. One participant noted this perpetual gaze-alignment problem with relief, no longer having to force uncomfortable eye contact in deference to politeness. On the other hand, many also described a heightened consciousness of their own behaviours in a situation where focus on the speaker was so unnaturally constrained. Virtual meetings might also limit the amount of control one has over the social environment, since whatever is happening on the other end of the monitor is so far removed that it’s harder to form a complete picture of it and plan your reactions accordingly.
Most of our insights into neurological disorders arose out of case studies of acute trauma to specific areas of the brain. Indeed, in the laboratory we probe the basic functions of the brain by deliberately damaging or turning off the regions we’d like to learn more about, and then inferring their function from the behaviours we see in mice or flies.
Prosopagnosia and autism are unusual in that, in the vast majority of cases, there isn’t a single specific region that appears to be responsible for the condition. While a small number of cases of prosopagnosia can result from specific injuries to facial recognition centers, many people have difficulty recognizing faces from birth, with no apparent physical cause.
In cases like these, more creative approaches are needed to gain meaningful insight into their neural underpinnings.
One approach is to look at what’s called functional connectivity, which maps how activity propagates through networks of brain regions, rather than focusing on how any single region reacts to a given task.
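For readers curious what “functional connectivity” looks like in practice, here is a minimal, purely illustrative sketch in Python. It uses made-up region names and random stand-in signals rather than any study’s actual data or analysis pipeline, and it simply asks how strongly every pair of regions’ activity rises and falls together over time.

```python
# A minimal sketch of correlation-based functional connectivity.
# The region names and simulated signals below are illustrative placeholders,
# not data or methods from the studies discussed in the text.
import numpy as np

rng = np.random.default_rng(0)
regions = ["fusiform", "amygdala", "vlPFC", "anterior_temporal"]

# Stand-in "BOLD" time series: 200 time points for each region (columns).
timeseries = rng.standard_normal((200, len(regions)))

# Entry [i, j] is the Pearson correlation between regions i and j:
# higher values mean their activity tends to rise and fall together.
connectivity = np.corrcoef(timeseries, rowvar=False)

for i, a in enumerate(regions):
    for j, b in enumerate(regions):
        if i < j:
            print(f"{a:>18} <-> {b:<18} r = {connectivity[i, j]:+.2f}")
```

The resulting matrix can then be treated as a network, with regions as nodes and correlations as connection strengths, which is the spirit of the analysis described next.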
Recently, functional MRI was used this way to chart a functional road map of prosopagnosia. Regions were identified as nodes within a broader network of activity, and patterns of activation were compared between groups as they viewed faces expressing specific emotional states. The researchers found that a cluster of structures in the visual cortex usually associated with general processing of the superficial features of faces was more highly connected and activated more strongly in those with prosopagnosia. In contrast, in control subjects, a region called the anterior temporal cortex seemed to act as a centralized junction where these visual impressions were routed through emotional processing regions.
The anterior temporal cortex is something akin to the brain’s information desk, compiling all of the facts we learn over the years about people and places. If this region is a central processor for biographical information about faces in control subjects but not in those with prosopagnosia, that might help explain the experience many prosopagnosics describe of being unable to put specific faces into individual contexts, even though their ability to recognize faces in general is intact.
For many years, neuroscience and psychology seemed to approach the brain as a jigsaw puzzle, where neurological disorders represented incomplete puzzles with missing pieces that obscured a larger preconceived picture. It’s easier to conceive of a singular home for facial recognition, fear, or memory than to imagine these states as continuous variables that can’t be localized to any one specific place.
With some imagination, we could instead consider the disparate regions of the brain as stars in a constellation, existing in many millions of possible configurations to generate an individual’s perception of the world, which in turn governs how they move through it.
New modes of social interaction are forcing us all to the outer edges of what is normal or comfortable for us. Perhaps we can also move toward new definitions of the mind, thinking of the brain less in static, mechanical terms and more as a dynamic and malleable entity. By the same token, we might also move away from traditional definitions of ‘normalcy’ and ‘disorder’ in clinical neuroscience and psychology.
Instead, we can simply seek to understand how each of us creatively adapts to the idiosyncrasies of our own neural wiring.