What Is Implicit Bias: Alex Madva Explains at Philosophy Colloquium

By Alisa Samadani, Staff Writer. Originally published in Issue 1, Volume 32 of The University Register on Friday, September 13, 2019.

Every year, the UMM Philosophy discipline hosts distinguished speakers who are experts in a general topic of interest. This year’s topic is implicit bias. For those who are not familiar with the term, implicit bias is the subconscious set of associations that an individual has about a social group. The colloquium includes two discussion forums and two guest lectures this week, as well as two more guest lectures in late April. The first lecture, held Monday, September 9 at 7 p.m. in the Cow Palace, was led by Alex Madva, assistant professor of philosophy at California State Polytechnic University, Pomona. In his lecture, Madva argued that understanding how implicit bias works and why it persists is key to understanding the resurgence of explicit bigotry. His big question was how we can become less biased in an already biased world. With explicit forms of prejudice in decline over the past decade, one might expect bias to be fading into the past; in reality, Madva argued, the problem has simply become harder to see. According to him, we are living in a time of unprecedented intergroup hostility and political division, even if those biases are not being publicly stated.

He then carried on with his lecture, explaining the various measures used to identify implicit biases, the most popular being the Implicit Association Test (IAT). To give us a better idea of how an IAT works, Madva had us look at a diagram with two sides: left/insects and right/flowers. He then presented a series of words associated either with flowers, such as “poppy” and “rose,” or with insects, such as “roach” and “beetle.” We were instructed to raise our left or right hand depending on which word appeared on the screen, and the activity was fairly simple for the audience to perform. Next came a similar diagram, with left/bad and right/good, and words associated either with bad things, such as “hate” and “war,” or good things, such as “peace” and “love.” This activity was just as easy. After this exercise, we were shown a slightly more complex diagram, with left/bad/insects and right/good/flowers: words like “hate” and “war” now went on the left alongside the insects, and words like “peace” and “love” on the right alongside the flowers. This exercise was more difficult but not significantly challenging. Then the final diagram came up, with left/bad/flowers and right/good/insects. This time, there was more hesitation in deciding which hand (left or right) to raise, since “good” was now paired with “insects” and “bad” with “flowers.” That hesitation, the cost of breaking a pattern we were already used to, is what Madva pointed to as evidence of implicit bias. Tests like these have been refined to measure what percentage of people associate certain traits with certain social and ethnic groups.
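For readers curious how the hesitation Madva demonstrated gets turned into a number, the following Python sketch shows the basic logic behind a simplified IAT score. The reaction times here are made up for illustration, and this is not the scoring method used in Madva's demonstration; it simply compares how quickly a person sorts words when the category pairing matches their existing associations versus when it reverses them.

    import statistics

    # Hypothetical reaction times in milliseconds (illustrative values only).
    # "Congruent" block: categories paired the way the test-taker already
    # associates them (e.g., flowers/good, insects/bad).
    # "Incongruent" block: the pairing is reversed (flowers/bad, insects/good).
    congruent_rts = [412, 398, 455, 430, 401, 388, 420, 445]
    incongruent_rts = [530, 495, 560, 515, 548, 502, 533, 521]

    def iat_d_score(congruent, incongruent):
        # Simplified D-score: the difference in mean reaction time between
        # the two blocks, divided by the standard deviation of all trials.
        # A larger positive score means more hesitation on the reversed
        # pairing, i.e., a stronger implicit association with the original one.
        pooled_sd = statistics.stdev(congruent + incongruent)
        return (statistics.mean(incongruent) - statistics.mean(congruent)) / pooled_sd

    print(f"Simplified IAT D-score: {iat_d_score(congruent_rts, incongruent_rts):.2f}")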

A good example of the IAT in use is a 2008 field study by Rooth in Sweden. Researchers sent 1,500 job applications to employers, half under traditional Swedish names and half under typical Arab-sounding names; apart from the names, the skills and accolades on the applications were nearly identical. Overall, applicants with Swedish names were three times more likely to be called back for an interview. Three months later, the researchers asked the employers to take an IAT, and the results showed that employers who implicitly associated Arab-Muslims with traits such as “lazy” and “incompetent” gave fewer callbacks to the applicants with Arab-sounding names.

But what can we do about implicit bias? Madva explained that recognizing implicit bias is important, but the next step is harder: defeating bias when it is all around us. He suggested a few ways out of this dilemma. One is suppression, in which we actively try to push our biases down. This, he argued, would be the least effective: suppression is mentally taxing, and merely knowing what stereotypes others hold eventually leads us back to acting on old biases, even if we personally disagree with them.

Another proposed way to reduce the biases we hold is ignorance. As the old saying goes, “ignorance is bliss”: if people never learned stereotypes, they would hold fewer of them, so in theory this sounds like the best cure for implicit bias. In practice, Madva argued, it is anything but. A society ignorant of stereotypes cannot recognize discrimination when it occurs; without awareness of stereotypes and social statistics, people are unable to identify unjust treatment rooted in implicit bias.

Madva claimed that the real issue is the accessibility of stereotypes and implicit biases: they are always within reach, like a bad song stuck in your head. Our aim should not be to dismiss all stereotypes outright, but to call them to mind only in the right contexts. Can the accessibility of stereotypes be changed? Madva argued that it can, through what he called the creativity mindset. Stereotypical thinking, after all, is typical thinking: there is no originality in it, only society’s default picture of how things should be. If we think more openly and creatively about the world, our implicit associations are likely to weaken.

The public lecture closed at roughly 8 p.m., leaving time for students, guests, and faculty to ask questions. These ranged from how Madva got into his line of work (to which he responded that he does not really have a solid answer) to questions that delved deeper into points from his lecture. The question-and-answer portion ended at roughly 9 p.m.