Explicit and Implicit Bias Against Atheists in America.
(So this is an empirical research paper I did in school this past quarter, minus a couple little things that are too much of a pain to include here. If you want to see the full, actual paper, you can download it here.) (Look, I realize that this is inordinately lazy of me, and I should do a little work to transfer it from APA style to ... blog style. But I did write this paper, and it was a lot of work getting it just so, and I'm a damn good writer, and it shows, so this is a relative breeze compared to most of the stuff you have to slog through in peer-reviewed journals. So deal with it. Or just ignore it. But it's actually pretty damn fascinating stuff.)
Abstract
Previous studies show that Americans have an explicit bias against atheists. This study measured explicit and implicit attitudes toward atheists among 36 undergraduate students at the relatively secular University of Washington. We measured implicit attitudes using the Implicit Association Test, pairing the concepts “atheism” and “spiritual” with the concepts “good” and “bad”. We measured explicit attitudes with a self-report questionnaire using Likert scales. As predicted, participants’ explicit and implicit attitudes diverged. Students expressed explicit neutrality toward atheists, but they took significantly longer to associate the concept pair “atheism/good” than “atheism/bad”, indicating an implicit bias against atheists. The results reflect both the difference between attitudes in a secular university setting and culture-wide attitudes, and a time of cultural transition.
Students Have an Implicit Bias Against Atheism and a Preference for Spirituality, in Contrast to Their Explicit Attitudes.
Atheists are the most distrusted minority in American culture.
In the present study, we used the well-validated Implicit Association Test (IAT) to measure implicit bias against and preference for atheism or spirituality, and we compared implicit and explicit attitudes toward atheism. The IAT measures the strength of automatic or unconscious (implicit) associations between ideas or concepts in memory. Specifically, it measures “the relative strengths of four associations involving 2 pairs of contrasted concepts” (e.g. spirituality/atheism and good/bad) (Nosek, Greenwald, & Banaji, 2005). The IAT rests on the assumption that a stronger association between two concepts allows a participant to make the same behavioral response more quickly for items belonging to those two categories than for items belonging to two more weakly associated categories. The IAT asks participants to sort lists of items, each clearly belonging to one of four categories, into two pairs of categories using one response key per pair. In this case, subjects were asked to press “I” for items belonging to the categories “atheism” or “bad” and to press “E” for items belonging to the categories “spirituality” or “good”. The pairings of the categories were then reversed (e.g. “atheism or good” and “spirituality or bad”). Longer or shorter response times for a category pair are understood to indicate comparatively weaker or stronger cognitive associations.
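The response-time logic described above can be sketched in a few lines. This is a hypothetical illustration only, not the study's actual Inquisit script; the trial data and function names are invented for demonstration:

```python
# Minimal sketch of the IAT response-time comparison: "compatible" pairs
# spirituality/good with atheism/bad; "incompatible" reverses the pairing.
# All trial data here are hypothetical.
from statistics import mean

# Each trial: (block label, reaction time in ms).
trials = [
    ("compatible", 812), ("compatible", 874), ("compatible", 901),
    ("incompatible", 1040), ("incompatible", 1188), ("incompatible", 1102),
]

def mean_rt(trials, block):
    """Average reaction time (ms) across the trials of one block."""
    return mean(rt for b, rt in trials if b == block)

# A positive difference indicates slower responses when "atheism" shares a
# key with "good", i.e. a weaker cognitive association between the two.
iat_effect = mean_rt(trials, "incompatible") - mean_rt(trials, "compatible")
```

In the actual study, these per-block means were collected for each of the 36 participants and then compared across the whole sample.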
Knowing about such implicit mental associations is an important tool for understanding and addressing attitudes and stereotypes (Greenwald & Banaji, 1995). For instance, Dovidio, Kawakami, and Gaertner (2002) found that a white person’s explicit attitudes significantly predicted verbal bias and self-perceived friendliness in interaction with a black person. Implicit attitudes, on the other hand, as measured by a test similar to the IAT, significantly predicted non-verbal friendliness and confederate-perceived bias in these interactions. While many people explicitly support egalitarianism across races, because this is a relatively recent societal/cultural change they often continue to have implicit bias against black people (Dovidio, Kawakami, & Beach, 2001). We wanted to explore the analogous relationship of explicit and implicit attitudes and stereotypes toward atheists in American culture.
In this study, we compared implicit and explicit attitudes toward atheism and spirituality among a sample of third- and fourth-year undergraduate psychology students at the University of Washington in Seattle. Based on prior research, we hypothesized that participants would have an implicit bias against atheists. We also hypothesized that such a bias would be small or absent in participants’ explicit attitudes toward atheists, because the culture of the University of Washington, where we drew our sample, is strongly focused on egalitarianism (two of the UW’s six explicit values are respect and diversity), and because the University is located in Washington, the second most secular state in the nation (Kosmin, Mayer, & Keysar, 2001; Some thing, 2007; Figure 1). We used the IAT to measure implicit attitudes, and hypothesized a longer reaction time in categorizing words to the pair of categories “atheism or good” than to the pair of categories “atheism or bad”, in accordance with culture-wide attitudes in the U.S. (Edgell et al., 2006).
Methods
Participants were 36 third- and fourth-year undergraduate psychology students at the University of Washington, Seattle. They ranged in age from 20 to 27, with a mean age of 21.8; 71 percent were female. They were drawn from Psychology 331, a psychology lab class on human performance, and participated in exchange for reciprocal participation in their own experiments for that class.
Participants sat in front of computer terminals and were asked to first fill out a self report questionnaire which collected demographic data and contained 10 questions with Likert scales used to assess participants’ explicit attitudes toward atheists (Appendix A). Then participants were asked to complete an IAT test on the computer by following the instructions presented on the screen.
The experiment was a within-subjects design with one independent variable (construct pairings) having two levels (spiritual/good with atheism/bad, and spiritual/bad with atheism/good). The dependent variable was reaction time to categorize words to these two pairs of constructs.
The IAT test was designed using Inquisit software and scripts from Tony Greenwald’s home page (Greenwald, 2007) in accordance with a 2007 empirical review of the test (Nosek, Greenwald, & Banaji, 2007). Participants were instructed that they would be asked to categorize items to categories displayed at the upper right and left of their screen by pressing “I” for the right category or “E” for the left category. The instructions at the beginning explicitly displayed a list of the four categories with the items belonging to each (Appendix B).
Results
All data were imported into Microsoft Excel for analysis. Reaction times were compared using a within-subjects t-test. The correlation of reaction times with explicit attitudes was calculated using linear regression.
As predicted, subjects took significantly longer to categorize items to the incompatible construct pairs “spiritual/bad, atheism/good” (M = 1103.41, SD = 260.95) than to the compatible construct pairs “spiritual/good, atheism/bad” (M = 859.84, SD = 235.04), t(35) = 5.77, p < .001 (Figure 2).
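The within-subjects comparison used here can be sketched as follows. The per-participant reaction times below are hypothetical stand-ins (the real analysis ran on the full N = 36 sample in Excel); the paired-t formula itself is standard:

```python
# Sketch of a within-subjects (paired) t-test on per-participant mean RTs.
# The data values are hypothetical, not the study's actual measurements.
from math import sqrt
from statistics import mean, stdev

def paired_t(xs, ys):
    """Paired t statistic and degrees of freedom for two matched samples."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Hypothetical per-participant mean RTs (ms) for the two pairings.
incompatible = [1103, 1250, 980, 1190, 1075, 1220]  # "atheism or good"
compatible = [860, 1010, 840, 900, 870, 955]        # "atheism or bad"

t, df = paired_t(incompatible, compatible)
```

Because each participant contributes an RT under both pairings, the test operates on within-person differences, which removes stable between-person speed differences from the error term.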
On a seven-point Likert scale where one meant “strongly agree”, seven meant “strongly disagree”, and four meant “neutral”, participants explicitly expressed neutrality in response to the statement “I prefer individuals who are spiritual to individuals who are atheists” (M = 4.43, SD = 1.36). Participants agreed with the explicit statements “I am personally motivated by my beliefs to be non-prejudiced towards atheism” (M = 2.97, SD = 1.70) and “Because of my personal values, I believe that using stereotypes about atheists is wrong” (M = 2.40, SD = 1.48).
There was no correlation between the reaction time difference and the answers to any of the ten Likert scale questions on the self-report questionnaire (r (30) < .15, p > .50).
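The correlation check can likewise be sketched with a Pearson coefficient, which is what a simple linear regression's fit reduces to for one predictor. The values below are hypothetical stand-ins for per-participant IAT effects and Likert responses:

```python
# Sketch of correlating each participant's IAT effect (RT difference)
# with one Likert-scale answer. All data values are hypothetical.
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rt_diff = [244, 130, 310, 95, 220, 180]  # ms, hypothetical IAT effects
likert = [4, 5, 3, 4, 6, 4]              # 1-7 scale, hypothetical answers

r = pearson_r(rt_diff, likert)
```

An r near zero, as reported above, means the size of a participant's implicit effect tells you essentially nothing about their explicit self-report.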
Discussion
This study investigated implicit and explicit attitudes towards atheists among students in a secular university setting, using the IAT to measure implicit attitudes and a self-report questionnaire to measure explicit attitudes. Our results supported our three hypotheses of little or no explicit bias against atheists, significant implicit bias against atheists, and no correlation between these implicit and explicit attitudes. As predicted, participants took significantly longer to associate the pair of constructs “atheism or good” than the pair of constructs “atheism or bad”. This indicated an implicit bias against atheism, as demonstrated with the strongly validated IAT. Furthermore, overall our participants expressed no explicit bias against atheists, as indicated by their average Likert scale scores on the self-report questionnaire we used. Finally, there was very little correlation between participants’ implicit and explicit attitudes.
The implicit bias against atheists which we found reflects the American cultural bias against atheists found in the extensive study by Edgell et al. (2006). The lack of explicit bias against atheists among our subjects is indicative both of the religious makeup of Washington, which has the second highest percentage of non-religious people of any state in the nation (Kosmin, Mayer, & Keysar, 2001; Some thing, 2007; Figure 1), and of the religious makeup of our subjects themselves—38 percent, or 14 out of 37, either disagreed (six out of seven on a Likert scale) or strongly disagreed (seven out of seven on a Likert scale) with the statement “I associate myself with an organized religion.” The divergence between implicit and explicit biases against atheists parallels the divergence of biases against black people found by Dovidio et al. (2002). This divergence reflects the transition between culture-wide, long-standing biases and a time of cultural shift in which religious tolerance is considered desirable and is growing (Edgell et al., 2006; Dovidio et al., 2001). The strong divergence between our participants’ clear explicit neutrality toward atheists and their highly significant implicit bias against them demonstrates the way in which our best intentions, as reflected in our explicit attitudes, can, as in the friendliness study by Dovidio et al. (2002), operate against the underlying biases we have imbibed from our culture simply by growing up in it.
Our results are somewhat restricted by geographical location and by the demographics of our participants, which did not reflect national averages in terms of religious versus non-religious affiliation. Nationally, only 14 percent of Americans do not identify with a religion (Kosmin et al., 2001), whereas 38 percent of our subjects did not identify with a religion. Our participants, however, may more accurately reflect the future U.S. national demographic, as Kosmin et al. (2001) also found that the non-religious were the fastest growing “religious” group in the nation between 1991 and 2001, moving from 8 percent to 14 percent. This possibility in some ways makes our results even more salient for pointing to promising future research. One interesting further line of research would be to replicate the study by Dovidio et al. (2002), comparing how implicit and explicit biases predict verbal and non-verbal friendliness, as well as self-perceived and others-perceived bias, in interactions with people who are known to be atheists or non-religious. Another interesting future study would replicate ours in a geographic location, or with a group of participants, with a much higher percentage of religious affiliation.
References
Dovidio, J. F., Kawakami, K., & Beach, K. R. (2001). Implicit and explicit attitudes: Examination of the relationship between measures of intergroup bias. In R. Brown & S. Gaertner (Eds.), Blackwell handbook of social psychology: Intergroup processes (pp. 175-197). Malden, MA: Blackwell Publications Ltd.
Dovidio, J. F., Kawakami, K., & Gaertner, S. L. (2002). Implicit and explicit prejudice and interracial interaction. Journal of Personality and Social Psychology, 82, 62-68.
Edgell, P., Gerteis, J., & Hartmann, D. (2006). Atheists as "other": Moral boundaries and cultural membership in American society. American Sociological Review, 71, 211-234.
Greenwald, A. G. (2007). Generic IAT zipfile download. Retrieved November 15, 2007 from http://faculty.washington.edu/agg/iat_materials.htm
Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102, 4-27.
Kosmin, B. A., Mayer, E., & Keysar, A. (2001). American Religious Identification Survey. Retrieved November 21, 2007, from The Graduate Center, The City University of New York Web site: http://www.gc.cuny.edu/faculty/research_briefs/aris/aris_index.htm
Newport, F. (1999). Americans today much more accepting of a woman, Black, Catholic, or Jew as president: Still reluctant to vote for atheists or homosexuals. Retrieved October 30, 2007 from http://www.gallup.com/poll/3979/Americans-Today-Much-More-Accepting-Woman-Black-Catholic.aspx
Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2005). Understanding and using the Implicit Association Test: II. Method variables and construct validity. Personality and Social Psychology Bulletin, 31, 166-180.
Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. In J. A. Bargh (Ed.), Automatic processes in social thinking and behavior (pp. 265-292). Psychology Press.
Ontario Consultants on Religious Tolerance (2007). Religious discrimination in U.S. state constitutions. B. A. Robinson (Ed.). Retrieved November 14, 2004, from http://www.religioustolerance.org/texas.htm
Some thing, (2007). Image:Religious Belief in North America.png. Retrieved November 21, 2007, from Wikipedia, The Free Encyclopedia Web site: http://en.wikipedia.org/wiki/Image:Religious_Belief_in_North_America.png
7 comments:
I really enjoyed reading this- thanks for sharing it with us! I don't have much to say about it, really, though I do think atheists are quite courageous. Still, I'm sure I have some implicit bias towards them, explicitly expressed or not. I wonder how the test would work in reverse- like... if you tried "fundamentalism" or "evangelical" in the place of spiritual, or something. I bet the explicit bias against fundamentalism would be much stronger... hmmm... that would be a really fun study to do. Or interesting anyway :) Thanks for giving me something for my brain to chew on.
Beth,
I'm guessing that people in seattle would definitely have an explicit bias against fundamentalism. But I don't think that would hold true nationwide.
As for implicit feelings about fundamentalism--I guess it would depend on you how defined it. If we're talking christian fundamentalism, I think it could go either way, nationwise, with the implicit bias.
Benjamin,
An interesting piece of work, although I have a couple of problems with the way you have used the statistics.
First, your sample is very small and self-selecting. I therefore do not think you can really extrapolate any further than your Psychology 331 class. I do not think you can argue it is indicative of the University of Washington or wider without further research.
Second, whilst I have no knowledge of the methods you used, I would doubt the standard deviations on such a small sample.
Normal statistical methods require randomised samples, and you cannot really claim to have had that.
I really wouldn't reference anything from wikipedia. Interesting for background, unsubstantiated in a degree level piece of classwork.
Are you suggesting there is a significant difference between the reaction times of 'compatible' and 'incompatible' in figure 2?
Joe,
Thank you for reading and gently criticizing =)
You’re right that our sample is small and self-selecting. I address that a little in the discussion. However, I think your conclusions about that are a bit too restrictive. My understanding is that in the social sciences, once you have N=30, you’re ok to make slightly broader conclusions. Anyway, I think it’s totally safe to extrapolate at *least* out to the whole UW, since we had significance at the .001 level. Actually it was better than that. They only let you report down to .001, but our difference was significant to .00006.
You only have to randomize if you’re using a between-subjects design. Since we used a within-subjects design, all our participants experienced both levels of the independent variable, so that does away with extraneous between-subject variability.
I get cranky at all the academic snootery about not citing from Wikipedia. Not angry at you. But it’s still considered unacceptable, as you point out. IMO, that’s just stupid. I mean the figure I cite from Wikipedia is just a really useful figure created from a totally academically acceptable empirical study from the City University of New York. The guy just took their data and created a kewl map from it so you can *see* that Washington is the most secular state. The people at CUNY should have made such a figure for their study. So I have no problem citing it. 30 years from now, citing Wikipedia will be totally acceptable, so I’m just way ahead of the curve.
Sorry about the figure. Yes, the difference is, as I said, very highly significant: p = .000057.
I’m pretty sure I’m gonna get like 59 out of 60 on this paper (based on the scores on I got on parts of it that we had to turn in earlier)(god, what if my prof is following here, and decides I'm too arrogant, and docks my score?), so it’s definitely up to snuff by UW standards. And UW is one of the premier research institutions in the nation. So I think my conclusions are probably fairly broadly defensible.
I disagree with your conclusions based on the significance level. You've not measured the test in the wider UW community so the significance level is not relevant in that respect and cannot be used to make that conclusion.
It could be an entirely random effect whereby your group is more/less inclined to believe something than the rest of the student body. Impossible to say without doing more research.
I also like wikipedia, but it is not a touch on a peer-reviewed article. Given there are so many journals around, you really shouldn't need to quote anything from it IMO.
Joe,
I guess we just disagree. There's no way to actually be for sure without doing the actual larger study you suggest.
However, I think most social scientists would disagree with you. You seem to be arguing that results are never generalizable to a larger community whose characteristics differ in any way from the subject pool. Another way of putting that: you seem to be saying that the differences between our subject pool (a 3rd year psych class at UW) and the University student body at large are very possibly, or even probably, large enough to make our results non-generalizable.
So I guess my question is: what differences are you hypothesizing? That is, why are 36 3rd and 4th year undergrad psych students so very different from 30,000 undergrad students at the university as to make our results non-generalizable?
Benjamin,
I am not a statistics professional. However, my wife is - being an academic statistician at one of the best universities in Britain. I am sure she would be very happy to give you some hints if you're interested.
As a novice (albeit one with two science degrees), I can tell you that there are ways to use statistics to estimate factors in wider populations. That is a major purpose of statistics after all.
However, it is fairly clear that your study does not do that for the reasons I've outlined above.
For the record, this is not supposed to imply a) that your work is bad b) that your review of the published work is imperfect (actually I was pretty impressed with that aspect of it) c) that your knowledge of the methods in your subject are wrong or even d) that you've made a mistake in calculating your statistics (although as I said, I still suspect you've made an error in calculating the standard variance).
If you stand back and think about it, common sense will tell you that you cannot use a self selecting group to make statements about a wider group.
Check out your method - use the same test with other key words and see what the standard variation between groups is as you've not allowed for background variation in the method.
Finally, I'm not doubting your intelligence or that you've got flair in your subject. But there is a lot of bullshit science about. You've really got to learn to cut through the crap and see if the actual science backs up the conclusions. Have you surveyed a representative sample of the UW population? Is the UW representative of the state as a whole? Beware of making data say more than it says. You've done some good work on attitudes in Psychology 313 which might lead you to thinking how you can do more to study it in the wider university. Don't overstretch yourself otherwise you just end up looking silly.
I've been there mate. Better to be honest and just say 'although this study is imperfect, it indicates possible areas of future fruitful research' than to be torn apart by someone who you've overlooked and has been studying the area for their whole life.
Oh yes. That is seriously embarrassing.