Comments on oxymoronredundancyparadoxtrap: Explicit and Implicit Bias against atheists in America

Joe (2007-12-10 12:35):

Benjamin,

I am not a statistics professional. However, my wife is: she is an academic statistician at one of the best universities in Britain. I am sure she would be very happy to give you some hints if you're interested.

As a novice (albeit one with two science degrees), I can tell you that there are ways to use statistics to estimate factors in wider populations. That is a major purpose of statistics, after all.

However, it is fairly clear that your study does not do that, for the reasons I've outlined above.

For the record, this is not supposed to imply (a) that your work is bad, (b) that your review of the published work is imperfect (actually, I was pretty impressed with that aspect of it), (c) that your knowledge of the methods in your subject is wrong, or even (d) that you've made a mistake in calculating your statistics (although, as I said, I still suspect you've made an error in calculating the standard deviation).

If you stand back and think about it, common sense will tell you that you cannot use a self-selecting group to make statements about a wider group.

Check your method: run the same test with other key words and see what the standard deviation between groups is, since you've not allowed for background variation in the method.

Finally, I'm not doubting your intelligence or that you've got flair in your subject. But there is a lot of bullshit science about.
You've really got to learn to cut through the crap and see if the actual science backs up the conclusions. Have you surveyed a representative sample of the UW population? Is the UW representative of the state as a whole? Beware of making data say more than it says. You've done some good work on attitudes in Psychology 313, which might lead you to thinking about how you can do more to study them in the wider university. Don't overstretch yourself, otherwise you just end up looking silly.

I've been there, mate. Better to be honest and just say 'although this study is imperfect, it indicates possible areas of future fruitful research' than to be torn apart by someone you've overlooked who has been studying the area their whole life.

Oh yes. That is seriously embarrassing.

Benjamin Ady (2007-12-10 10:43):

Joe,

I guess we just disagree. There's no way to be sure without doing the actual larger study you suggest.

However, I think most social scientists would disagree with you. You seem to be arguing that results are never generalizable to a larger community whose characteristics differ in any way from the subject pool. Another way of putting that: you seem to be saying that there are very possibly, or even probably, differences between our subject pool (a third-year psych class at UW) and the university student body at large that are big enough to make our results non-generalizable.

So I guess my question is: what differences are you hypothesizing?
That is, why are 36 third- and fourth-year undergraduate psych students so very different from 30,000 undergraduates at the university as to make our results non-generalizable?

Joe (2007-12-10 10:33):

I disagree with your conclusions based on the significance level. You've not measured the test in the wider UW community, so the significance level is not relevant in that respect and cannot be used to draw that conclusion.

It could be an entirely random effect whereby your group is more or less inclined to believe something than the rest of the student body. Impossible to say without doing more research.

I also like Wikipedia, but it is not a touch on a peer-reviewed article. Given there are so many journals around, you really shouldn't need to quote anything from it, IMO.

Benjamin Ady (2007-12-10 10:06):

Joe,

Thank you for reading and gently criticizing =)

You're right that our sample is small and self-selecting. I address that a little in the discussion. However, I think your conclusions about that are a bit too restrictive. My understanding is that in the social sciences, once you have N = 30, you're OK to make slightly broader conclusions. Anyway, I think it's totally safe to extrapolate at *least* out to the whole UW, since we had significance at the .001 level. Actually, it was better than that: they only let you report down to .001, but our difference was significant at .00006.

You only have to randomize if you're using a between-subjects design.
Since we used a within-subjects design, all our participants experienced both levels of the independent variable, which removes the extraneous variation due to individual differences between subjects.

I get cranky at all the academic snootery about not citing from Wikipedia. Not angry at you. But it's still considered unacceptable, as you point out. IMO, that's just stupid. I mean, the figure I cite from Wikipedia is just a really useful figure created from a totally academically acceptable empirical study from the City University of New York. The guy just took their data and created a kewl map from it so you can *see* that Washington is the most secular state. The people at CUNY should have made such a figure for their study. So I have no problem citing it. Thirty years from now, citing Wikipedia will be totally acceptable, so I'm just way ahead of the curve.

Sorry about the figure. Yes, the difference is, as I said, very highly significant: p = .000057.

I'm pretty sure I'm gonna get like 59 out of 60 on this paper (based on the scores I got on the parts of it that we had to turn in earlier) (god, what if my prof is following here, and decides I'm too arrogant, and docks my score?), so it's definitely up to snuff by UW standards. And UW is one of the premier research institutions in the nation. So I think my conclusions are probably fairly broadly defensible.

Joe (2007-12-10 05:13):

Benjamin,

An interesting piece of work, although I have a couple of problems with the way you have used the statistics.

First, your sample is very small and self-selecting. I therefore do not think you can really extrapolate any further than your Psychology 331 class.
I do not think you can argue it is indicative of the University of Washington or anywhere wider without further research.

Second, whilst I have no knowledge of the methods you used, I would doubt the standard deviations on such a small sample.

Normal statistical methods require randomised samples, and you cannot really claim to have had that.

I really wouldn't reference anything from Wikipedia. Interesting for background, but unsubstantiated in a degree-level piece of classwork.

Are you suggesting there is a significant difference between the reaction times for 'compatible' and 'incompatible' in figure 2?

Benjamin Ady (2007-12-09 19:20):

Beth,

I'm guessing that people in Seattle would definitely have an explicit bias against fundamentalism. But I don't think that would hold true nationwide.

As for implicit feelings about fundamentalism, I guess it would depend on how you defined it. If we're talking Christian fundamentalism, I think it could go either way, nationwide, with the implicit bias.

JadeEJF (2007-12-08 23:10):

I really enjoyed reading this. Thanks for sharing it with us! I don't have much to say about it, really, though I do think atheists are quite courageous. Still, I'm sure I have some implicit bias towards them, explicitly expressed or not. I wonder how the test would work in reverse, like if you tried "fundamentalism" or "evangelical" in the place of "spiritual", or something. I bet the explicit bias against fundamentalism would be much stronger... hmmm...
that would be a really fun study to do. Or interesting, anyway :) Thanks for giving me something for my brain to chew on.
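A note on the within-subjects comparison debated above: in a paired design, the test is run on each participant's own difference between the two conditions, so consistent individual effects can produce a very small p-value even with a modest N. A minimal sketch of that calculation, using invented reaction times (hypothetical numbers, not the study's data):

```python
# Paired (within-subjects) t-test sketch for IAT-style reaction times.
# All numbers below are invented for illustration only.
import math
from statistics import mean, stdev

# Hypothetical per-participant mean reaction times in milliseconds:
compatible = [610, 645, 590, 700, 655, 620, 675, 630]
incompatible = [720, 760, 680, 810, 740, 735, 790, 725]

# Each participant serves as their own control: test the differences.
diffs = [i - c for c, i in zip(compatible, incompatible)]
n = len(diffs)

# Paired t statistic: mean difference over its standard error.
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"mean difference = {mean(diffs):.1f} ms, t({n - 1}) = {t:.2f}")
```

Because the between-subject spread drops out of the differences, the t statistic here is large even with only eight participants; with real data you would look up (or compute) the p-value for t with n - 1 degrees of freedom.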