I don't doubt that there are some moonbat religious loons who teach creationism in science class across America, but...
"Of more than 900 teachers who responded to a poll conducted by Penn State University political scientist Michael Berkman..."
Sampling bias, anyone?
Let's see, who is most likely to submit a survey dealing with creationism being taught in biology classes... those who feel strongly about it, which would include a cohort of those who... teach creationism in biology classes.
It's fundamental mistakes like this in experimental design that make me think that "political scientist" is an oxymoron.
You make a good point, Adam. I should have noted in the original post that the results did not come from an ideal, fully randomized survey.
However, if you follow the link to the original study report given in the Wired article, and scroll down to the section titled "The National Survey of High School Biology Teachers," you'll see that this is not as bad as, say, phone-in "polls" run by TV stations. Note, in particular, that the methodology for dealing with non-respondents is described in an appendix labeled "Text S3" (PDF).
I might also add that there is no real reason to suspect that selection bias would be all on one side; i.e., I could well imagine a teacher who is strenuously opposed to teaching IDiocy being as moved to respond to the survey as one who favors teaching it.
"I might also add that there is no real reason to suspect that selection bias would be all on one side; i.e., I could well imagine a teacher who is strenuously opposed to teaching IDiocy being as moved to respond to the survey as one who favors teaching it."
No, it's almost certainly not all on one side. However, it does render the results somewhat unreliable. How do you know which side was most highly motivated to respond? How do you know which direction the sampling bias tilted? It's impossible to know, and that's the problem with conducting a supposedly scientific survey by sending out questionnaires and tabulating only the answers of those motivated enough to reply, as opposed to getting responses from a truly random, representative sample of teachers.
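A quick simulation illustrates the point being argued here. All the numbers below (the true rate, the two response rates) are made up purely for illustration and have nothing to do with the Berkman study; the sketch just shows that if one side is more motivated to return the survey, the observed proportion drifts away from the true one:

```python
import random

random.seed(0)

# Hypothetical population: assume 13% of teachers teach creationism
# (an invented figure, used only to demonstrate the mechanism).
POP_SIZE = 10_000
TRUE_RATE = 0.13
population = [random.random() < TRUE_RATE for _ in range(POP_SIZE)]

def responds(teaches_creationism: bool) -> bool:
    # Assume those who teach creationism feel strongly about the topic
    # and are twice as likely to return the survey (again, invented rates).
    rate = 0.40 if teaches_creationism else 0.20
    return random.random() < rate

respondents = [t for t in population if responds(t)]
observed = sum(respondents) / len(respondents)

print(f"true rate:     {TRUE_RATE:.2f}")
print(f"observed rate: {observed:.2f}")
```

With these assumed response rates, the observed proportion among respondents comes out well above the true 13%, even though every individual answered honestly. Flip which side is more motivated and the estimate skews the other way, which is exactly why the direction of the bias can't be known from the responses alone.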
Obviously, I think that ID is anti-scientific bullshit, but studies that use poor experimental design to fight it are less helpful than properly designed studies.