Dashing this off quickly because I was interested in the thread below, but didn’t fancy stringing together my answer over 30 tweets.
No, I’m saying you chose your conclusion then paid for research to come to it, which is a well known method used by people trying to promote something.
— Haunted Towel (@24shaz) July 1, 2020
Because it’s a quick one, please excuse me if my proofreading is even more slapdash than usual.
Two things about me first: I have been a senior manager within the data function of polling and market research companies for over a decade now, and have personally worked on public-opinion polling into trans rights. Second, I am on the gender-critical side of this argument, although I have tried to be unbiased here.
If you can’t be bothered to read the thread linked above then it’s between Helen Staniland (@helenstaniland) and an anonymous tweeter, who I’m assuming is called Sharon, based on her Twitter handle (@24shaz). My apologies if that assumption is incorrect. Basically, the sub-thread is about the validity of the results of polling when the party paying for the poll has an interest in getting a particular result.
Obligatory link to Yes, Prime Minister has to be included here.
First off, let me say that Sharon is absolutely right – the way in which questions are asked can have a huge effect on the outcome. The polling company they’re discussing, Populus (who I have never worked for, btw), is a member of the British Polling Council (BPC), which attempts to address this issue by insisting that questions are asked in a neutral manner and are free of bias.
This can be very difficult to achieve. I once worked on two polls, running their fieldwork at the same time, both matching the UK general population, in terms of percentages, by sex, age, social class and region. One asked “Should 16 year-olds be given the right to vote in general elections?”, while the other asked, “Should the voting age be lowered to 16?” The former saw 60% approval for 16-year-olds voting, the latter 60% opposed. Neither of those questions seems ridiculously biased or unfair, yet the difference between them is sufficient to make 1 person in 5 change their minds about the answer.
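To make the arithmetic behind that last sentence concrete: a move from 60% approval on one wording to 60% opposition on the other is a 20-point swing, which works out to one respondent in five giving a different answer. A minimal sketch (the 60/40 splits are the results described above; the variable names are mine):

```python
# Two identically weighted samples answered differently worded questions.
# "Should 16 year-olds be given the right to vote?"  -> 60% approve
# "Should the voting age be lowered to 16?"          -> 40% approve (60% opposed)

approve_q1 = 0.60  # approval on the "right to vote" wording
approve_q2 = 0.40  # approval on the "lower the voting age" wording

swing = approve_q1 - approve_q2  # 20 percentage points of difference

print(f"Swing: {swing:.0%}")                       # Swing: 20%
print(f"That is 1 in {round(1 / swing)} people")   # That is 1 in 5 people
```

The point isn’t the code, of course; it’s that two perfectly defensible wordings moved a fifth of the sample, which is why the BPC’s disclosure rules (below) matter.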
The BPC solution to this is to make all members publish, in full, the questions asked and data tables for all results that are made public by the commissioner of the poll, plus the same for any other questions which are deemed to have a material effect on the result.
This is precisely to get around the issue that Sir Humphrey highlights in the video clip, and means that a poll can’t get away with:
Q1. Which of the following scandals involving the sleazy and dishonest government have you heard about? (Please tick all that apply.)
Q2. If a general election was held tomorrow, how would you vote?
…and then only publishing the second question.
If, however, the poll swapped them round, then the person who commissioned it would be perfectly within their rights not to publish the scandals question, because the voting question wouldn’t have had an effect on it. And until they make a question’s results public, they own the data, not the polling company (this protects the vast amount of money that political parties spend on their own private polling in the run-up to elections).
So far, so good. The way the BPC works is specifically designed to prevent those who commission polls from loading the questions, so you don’t end up with this sort of mess (from a poll by the Trump campaign in 2016).
Where this gets complex is in relation to trans rights. Polls asking whether trans women should be allowed access to women’s spaces get an overall ‘Yes’, but the poll that’s been questioned in this thread gets a strong ‘No’. What’s the difference?
The most glaring difference is that the No result comes from a question that explicitly states that trans women may be people who were born male, still look male and have had no surgery or drug treatments to alter their male physique, whereas the more positive results come from surveys that simply ask about “trans women” and leave interpretation of that term to the respondent.
In my professional opinion, both surveys have problems.
If you leave the interpretation of “trans woman” to the respondent then you have no idea if they:
- Know what is meant by that term in the current context, or
- Do know roughly what it means, but assume that it involves some level of trying to “pass”, or
- Are confusing it with older, more established terms, such as transsexual or transvestite, or
- Think that trans women are women who are transitioning to men, which would reverse the question, or
- Have no idea at all what is being asked, and have picked an option at random.
If you think any of these options are a stretch, then you might want to have a look at how confused people got around the common terms heterosexual and homosexual in a survey that Edward Lord was using data from…
There was a thread going on yesterday, asking who was blocked by Edward Lord, which meant I ended up looking at his profile (I’m not blocked, it turns out), and saw this… pic.twitter.com/TecGm3lNuN
— Andrew R (@ExcelPope) May 25, 2019
Meanwhile, the other survey, in presenting only the most extreme definition of trans women, is undoubtedly biasing the question. There’s an argument to be made that it’s reasonable, because it does factually describe a situation that could occur, but for my money it’s a bit, “Given that a baby born in the UK today could grow up to be the next Hitler, would you support making abortion easier?”
The truth is that both surveys suffer from the same problem. To get a genuine answer you need to define terms and provide information to the extent that you can no longer be said to be surveying the general population.
At the moment, comparing data from the two polls makes it appear that if the general public fully understood the term trans woman then they would be less supportive of granting them additional rights to access women’s bathrooms and sports, but that brings us up against the iron rule of polling: you only get answers to the questions you’ve asked.
Truly understanding what the general public think would require a much better designed (and probably significantly more expensive) survey. Until we have that, neither side can claim to be correct.
Perhaps the TRAs and the GCs could pool their resources to make it happen.