
Wednesday, October 9, 2013

Changing questions in tracking surveys

This is a very simple one. If the question is wrong, change it. There is no point continuing to collect rubbish data just for the sake of 'tracking'; it will not assist you in any way, and in fact it will feed misinformation.

Just make sure that when reporting changes over time, the alteration to the question is noted. Indeed, some question changes will make comparison with previous findings impossible. When analysing this information, I would generally recommend providing the previous tracking information, commenting on why the question has changed, and then presenting the new data with a discussion of how the re-wording has enhanced the analysis.

If you must include the existing incorrect question, find a way to also include a revised question asked in the correct way. However, be sure to consider how the ordering and placement of the questions will affect the findings.

Monday, September 2, 2013

The cost factor: cutting corners in design to reduce cost

I am going to make my position on this very clear from the outset... Don't do it!!!

This is perhaps one of the worst trends I have seen in research in recent years. As it becomes easier to do research at low cost (SurveyMonkey and the like), I see lots of organisations running sub-standard research.

Not only does this devalue research as a whole (respondents who receive poorly designed surveys develop cynical views towards participating in research), it results in organisations making decisions based on unsound data. This has dire ramifications both for the future of social research (as a way to provide the community with an avenue to have their say on important issues concerning them) and for the functioning of businesses that make important decisions based on poor data.

Some of the most common cost-cutting mistakes I see are:
  • Conducting surveys in-house without the expertise to adequately design the questionnaire/methodology or analyse the findings. This is a particular challenge for the industry today: commissioning a research company is often prohibitively expensive, while many organisations are required to undertake research to meet funding/board obligations. Furthermore, research is usually the first item to be reduced or removed to meet budgets, whilst the requirement to deliver evidence of progress remains.
  • Survey Monkey (or similar). I cannot express enough how dangerous Survey Monkey is to the research industry, and to organisations that use it without any expertise in research design. It has made it incredibly easy for anyone to run a survey without any knowledge of how to design questions or, indeed, how to reach a representative target market.
  • Combining two surveys to reduce administration costs, resulting in prohibitively long surveys (some more than 100 questions!!). This affects response rates (reducing representativeness) and also the accuracy of results in the later questions of the survey (response fatigue).
  • Long barrages of statements to be rated, used to reduce survey length. In a telephone survey environment this is taxing on both the interviewer and the respondent; and in a self-completion environment (online or paper based) there is a risk of 'skimming' (that is, people circling the same option, or random options, for each statement just to complete the question). There are methods to identify and remove people who do this; a minimal example is sketched after this list, with more detail for another post.
  • Using poor respondent sourcing methodology. This is an item for its own post later, but the two cheapest options at present are online research panels and random digit dialling (RDD) of landlines. Online research panels are self-selected (people choose to join) and are populated with professional respondents (people who complete lots of surveys and are therefore not necessarily typical of the general population). In Australia, recruiting survey respondents using random digit dial landline numbers, or White Pages listings (including listed mobiles), will not achieve a representative sample: less than half of adults under the age of 40 have a landline telephone, and less than 8% of mobile telephones are listed in the White Pages (mostly trades businesses). Unfortunately, mobile phone RDD in Australia is not feasible unless it is a national survey, as mobile phone numbers are not assigned by region, and screening for the defined region would result in a very low response rate and consequently a high cost.
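
As flagged in the point about statement batteries above, here is a minimal sketch of one way to flag likely 'skimmers' (straight-liners) in a ratings grid. It assumes responses sit in a table with one row per respondent and one rating column per statement; the column names, the example data and the 'identical answer to every statement' rule are illustrative assumptions, not a definitive cleaning procedure.

```python
# Minimal sketch: flagging likely 'straight-liners' in a battery of rating
# statements. Assumes each row is one respondent and each rating column
# holds a 1-5 rating; the zero-variance rule and column names are
# illustrative assumptions only.
import pandas as pd


def flag_straight_liners(responses: pd.DataFrame, rating_cols: list[str]) -> pd.Series:
    """Return a boolean Series: True where a respondent gave the same
    rating to every statement in the battery."""
    ratings = responses[rating_cols]
    return ratings.nunique(axis=1) == 1


if __name__ == "__main__":
    # Hypothetical example data: respondent 2 answers '3' to everything.
    data = pd.DataFrame({
        "respondent_id": [1, 2, 3],
        "q1": [4, 3, 1],
        "q2": [5, 3, 2],
        "q3": [2, 3, 5],
    })
    data["straight_liner"] = flag_straight_liners(data, ["q1", "q2", "q3"])
    print(data[["respondent_id", "straight_liner"]])
```

In practice you would combine a check like this with others (for example, unrealistically short completion times) before excluding anyone.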

Tuesday, August 20, 2013

Survey sampling: Is telephone research no longer viable in Australia?

Conducting research using random digit dial (RDD) landline numbers has for decades been the staple of the research industry. In recent years the effectiveness of this methodology has been in significant decline: first due to the withdrawal of the electronic White Pages from public access in 2002, followed by a significant decline in home landline installation (no longer necessary now that nearly everyone has a mobile phone).

The ACMA Communications report for 2011/2012 shows that only 22% of Australian adults with a fixed-line telephone or mobile mostly use the fixed line at home to communicate, meaning that even when people have a fixed line, it is usually not their primary method of phone communication. Furthermore, the incidence of having access to a fixed-line telephone is low amongst younger adults: in June 2011, only 63% of 18-24 year olds (mostly those still living in the parental home) and 64% of 25-34 year olds claimed to have a fixed-line telephone at home. These figures have been falling over the years, so they are most likely much lower now. [1]

Research conducted by the Social Research Centre reveals statistically significant variations in the populations reached by different telephone sampling methodologies. Specifically, those contacted on a mobile phone who did not have a landline were more likely to be male, in younger age groups, living in capital cities, born overseas, living in rental accommodation, and to have lived in their neighbourhood for less than five years. [2] In addition, significant biases were observed in the sample contacted via landline, which showed lower levels of a variety of important variables, including health issues, public transport usage, smoking and alcohol consumption. [3]

There are telephone number list providers that claim to include mobile numbers by region. These are 'listed' mobile numbers; that is, when someone obtains a new mobile number it is unlisted by default unless the owner requests that it be listed, and many mobile providers don't actively prompt people to list their numbers. Mobile numbers that are listed are highly likely to belong to home businesses (as these are the people who go out of their way to get their numbers listed), thereby skewing the 'mobile population' in the survey.

Conclusion

Using random digit dial (RDD) with a mix of mobile and landline numbers is a viable way to achieve representative samples. However, this will only work for national surveys, as mobile phone numbers are not assigned by region. Undertaking local area telephone surveys using RDD landlines or White Pages phone numbers (even if listed mobiles are included) will miss large, and often critical, chunks of the community.
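
To illustrate why region screening makes mobile RDD so costly, here is a rough back-of-the-envelope sketch. All of the rates and the target sample size are assumptions for illustration only, not figures from any actual study.

```python
# Hypothetical illustration of why region-screened mobile RDD is costly:
# mobile numbers are not tied to a region, so most contacts screen out.
# All figures below are assumed for illustration only.
in_region_share = 0.02     # assume the target region holds 2% of the adult population
contact_rate = 0.30        # assume 30% of dialled numbers yield a live contact
cooperation_rate = 0.35    # assume 35% of in-region contacts complete the survey
target_completes = 400     # desired number of completed in-region interviews

dials_needed = target_completes / (in_region_share * contact_rate * cooperation_rate)
print(f"Approximate dials required: {dials_needed:,.0f}")  # roughly 190,000 dials
```

Even under generous assumptions, the interviewer time spent screening out out-of-region contacts dwarfs the cost of the completed interviews, which is why the approach is generally only sensible for national surveys.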

It should be noted, however, that telephone surveys are still viable if you are sampling a population for which you have phone numbers for every member (e.g. a client list).

References
[1] Australian Communications and Media Authority (2012), Communications report 2010–11 series, Report 2: Converging communications channels: Preferences and behaviours of Australian communications users, ACMA.
[2] Penney, D. (2012), Second national dual-frame omnibus survey announced, www.srcentre.com.au, accessed 21 August 2013.
[3] Penney, D. & Vickers, N. (2012), Dual Frame Omnibus Survey: Technical and Methodological Summary Report, The Social Research Centre.

Tuesday, June 11, 2013

Agree Disagree Rating Questions: Do we need to move away from this question type?

Agree Disagree scales are one of the most common types of question found in social research surveys. They are usually used to ascertain the opinions and perceptions of respondents relating to a particular issue. However, research suggests that framing questions in this way results in a notably lower level of quality (read: accuracy) in responses [1].

When referring to Agree Disagree (A/D) scales, the typical format is a statement (or a battery of statements) presented with response options ranging from 'Strongly agree' through to 'Strongly disagree' (this describes the self-completion format; alterations to wording and structure would be expected for telephone surveys). This type of question is sometimes referred to as a Likert scale, after Rensis Likert, who developed it in 1932.

Framing a question in this way has a number of limitations that need to be considered:
  • The statements themselves are more often than not leading, such as "I never read the flyers I receive in my letterbox".
  • Acquiescence response bias needs to be considered. This is the phenomenon whereby some people will agree with almost anything, whether because they are agreeable by nature, because they assume the researcher agrees and so defer to that judgement, and/or because agreeing takes less effort than rationalising disagreement [1].
  • Social desirability bias also needs to be considered, whereby respondents answer in a way that places them in a more favourable light. The risk of this is greater when using direct-contact surveying methodologies such as face-to-face or telephone.
  • Some people will shy away from the extreme ends of a rating scale, particularly if it is a sensitive issue, which can result in central tendency bias.
It is instead suggested that an item-specific (IS) response scale structure be employed. For instance, instead of asking for level of agreement with the statement "I never read the flyers I receive in my letterbox", you instead ask "How often do you read the flyers you receive in your letterbox?" with a scale such as 'Always, Sometimes, Rarely, Never'. Or you could explore the issue in much more depth, with a series of questions to draw out whether there are variations between types of flyers and to ascertain greater detail about actions, such as read and discard, read and share with friends/family, read and pin on the fridge/pin-up board, etc.

Whilst this approach will clearly provide more useful detail, and avoids the risk of A/D scale biases, it does reduce the opportunity for comparison across multiple statements to identify underlying phenomena. It also requires greater concentration from the respondent to think through each scale individually. This latter consideration, however, can in some cases be a good thing, as it will encourage greater engagement with the survey (that is, minimise the risk of the respondent 'zoning out'). Adopting an IS approach can also significantly lengthen survey duration (impacting respondent satisfaction, response rates and costs).

Conclusion:
As with the design of any survey question, you need to decide on the best approach based on a wide variety of considerations. For some research, A/D scales may be appropriate, yet for others it may be wise to avoid them. The primary consideration should be how you are going to use the information and what type of information is going to be most useful to you. Cost is also a consideration (presenting a number of statements in an A/D table is cheaper than IS questions); however, cost should never overrule usefulness. If you are going to conduct a survey that is not useful or is likely to provide sub-standard results just to save money, it is better not to run the survey at all.

If you are going to use Agree Disagree Scales, things to consider are as follows:
  • Always randomise the order of the statements to avoid order effects biasing responses (a minimal sketch of per-respondent randomisation follows this list).
  • Keep the list of statements to less than 10 to minimise the risk of response fatigue.
  • The benefit of using a Likert scale is that it allows you to identify variations across statements which might point to an underlying phenomenon relating to the topic being explored.
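
On the randomisation point above, here is a minimal sketch of per-respondent randomisation of statement order for a self-completion survey. The statements (beyond the flyer example used earlier) and the choice to seed on the respondent ID are illustrative assumptions; most survey platforms offer randomisation as a built-in option.

```python
# Minimal sketch: randomising the presentation order of A/D statements for
# each respondent, so any order effects average out across the sample.
# The statements below are illustrative only.
import random

STATEMENTS = [
    "I never read the flyers I receive in my letterbox",
    "Flyers are a useful way to find out about local businesses",
    "I would prefer to receive fewer flyers",
]


def statement_order(respondent_id: int, statements: list[str]) -> list[str]:
    """Return the statements in a random order that is reproducible for a
    given respondent (seeded on their ID)."""
    rng = random.Random(respondent_id)
    return rng.sample(statements, k=len(statements))


if __name__ == "__main__":
    for rid in (101, 102):
        print(rid, statement_order(rid, STATEMENTS))
```

Seeding on the respondent ID keeps the order stable for that respondent if the survey page is reloaded, while still varying the order across the sample.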
References
[1] Saris, W.E. et al. (2010), Comparing Questions with Agree/Disagree Response Options to Questions with Construct-Specific Response Options, Survey Research Methods, Vol. 4, No. 1, pp. 61-79.