Sunday, May 8, 2016

New site

This blog has now been migrated to http://asdfresearch.com.au/blog/. There are additional posts there, and all future posts will appear there.

Thank you for your interest!

Tuesday, January 21, 2014

Tips for analysis

When compiling a written analysis of data, it is important to frame explanations in the correct way. Here are a few tips that may help.
  • If you are referring to an age group, make sure that you word it in such a way that people can't misinterpret it. For instance, if your age group is 50+ year olds, avoid saying "A high incidence of those aged over 50 years said that they like carrots". Framed like this, the statement could be misinterpreted to apply only to those aged 51 years or higher (because they are over 50 years). Instead, it should say "A high incidence of those aged 50 years or over said that they like carrots".
  • When talking about the results of a factual yes/no question, for instance "Do you have a health care card?", be careful that you are not making assumptions about attitudes. Saying that health care cards are 'most likely' to be held by those aged 50 years or over sounds as though, if you asked people aged 50 years or over whether they wanted one, they would be more inclined to say yes than those under the age of 50. It sounds like a choice/attitudinal decision. Instead, it should say that a health care card is held by a higher proportion of residents aged 50 years or over.
  • Another common mistake is assuming that if the respondent says they have done something within a certain period, then this is a common action. For instance, a question may ask "In the last 3 months, have you seen a doctor because you were turning orange from eating too many carrots?". Often this will be reported as "Half of people aged 50 years or over visit the doctor because they think they are turning orange from eating too many carrots". It can't be assumed that just because it happened in the last 3 months it will be a regular thing. Instead, the analysis should be "Half of people aged 50 years or over visited the doctor in the three months prior to interview because they thought they were turning orange from eating too many carrots".
  • If you are comparing two different groups, be careful about how you are framing the analysis. Let's assume that the data has respondents from across two suburbs, A and B. In the sample there are 100 respondents in suburb A and 500 respondents in suburb B. All respondents are asked if they like carrots, and it was found that in suburb A 50 per cent said that they like carrots and in suburb B 20 per cent said that they like carrots. The tendency is to report this as "More people in suburb A like carrots". This is correct proportionally (50 per cent versus 20 per cent) but not numerically (50 people in suburb A versus 100 people in suburb B; see the sketch after this list). To report this correctly you would either say "a higher proportion of those in suburb A said that they like carrots" or "a greater number of those in suburb B said that they like carrots".
  • Avoid starting a sentence with a number (unless it is a dot point list). Try to use an expression, such as "Half of respondents..." instead. If you simply must start the sentence with a number, write it as text, not numerals; so "Fifty per cent of respondents" not "50 per cent of respondents".
  • With regard to using "%", "percent" or "per cent" in the written analysis, this will differ across organisations. It is a good idea to get your hands on the style guide of the organisation for which the report is written to make sure it conforms to their rules.
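To make the proportion-versus-number distinction in the suburb example concrete, here is a minimal sketch (Python, using only the illustrative figures from the dot point above; it is not tied to any particular analysis package):

```python
# Illustrative figures from the suburb example above (not real data).
sample_size = {"A": 100, "B": 500}          # respondents per suburb
like_carrots_pct = {"A": 0.50, "B": 0.20}   # proportion who said they like carrots

for suburb in sample_size:
    count = sample_size[suburb] * like_carrots_pct[suburb]
    print(f"Suburb {suburb}: {like_carrots_pct[suburb]:.0%} of respondents "
          f"({count:.0f} people) said they like carrots")

# Suburb A: 50% of respondents (50 people) said they like carrots
# Suburb B: 20% of respondents (100 people) said they like carrots
# A higher proportion in suburb A, but a greater number in suburb B.
```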
In addition to these common analysis mistakes, there are a couple of rules that I use when analysing data:
  • In a typical research analysis report (non-academic), after every sentence or paragraph ask "so what?". That is, the analysis should provide meaningful information, not just put forth numbers (unless it is a pure technical report of course).
  • When reviewing a research report, do a search for the word 'interesting'. If you have sentences such as "it is interesting to note..." or "Interestingly..." then delete them. Using 'interesting' to try and find meaning in data is a sure sign that it has no meaning.

Wednesday, October 9, 2013

Changing questions in tracking surveys

This is a very simple one. If the question is wrong, change it. There is no point continuing to collect rubbish data just for the sake of 'tracking'. It will not assist you in any way; in fact, it will feed misinformation.

Just make sure that when reporting changes over time, the alteration to the question is noted. Indeed, some question changes will mean the results cannot be compared to previous findings at all. When analysing this information, I would generally recommend providing the previous tracking information, commenting on why the question has changed, and then presenting the new data with a discussion of how the re-wording has enhanced the analysis.

If you must include the existing incorrect question, find a way to also include a revised question asked in the correct way. However, be sure to consider how the ordering and placement of the questions will affect the findings.

Monday, September 2, 2013

The cost factor: cutting corners in design to reduce cost

I am going to make my position on this very clear from the outset... Don't do it!!!

This is perhaps one of the worst trends to emerge in research in recent years. As it becomes easier to run surveys cheaply (SurveyMonkey etc.), I see many organisations conducting sub-standard research.

Not only does this devalue research as a whole (that is, respondents who receive poorly designed surveys develop cynical views towards participating in research), it results in organisations making decisions based on unsound data. This has dire ramifications both for the future of social research (as a way to provide the community with an avenue to have their say on important issues concerning them) and for the functioning of businesses that make important decisions based on poor data.

Some of the most common cost-cutting mistakes I see are:
  • Conducting surveys in-house without the expertise to adequately design questionnaires/methodology or analyse findings. This is a particular challenge for the industry today as commissioning a research company to conduct research is often prohibitively expensive, while many organisations are required to undertake research to meet funding / board obligations. Furthermore, research is usually the first item to be reduced or removed to meet budgets, whilst the requirement for delivering evidence of progress remains.
  • SurveyMonkey (or similar). I cannot express enough how dangerous SurveyMonkey is to the research industry, and to organisations that use it without drawing on any expertise in research design. It has made it incredibly easy for anyone to run a survey without any knowledge of how to design questions or, indeed, how to reach a representative target market.
  • Combining two surveys to reduce administration costs, resulting in excessively long surveys (some more than 100 questions!!). This affects response rates (reducing representativeness) and also the accuracy of responses to the later questions within the survey (response fatigue).
  • Long barrages of statements to be rated, used to keep the survey shorter. In a telephone survey environment this is taxing for both the interviewer and the respondent; and in a self-completion environment (online or paper based) there is a risk of 'skimming' (that is, people just circling the same option, or random options, for each statement simply to complete the question - there are methods to identify and remove people who do this, and a rough sketch of one such check follows this list, but a fuller treatment is for another post).
  • Using poor respondent sourcing methodology. This is an item for its own post later, but the two cheapest options at present are online research panels and random digit dialling (RDD) of landlines. Online research panels are self-selected (people choose to join) and are populated with professional respondents (people who complete lots of surveys and are therefore not necessarily typical of the general population). In Australia, recruiting survey respondents using random digit dial landline numbers, or White Pages listings (including listed mobiles), will not achieve a representative sample. Less than half of adults under the age of 40 years have a landline telephone, and less than 8% of mobile telephones are listed in the White Pages (mostly trades businesses). Unfortunately, using mobile phone RDD in Australia is not feasible unless it is a national survey, as mobile phone numbers are not assigned by region, and screening for the defined region would result in a very low response rate and, consequently, high cost.
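As a rough illustration of one way to flag possible 'skimming' (straight-lining) mentioned above - a sketch only, using made-up data; flagged cases should be reviewed rather than automatically removed, since a respondent may genuinely hold the same view on every statement:

```python
# Sketch: flag respondents who gave the identical rating to every statement
# in a battery (one sign of 'skimming'). The data below is made up.
responses = {
    "resp_001": [3, 3, 3, 3, 3, 3, 3, 3],   # same answer throughout - suspect
    "resp_002": [4, 2, 5, 3, 4, 1, 2, 5],
    "resp_003": [1, 1, 1, 1, 1, 1, 1, 1],   # same answer throughout - suspect
}

straight_liners = [rid for rid, ratings in responses.items()
                   if len(set(ratings)) == 1]
print("Flagged for review:", straight_liners)
# Flagged for review: ['resp_001', 'resp_003']
```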

Tuesday, August 20, 2013

Survey sampling: Is telephone research no longer viable in Australia?

Conducting research using random digit dial (RDD) landline numbers has for decades been the staple of the research industry. In recent years the effectiveness of this methodology has been in significant decline; first due to the withdrawal of the electronic White Pages from public access in 2002, followed by a significant decline in households installing landlines (no longer necessary now that everyone has mobile phones).

The ACMA Communications report for 2011/2012 shows that only 22% of Australian adults who have a fixed-line telephone or mobile mostly use the fixed line at home to communicate, meaning that even when people have a fixed line, it is usually not their primary method of phone communication. Furthermore, the incidence of having access to a fixed-line telephone is low amongst younger adults. In June 2011 it was found that only 63% of 18-24 year olds (mostly those still living in their parental home) and 64% of 25-34 year olds claimed to have a fixed-line telephone at home. These figures have been falling over the years, so are most likely much lower than this now. [1]

Research conducted by the Social Research Centre reveals that there are statistically significant variations in the populations reached by different telephone sampling methodologies. Specifically, those who were contacted over a mobile phone and did not have a landline showed a higher incidence of being male, being in younger age groups, living in capital cities, being born overseas, living in rental accommodation, and having lived in their neighbourhood for less than five years. [2] In addition, significant biases were observed in the sample contacted over landline, with the landline sample showing lower levels of a variety of important variables including health issues, public transport usage, smoking and alcohol consumption. [3]

There are telephone number list providers out there that claim to include mobile numbers by region. These are 'listed' mobile numbers. That is, when someone obtains a new mobile number, it is unlisted by default unless the owner requests that it be listed. Many mobile providers don't actively prompt people to have their number listed. Mobile numbers that are listed are highly likely to belong to home businesses (as these are the people who go out of their way to get their numbers listed), thereby skewing the 'mobile population' in the survey.

Conclusion

Using random digit dial (RDD) with a mix of mobile and landline numbers would be viable for achieving representative samples. However, this will only work for national surveys, as mobile phone numbers are not assigned by region. Undertaking local area telephone surveys using RDD landlines or White Pages phone numbers (even if listed mobiles are included) will miss large, and often critical, chunks of the community.

It should be noted, however, that telephone surveys would still be viable if you are sampling a population for which you have phone numbers for everyone (e.g. using a client list).

References
[1] Australian Communications and Media Authority (2012), Communications report 2010–11 series, Report 2: Converging communications channels: Preferences and behaviours of Australian communications users, ACMA.
[2] Penney, D (2012), Second national dual-frame omnibus survey announced, www.srcentre.com.au, accessed 21 August 2013.
[3] Penney, D & Vickers, N (2012), Dual Frame Omnibus Survey: Technical and Methodological Summary Report, The Social Research Centre.

Tuesday, June 11, 2013

Agree Disagree Ratings Questions: Do we need to move away from this question type?

Agree Disagree scales are one of the most common types of question found in social research surveys. They are usually used to ascertain the opinions and perceptions of respondents relating to a particular issue. However, research suggests that framing questions in this way results in a notably lower level of quality (read: accuracy) in responses [1].

When referring to Agree Disagree (A/D) scales, the following is how a question would typically be framed/presented (this is an example of self-completion format; alterations to wording and structure would be expected for telephone surveys). This type of question is sometimes referred to as a Likert scale, after Rensis Likert, who developed it in 1932. For example, a respondent might be shown the statement "I never read the flyers I receive in my letterbox" and asked to select one of: Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree.

Framing a question in this way has a number of limitations that need to be considered:
  • The statements themselves are more often than not leading, such as "I never read the flyers I receive in my letterbox".
  • Acquiescence response bias needs to be considered. This is the phenomenon whereby some people will agree with almost anything: due to being an agreeable person, assuming that the researcher agrees and deferring to their judgement, and/or because agreeing takes less effort than rationalising disagreement [1].
  • Social desirability bias also needs to be considered, whereby respondents will answer in a way that places them in a more favourable light to others. The risk of this is greater when using direct contact surveying methodologies such as face-to-face or telephone.
  • Some people will shy away from the extreme ends of a rating scale, particularly if it is a sensitive issue, which can result in central tendency bias.
It is instead suggested that one employ an item-specific (IS) response scale structure. For instance, instead of asking for level of agreement with the statement "I never read the flyers I receive in my letterbox", you instead ask "How often do you read the flyers you receive in your letterbox?" with a scale such as 'Always, Sometimes, Rarely, Never'. Or you could explore the issue in much more depth, using a series of questions to draw out whether there are variations between types of flyers, and to ascertain greater detail about actions, such as read and discard, read and share with friends/family, read and pin on the fridge/pin-up board, etc.

Whilst this approach will clearly provide more useful detail, and avoids the risk of A/D scale biases, it does reduce the opportunity for comparison across multiple statements to identify underlying phenomena. It also requires greater levels of concentration from the respondent to think through each scale individually. This latter consideration, however, can in some cases be a good thing, as it will encourage greater engagement with the survey (that is, minimise the risk of the respondent 'zoning out'). Adopting an IS approach can also significantly lengthen survey duration (impacting on respondent satisfaction, response rates and costs).

Conclusion:
As with the design of any survey question, you need to decide on the best approach based on a wide variety of considerations. For some research, A/D scales may be appropriate, yet for others it may be wise to avoid them. The primary consideration should be how you are going to use the information and what type of information is going to be most useful to you. Cost is also a consideration (presenting a number of statements in an A/D table is cheaper than IS questions); however, cost should never overrule usefulness - if you are going to conduct a survey that is not useful or is likely to provide sub-standard results just to save money, it is better not to run the survey at all.

If you are going to use Agree Disagree Scales, things to consider are as follows:
  • Always randomise the list of statements to avoid response bias (a sketch of per-respondent randomisation follows this list).
  • Keep the list of statements to less than 10 to minimise the risk of response fatigue.
  • The benefit of using a Likert scale is that it allows you to identify variations which might point to an underlying phenomenon relating to the topic being explored.
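As a minimal illustration of the first point above - a sketch only; the second and third statements are invented for this example, and most survey platforms offer a built-in randomisation option that achieves the same thing:

```python
import random

# Sketch: present the statement battery in a fresh random order for each
# respondent so that order effects average out across the sample.
statements = [
    "I never read the flyers I receive in my letterbox",   # from the post above
    "I find letterbox flyers useful",                      # invented for illustration
    "I would prefer to receive flyers by email",           # invented for illustration
]

order = random.sample(statements, k=len(statements))  # new order per respondent
for statement in order:
    print(statement)
```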
References
[1] Saris, W.E. et al. (2010), Comparing Questions with Agree/Disagree Response Options to Questions with Construct-Specific Response Options, Survey Research Methods, Vol. 4, No. 1, pp. 61-79.