How do we judge quality when it comes to online research data and the insight generated from it? Depending on where you sit, it could be the fact that you've actually generated an insight (and we all know some decision is better than no decision, don't we?), that the data hangs together, that there are few surprises (and that those surprises can be explained if you think about them for a bit), that we hit the numbers, or even that we've added trap questions and removed flat-liners and speeders.
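For readers who haven't run those last two checks themselves, here is a minimal sketch of what they typically look like: flagging 'speeders' (respondents who finish implausibly fast) and 'flat-liners' (respondents who give the same answer to every item in a rating grid). The field names and the one-third-of-median cutoff are illustrative assumptions, not any particular platform's schema or industry standard.

```python
# Two routine survey-quality checks: speeders and flat-liners.
# Field names ("duration_secs", "grid_answers") and the speed_ratio
# threshold are illustrative assumptions, not a real platform's schema.
from statistics import median

def flag_speeders(responses, speed_ratio=0.33):
    """Flag respondents who completed in under a third of the median time."""
    cutoff = median(r["duration_secs"] for r in responses) * speed_ratio
    return [r["respondent_id"] for r in responses if r["duration_secs"] < cutoff]

def flag_flat_liners(responses, grid_key="grid_answers"):
    """Flag respondents who gave one identical answer across a whole grid."""
    return [
        r["respondent_id"]
        for r in responses
        if len(r[grid_key]) > 1 and len(set(r[grid_key])) == 1
    ]

responses = [
    {"respondent_id": "r1", "duration_secs": 420, "grid_answers": [4, 2, 5, 3]},
    {"respondent_id": "r2", "duration_secs": 95,  "grid_answers": [3, 3, 3, 3]},
    {"respondent_id": "r3", "duration_secs": 390, "grid_answers": [5, 4, 4, 2]},
]

print(flag_speeders(responses))     # ['r2'] – well under a third of the median
print(flag_flat_liners(responses))  # ['r2'] – straight-lined the grid
```

In practice the thresholds have to be tuned per survey (a genuinely short questionnaire produces legitimate fast completes), which is one reason these checks alone won't catch a determined cheat.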
At Touchstone, we don’t believe these checks are enough. In fact, we know they are not. People are rewarded for taking surveys online, and that reward creates an incentive to game the system: in the USA there are blogs and forums that discuss how to get onto panels or river samples and earn cash as a dishonest respondent. Bots have been, and are still being, developed that complete multiple surveys and so collect multiple incentives for a single individual. In the Far East, we have uncovered evidence of ‘respondent factories’, with dozens of surveys completed by the same individual(s) utilising device cloaking, VPNs and other techniques to get around the typical duplicate-respondent checks. Depending on the country and the subject matter, cheating can account for as much as 20% of survey responses.
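To make the evasion concrete, here is a minimal sketch of the kind of duplicate-respondent check those factories are built to defeat: hash a handful of device and network signals and reject any repeat. The signal names here are illustrative assumptions; production systems combine far more signals than this.

```python
# A simplistic duplicate-respondent check: fingerprint a few device and
# network signals, then flag any response whose fingerprint repeats.
# Signal names are illustrative assumptions, not a real panel's fields.
import hashlib

def device_key(response):
    """Build a coarse fingerprint from device and network signals."""
    raw = "|".join([
        response.get("ip_address", ""),
        response.get("user_agent", ""),
        response.get("screen_resolution", ""),
        response.get("timezone", ""),
    ])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def flag_duplicates(responses):
    """Flag any response whose fingerprint has already been seen."""
    seen, duplicates = set(), []
    for r in responses:
        key = device_key(r)
        if key in seen:
            duplicates.append(r["respondent_id"])
        seen.add(key)
    return duplicates

responses = [
    {"respondent_id": "a1", "ip_address": "203.0.113.7", "user_agent": "UA-1",
     "screen_resolution": "1920x1080", "timezone": "Asia/Manila"},
    {"respondent_id": "a2", "ip_address": "203.0.113.7", "user_agent": "UA-1",
     "screen_resolution": "1920x1080", "timezone": "Asia/Manila"},
]
print(flag_duplicates(responses))  # ['a2'] – identical signals, flagged
```

The weakness is plain in the code: rotate any single input – a fresh VPN exit IP, a cloaked user agent – and the hash changes completely, so the 'duplicate' sails straight through.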
At Touchstone, we deploy additional levels of quality control so that the data – and insights – we give our clients are demonstrably honest, accurate and reliable. It isn’t easy, and we have to review and update our processes regularly, but without that integrity our industry is in jeopardy.