Stated preferences do not align with revealed preferences

Fancy way of saying that you cannot necessarily believe what people say they think in a survey.

For example, in a survey on giving more money to charity, people are more likely to express the opinion that of course they would give more to charity if X, Y or Z were to happen. However, it is unlikely that everyone expressing that opinion would, in fact, give more money to charity when Y eventually does happen.

Another example: questions such as "how much do you drink?" or "would you like to drink less?" tend to receive answers that the person being questioned thinks will paint them in a better light, rather than a 'truer' answer.

To give a concrete example, Wakefield, Hayes, Durkin and Borland (2013) has been used by tobacco control advocates as justification that standardised packaging was a success in Australia, because people reported thinking about quitting more when smoking cigarettes in plain packaging. Unfortunately, no-one thought to follow up with these thoughtful people to find out whether they had actually given up.

Another form of this appears in "studies" that "predict" certain behaviour despite similar real-life experience suggesting the opposite - for example, the claim that increasing sin taxes will cut consumption[1], especially among the poor. When such taxes are actually put into practice, the poor either cut down on other things or resort even more to illicit sources.


References