Opinion

Why we should take some data with a pinch of salt

Last week, a friend sent me a video of a psychiatrist citing a study that claimed 11 per cent of the population in Oman is depressed. My friend wanted to know if this was true and, if so, what might explain it. I responded that, before we jump to conclusions, the video should have provided more details about the study the psychiatrist cited and its credibility.
Every day, we are bombarded with information designed to persuade, inform, or influence us. A news headline touts a shocking statistic; a social media post shares a “groundbreaking” study. This makes distinguishing truth from manipulation more challenging than ever.
Our brains are wired to accept convincing narratives and impressive-looking numbers without much scrutiny. We live in an age where anyone can present opinions as facts, where correlation gets mistaken for causation, and where a single study can launch a thousand misleading headlines.
In his book, “A Field Guide to Lies”, Daniel Levitin discusses how to spot misleading statistics and recognise false experts. He explains that, to obtain reliable data, you need real people to collect the information. Since you can’t interview every single person in the country, you need to decide on a sample. That’s where the problem of sampling bias comes in.
Say you want to survey a town about people’s attitudes to boycotting certain brands. You head to a shopping mall and interview people across different ages, genders and nationalities. This, however, is still not representative, as you’ve already excluded people who are sick at home, mothers with small children who can’t easily get to the mall, and night workers sleeping during the day. You may think that doing door-to-door surveys will solve this, but if you knock during the day, you miss everyone who is out at work in town. Switch to nighttime, and you exclude night-shift workers. Every approach systematically leaves someone out.
Even if you somehow managed to reach a perfect cross-section of people, two more biases undermine your results.
The first is called participation bias, which means that not everyone you ask will agree to participate, and their reasons for declining can affect your data in predictable ways. The volunteers aren’t random; they’re self-selecting based on who cares to engage with your particular topic.
The second is reporting bias, which is the gap between what people actually think and what they’re willing to tell the researcher. Some participants will exaggerate their income to appear more successful; others simply tell you what they think you want to hear.
Unfortunately, every sample includes some form of bias. The question isn’t whether bias exists, but what kind of bias you’re dealing with.
So, when you see survey results, ask yourself: who got left out of this sample? Who chose to participate, and why? What might respondents have been reluctant to admit honestly? And when listening to experts, ask whether they are presenting data backed by research or just stating their opinion. If it’s just an opinion, how trustworthy are they?
Finally, when looking for information online, favour sites that end in .edu, .gov or .org, as they tend to be more neutral than commercial websites with obvious agendas.