Is remote work a force for good or evil? Did we find the cure for cancer six times this week? Are UFOs, UAPs, or extraterrestrials finally dropping in to say hey? 

There’s a lot of information out there. If you spend any time online in 2023, you’re navigating a blizzard of information on a daily basis. And every source, from the New York Times to TikTok, wants to be perceived as true, accurate, and bias-free. Unfortunately, that’s not reality. 

Media literacy means understanding how information is generated and disseminated. It’s a survival skill for our information age, helping us see clearly through that blizzard of data. Think of media literacy as a compass, guidebook, and high-vis vest for navigating this complicated media landscape. It’s the only way to form opinions based on actual, sound evidence. 

As a People Insights researcher at Atlassian, and through my academic background in organizational psychology, I’ve seen firsthand how scientific findings are distorted in the media, purposefully or not. 

Here, we’ll discuss how new knowledge is discovered, shared, and oftentimes misrepresented, and how to evaluate it with a critical eye. 

Today’s knowledge landscape   

Most new knowledge is generated through original, primary research: data collected directly from the subject being studied – through experiments, interviews, or observation – rather than secondhand. 

Qualified, passionate experts are constantly investigating any topic you can think of, from teamwork to Martian bacteria.

Research like this typically first appears in academic journals, which aren’t necessarily accessible to the average person. They’re expensive, tricky to find, and generally not intended for casual reading. 

Instead, most of us learn through media outlets, like TV news networks, print or online newspapers, and niche publications like Wired or Scientific American. These publications have the critically important job of sharing new knowledge with the general public! 

But despite that noble mission, it’s at this secondary stage that an enormous amount of bias, misrepresentation, or just plain inaccuracy happens. 

I’ve even seen my own research into emotional intelligence misrepresented in the media. I found a decline in certain aspects of emotional intelligence among university students over a 17-year period. But certain media outlets used the research to claim that university students were “snowflakes” – most definitely not the same thing.

This doesn’t mean that primary research is flawless, all media is “fake news,” or that you need to sign up for JSTOR. It just means we all must practice fluent media literacy, and never take claims and stories at face value. 

It’s worth mentioning that firsthand reporting – that is, media coverage of world news and current events – counts as research, too. It’s a different discipline with different methodologies, so I won’t cover it in detail here. But many of the strategies we’ll cover can be used to evaluate those kinds of stories, in addition to scientific findings.

Recognizing bias in media outlets

If you feel like you’re seeing wild headlines every day, you’re not alone – and many completely contradict each other. One newspaper claims remote work is the future, while another calls it the death of creativity. One story says there’s more competition for talent than ever, while another warns that AI-induced mass unemployment is nigh. 

What gives? The reality is that most research is incredibly nuanced. But nuance doesn’t make for a strong headline. Every media outlet wants to tell a compelling story, because that’s how they stay in business.  

The reasons for this aren’t always malicious. Plenty of content is intentionally biased, whether to get clicks and views or push a political agenda. But even when it’s not, writing and research are influenced by the author’s viewpoint. It’s not possible for any source to be completely objective and neutral. 

Furthermore, a single article – let alone a headline – rarely has space for all the context and caveats in an original piece of research. What looks like a pile of wild, often contradictory stories is more likely many different types of studies, each taking a different approach or angle on the same issue. 

Here are some common ways I see research misrepresented in the media:

  • Findings exaggerated for dramatic effect
  • Confusing correlation and causation (more on those terms below)
  • Isolating and exaggerating one central finding, while removing context and nuance
  • Ignoring other, well-established research that contradicts the finding
  • Cherry-picking data, or only reporting findings that support their viewpoint
  • Misrepresenting the scale of an effect – reporting a small effect as large, or vice versa

If you take one thing away from this article, make it this: No claim and no source is free of bias – including this one! Aim to cross-check every article, broadcast, or social media post with the primary source. Many new “findings” are greatly misrepresented, if not flat-out wrong.

The basics of primary research

So, we’re all trudging through that snowstorm of half-truths and misrepresentations. How can we possibly find our way? 

When we understand the nuts and bolts of different kinds of data, studies, and claims, it’s much easier to have an informed opinion about our modern deluge of information. 

Correlation and causation

Confusing correlation with causation is an incredibly common way scientific findings are misunderstood. 

  • Causation means that one thing directly causes another. Most of us already understand this concept – for example, we know that exercise causes increased physical strength, or that hot temperatures cause burns. 
  • Correlation means that two things occur alongside each other. They are connected in some way, but that doesn’t mean there’s a causal relationship between them. 

It’s easy to see why the two get mixed up. But just because two things happen at the same time, or rise and fall together, doesn’t mean one caused the other. 

One classic example involves toasters and birth control: there’s an association between someone using birth control and how many toasters they have in their house. Obviously, buying toasters won’t cause someone to use birth control, or vice versa. A better explanation is that people with more economic resources are more likely to use birth control – and they can afford more toasters! 
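To make the confounder idea concrete, here’s a minimal Python sketch – with entirely made-up numbers and a hypothetical “income” variable – that simulates two variables which never influence each other, yet still correlate because they share a common cause:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder: economic resources (arbitrary units).
income = rng.normal(50, 15, n)

# Neither variable causes the other; both are driven by income plus noise.
toasters = 0.05 * income + rng.normal(0, 0.5, n)            # toasters owned
uses_birth_control = (income + rng.normal(0, 15, n)) > 55   # True/False

# They still correlate, because they share a common cause.
r = np.corrcoef(toasters, uses_birth_control.astype(float))[0, 1]
print(f"correlation between toasters and birth control use: {r:.2f}")
```

Run it and the correlation comes out clearly positive – not because toasters and birth control are causally linked, but because the simulated income drives both.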

Types of studies 

Not all research is created equal. There are many types of studies and data, designed in different ways and meant to answer different types of questions. 

A self-reported employee survey isn’t the same as a meta-analysis of 10,000 participants – but it’s not trying to be. As long as you understand their strengths and limitations, both have their place in helping you develop an informed opinion.

Meta-analysis

A meta-analysis is a scientific synthesis of many different existing studies on a particular subject. Researchers examine all the available studies and identify what greater conclusions can be drawn from all their findings. 

Meta-analysis is the best way to understand scientific consensus on a topic, identifying outcomes that have occurred over and over again. It will be less useful for new areas of research, where there aren’t many studies to compare. 
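The statistical core of most meta-analyses is careful weighted averaging. Here’s a minimal sketch of the classic fixed-effect approach, using made-up effect sizes from five hypothetical studies – more precise studies (smaller standard errors) get more weight in the pooled estimate:

```python
import numpy as np

# Made-up effect sizes and standard errors from five hypothetical studies.
effects = np.array([0.30, 0.12, 0.45, 0.22, 0.05])
std_errors = np.array([0.10, 0.15, 0.20, 0.08, 0.12])

# Inverse-variance weighting: more precise studies count for more.
weights = 1 / std_errors**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled effect: {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
```

Real meta-analyses layer plenty of machinery on top (random-effects models, publication-bias checks), but that weighting logic is the heart of it.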

Experimental design

An experimental design is a type of study used to determine whether there is a cause-and-effect relationship between two variables.

For example, to test the effects of exercise on stress levels, an experimental study could randomly divide participants into two groups. The experimental group might follow an exercise routine over a specified time period, while the control group would maintain their normal routine.

At the end of the experiment, the researchers would test all the participants’ stress levels. Do the exercisers have lower levels of stress?
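Here’s a toy version of that comparison in Python. The stress scores are simulated (we simply assume the exercise group ends up a bit lower), and the two-sample t-test is one common way to check whether a difference between groups is bigger than chance alone would predict:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100  # participants per group

# Made-up stress scores (0-100); we assume exercise lowers the mean a bit.
control = rng.normal(60, 10, n)    # kept their normal routine
exercise = rng.normal(55, 10, n)   # followed the exercise program

# Compare group means with a two-sample t-test.
t_stat, p_value = stats.ttest_ind(exercise, control)
print(f"exercise mean: {exercise.mean():.1f}, control mean: {control.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```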

Experimental studies can provide evidence for causal relationships (and are the gold standard for doing so!), but the study design or analysis can have flaws, so they don’t provide 100% airtight proof – especially if it’s just one study.

Survey data

Survey data is collected by asking participants survey questions. 

A survey about employee experience might involve distributing a questionnaire to thousands of employees, asking about their work history, current role, and workplace experience over time. 

Survey findings don’t prove causation, and there’s inevitable bias in participants’ answers. But they can still provide interesting insights – especially about people’s feelings, beliefs, and perceptions. 

Qualitative data 

Qualitative research involves interviewing participants and investigating their perspectives on a particular subject.

To explore how the COVID-19 pandemic affected workers, researchers could interview focus groups of employees on their personal and professional experiences. 

Qualitative data is a rich way to explore the “how” and “why” of a phenomenon, taking context and environment into account. 

But since this type of research is based on testimony from individuals, the results are subjective. Qualitative studies are especially useful for exploratory research, when there’s little to no other data on a subject. Other study structures, like experimental designs, are best when trying to establish causal relationships.

Stop! 5 steps to evaluating claims in the media

You don’t need to be a professional researcher to practice media literacy. Use these 5 steps to vet any story or finding you come across. 

1. Is this claim realistic?

A super bold claim is a dead giveaway that you’re missing nuance, context, or even the basic truth.

Example: In 1986, Newsweek claimed women over 40 were more likely to be killed in a terrorist attack than get married.

2. Is this a reputable source? 

No media source is free of bias, but some are certainly more credible than others. 

Example: For business news, the Wall Street Journal would be more credible than the Daily Mail.

3. Who’s the author? 

Next, check out the individual who wrote or reported on the piece. Do you see any possible conflicts of interest or biases? 

Example: A commercial real estate developer claiming that remote work is bad – while they profit from keeping offices open!

4. What does the primary source tell us? 

What type of study is this, and when was it published? Is the journal peer-reviewed (evaluated by other experts in the field)? Does the abstract even line up with what’s claimed in the article? 

Example: In 2019, multiple news articles claimed that hair dye “may cause” breast cancer. But the original research found merely a correlation – not a particularly strong one – and it wasn’t confirmed by meta-analysis.

5. What does the existing research say? 

It’s a red flag if a finding flies in the face of everything else that’s been published. 

Example: A study claiming that exercise is bad for you, or distractions are great for productivity. 

Media literacy: a survival skill for the information age

Our modern abundance of information is a good thing. We have access to more knowledge than ever before! But it’s worth developing your media literacy muscle to determine what “new” knowledge is trustworthy, and what calls for further investigation. 