How to Spot Bad Data: A Case Study in Corporate Research Theater

A survey from financial services firm Empower went viral with a claim that Gen Z thinks they need a $587,797 salary to be "financially successful" - nearly six times what Boomers said they need ($99,874). The story was picked up by Fortune, Entrepreneur, Forbes, regional/local news, and social media, spawning thought-leadership pieces about generational entitlement, economic anxiety, and the death of the American Dream.

There's just one problem: the "research" behind these claims is so methodologically flawed that we can't actually conclude anything meaningful from it.

This post isn't designed to be a hit piece on Empower or the researchers involved. Instead, we want to use this as a teaching moment - a real-world example of how data gets weaponized, either through incompetence or intent, to create viral content that misleads the public.

Whether Empower acted in good faith but lacked statistical rigor, or deliberately designed their survey to generate shocking headlines (and free press for their brand), the result is the same: Millions of people now believe something that isn't supported by the research that Empower published.

Let's break down exactly what's wrong here, and more importantly, how you can spot these red flags in any "research" you encounter.


The Claim

From Empower's press release and website:

Is there a secret to financial success? Most Americans (52%) say ‘yes’ – and the average salary considered successful is $270,000 per year, and $5.3 million in overall net worth.
— Empower

Breaking down by generation:

  • Gen Z (1997-2012): $587,797 salary needed

  • Millennials (1981-1996): $180,865 salary needed

  • Gen X (1965-1980): $212,321 salary needed

  • Boomers (1946-1964): $99,874 salary needed

These are shockingly precise numbers. Gen Z's figure is reported to the exact dollar: $587,797. That kind of precision suggests rigorous statistical methodology, right?

Wrong.


🚩 Red Flag #1: The "Methodology" Section Is Two Sentences Long

Here's the entire methodological disclosure from Empower's study:

"The Empower 'Secret to Success' study is based on online survey responses from 2,203 Americans ages 18+ from September 13-14, 2024. The survey is weighted to be nationally representative of U.S. adults."

That's it. Two sentences. For a study making national headlines and influencing public discourse about generational economics.

What's Missing?

Any credible research study should disclose:

Basic Sample Information:

  • Sample size breakdown by generation (How many Gen Z respondents? 100? 1,000?)

  • Age ranges within each generation

  • Geographic distribution

  • Income distribution of respondents

  • Education levels

  • Employment status

Statistical Methodology:

  • Are the reported figures means, medians, or modes?

  • What are the standard deviations?

  • What are the confidence intervals?

  • How were outliers handled?

  • What's the margin of error?

Survey Design:

  • Complete question wording in context (Empower came close here, but the disclosure still leaves a lot to be desired)

  • Question order (what came before/after?)

  • Response options provided

  • Whether questions were required or optional

Weighting Details:

  • What variables were used for weighting?

  • What was the original vs. weighted sample composition?

  • How much did individual responses get weighted?

Quality Controls:

  • Non-response rates

  • Completion rates

  • Data cleaning procedures

  • Attention check questions

Empower provides none of this.

Why This Matters

Imagine we told you, "We surveyed 2,203 people and Gen Z needs $587,797 to feel successful."

Your first questions should be:

  • How many Gen Z people did you actually survey?

  • What was the range of answers?

  • Is $587,797 the average or the middle answer?

  • How many people said $1 million+ and skewed the average?

Without this information, the number "$587,797" is meaningless. It could mean:

  • 500 Gen Z respondents all said between $550k-$625k (meaningful consensus)

  • 300 Gen Z respondents: 270 said $100k-$200k, and 30 said $5 million+ (average pulled up by outliers)

  • Anything in between

We. Don't. Know.
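
To see just how different those stories are, here's a quick simulation (all numbers invented, since Empower published no distributions) in which both scenarios produce roughly the same headline "average":

```python
from statistics import mean, median
import random

random.seed(42)

# Scenario 1: 500 respondents in tight consensus around ~$588k
consensus = [random.uniform(550_000, 625_000) for _ in range(500)]

# Scenario 2: 270 modest answers plus 30 multimillion-dollar outliers
skewed = ([random.uniform(100_000, 200_000) for _ in range(270)]
          + [random.uniform(4_200_000, 4_800_000) for _ in range(30)])

for name, data in [("consensus", consensus), ("skewed", skewed)]:
    print(f"{name}: mean ${mean(data):,.0f}, median ${median(data):,.0f}")

# consensus: mean ~$588k, median ~$588k
# skewed:    mean ~$585k, median ~$150k
# Nearly identical "averages" describing completely different realities.
```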

The Lesson

When you see research that doesn't disclose basic methodology, assume it can't withstand scrutiny. Legitimate researchers are proud of their methods and transparent about limitations. Lack of transparency is usually a sign that deeper examination would reveal problems.


🚩 Red Flag #2: Mean vs. Median - The Financial Data Cardinal Sin

Empower reports "the average salary considered successful is $270,000."

They use the word "average" but never specify whether this is:

  • Mean (arithmetic average - add all responses, divide by number of respondents)

  • Median (middle value when responses are sorted)

  • Mode (most common response)

This matters enormously for financial data.

Why Means Are Problematic for Financial Data

Financial responses are almost always right-skewed - most people give modest numbers, but a few give extremely high numbers that pull the average up.

Example with 10 Gen Z respondents:

Let's say 10 Gen Z people answered "what salary would make you feel financially successful?":

  • 8 people say: $100k, $120k, $150k, $150k, $175k, $180k, $200k, $200k

  • 2 people say: $2 million, $2.6 million

The mean: $587,500 (close to Empower's reported Gen Z figure!)
The median: $177,500 (the middle value)

The headline becomes: "Gen Z thinks they need $587,500 to be successful!"

The reality: 80% of respondents said under $200k, but two outliers pulled the average way up.
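
You can verify the arithmetic yourself. A minimal sketch in Python, using the ten hypothetical responses above:

```python
from statistics import mean, median

responses = [100_000, 120_000, 150_000, 150_000, 175_000,
             180_000, 200_000, 200_000, 2_000_000, 2_600_000]

print(f"mean:   ${mean(responses):,.0f}")    # $587,500
print(f"median: ${median(responses):,.0f}")  # $177,500

# One joke answer is all it takes to break the mean entirely:
responses.append(50_000_000_000)
print(f"mean:   ${mean(responses):,.0f}")    # ~$4,546,000,000
print(f"median: ${median(responses):,.0f}")  # $180,000
```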

Estimating Gen Z Sample Size

Based on US demographics, here's the likely breakdown of Empower's 2,203 respondents:

  • Boomers (60-78 years old): ~21% of adults = ~463 respondents

  • Gen X (44-59 years old): ~20% of adults = ~441 respondents

  • Millennials (28-43 years old): ~27% of adults = ~595 respondents

  • Gen Z (18-27 years old, 18+ only): ~15% of adults = ~330 respondents

If Gen Z's sample is around 330 people, it would only take roughly 30 people giving very high answers (around $4-5 million each) to pull the mean from $200k up to $587k, even if the vast majority answered $100k-250k.
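
Here's the back-of-the-envelope version of that claim (sample sizes assumed, since Empower discloses none):

```python
# How high would 30 outlier answers need to be to drag the mean of a
# 330-person sample from a $200k typical answer up to $587k?
n_total, n_outliers = 330, 30
typical, target_mean = 200_000, 587_000

needed = (target_mean * n_total
          - typical * (n_total - n_outliers)) / n_outliers
print(f"${needed:,.0f} per outlier")  # ~$4,457,000
```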

Why Legitimate Researchers Report Medians for Financial Data

The median is resistant to outliers. If those same 10 respondents also included someone who said "$50 billion" as a joke or misunderstanding, the mean would skyrocket past $4.5 billion, but the median would barely budge, shifting to $180,000 (as the sketch above shows).

For income, wealth, spending, and other financial variables, medians are the standard. The fact that Empower doesn't even specify which measure they used suggests either:

  1. They don't understand basic statistical principles

  2. They deliberately chose the measure that gave them the most shocking number

Neither option inspires confidence.

The Lesson

When you see financial data reported as "average" without specification, assume they're using the mean, and assume the median is substantially lower. If they were using the median (which would be more appropriate), they would say "median" because it sounds more rigorous.


🚩 Red Flag #3: The Question Design Was Aspirational, Not Realistic

Here's the actual question Empower asked:

"Thinking about an annual salary (e.g., the money you earn at your job per year) and an 'all in' dollar amount (e.g., your overall net worth), how much money would it take for you to be financially successful?"

Notice what they're asking:

  • "How much would it take to be successful" (not "comfortable" or "secure")

  • No reality anchoring (not "compared to what you make now" or "realistically in your field")

  • No time frame (is this entry-level, mid-career, peak earnings, retirement?)

  • Combines salary AND net worth in one question (cognitive overload)

The Psychology Problem

"Successful" is an inherently aspirational word. When you ask someone "what would make you successful," you're asking them to imagine an idealized end state.

Compare these questions:

  1. "What salary would make you feel financially successful?"

  2. "What salary would provide a comfortable lifestyle for you?"

  3. "What salary do you expect to earn at the peak of your career?"

These would generate wildly different answers because they're asking different things.

The Age/Life Stage Confound

Now consider who's answering this question:

Gen Z respondents (average age ~22):

  • Likely thinking: "successful" = made it, CEO, influencer, tech startup founder

  • Reference points: social media success stories, tech worker salaries, celebrity wealth

  • Life stage: aspirational, optimistic, inexperienced with real salaries

  • Haven't yet calibrated expectations to reality

Boomer respondents (average age ~70):

  • Likely thinking: "successful" = comfortable retirement, financially secure

  • Reference points: their own lifetime earnings, what their parents made, their peers

  • Life stage: retrospective, experienced, already know what "enough" feels like

  • Have spent decades calibrating expectations to reality

These aren't measuring the same construct. You can't compare a 22-year-old's aspirational dream number with a 70-year-old's retrospective assessment of sufficiency and call it a "generational difference."

The Social Media Amplification Effect

Gen Z has grown up in an era of:

  • Instagram influencers showing luxury lifestyles

  • Tech workers making $300k+ out of college (rare, but highly visible)

  • Crypto millionaires and startup lottery winners

  • "Get rich" content everywhere

  • Constant exposure to extreme wealth displays

When you ask Gen Z "what salary represents success" with no reality anchoring, they're likely anchoring to what they see on social media, not what's actually achievable or necessary.

Research shows Gen Z suffers from "money dysmorphia" - a distorted view of what normal financial lives look like, shaped by social media's highlight reel.

The Lesson

Question wording dramatically affects responses. Aspirational questions generate aspirational answers. If the question allowed for fantastical thinking without reality checks, the answers will be fantastical. This doesn't tell us what people actually need, expect, or even really want.


🚩 Red Flag #4: No Context Questions

A well-designed survey would include calibration questions:

Reality Checks:

  • "What do you currently earn?"

  • "What salary do you expect to earn in 5 years? 10 years? At peak?"

  • "What salary would provide a 'comfortable' lifestyle?"

  • "What salary is realistic in your field/career?"

Definitional Questions:

  • "What does 'financial success' mean to you?" (open-ended)

  • "Which is more important: high income, wealth, free time, or happiness?"

Comparative Questions:

  • "Are you better off than your parents were at your age?"

  • "Do you expect to be more or less successful than your parents?"

Empower asked none of these. This means:

  • We don't know if respondents' answers were tethered to any reality

  • We can't separate fantasy from expectation

  • We can't contextualize what "success" meant to each person

  • We can't control for current income, education, or career stage

The Lesson

Isolated questions without context produce isolated, context-free answers. One data point tells you almost nothing. Patterns across multiple related questions tell you something meaningful.


🚩 Red Flag #5: The Conflict of Interest

Here's what you need to know about Empower:

  • What they are: A for-profit financial services company managing $1.8 trillion in assets

  • How they make money: Retirement account management, investment advisory fees, financial planning services

  • What they need: New clients, especially young clients with decades of investing ahead

Now consider: Who benefits from people believing they need $587k/year and $5.3 million net worth to be "successful"?

The Anxiety Marketing Model

If you believe you need $587k/year to be successful, and you're currently making $60k, you feel:

  • Anxious about your financial future

  • Inadequate in your current situation

  • Desperate for help figuring out how to close the gap

  • Motivated to seek financial advice

This is the foundation of fear-based marketing - the classic FUD (fear, uncertainty, and doubt) technique. Create anxiety, then position yourself as the solution.

The PR Value Calculation

Let's do some math:

Cost of survey:

  • Morning Consult panel survey: ~$15,000-$25,000

  • Report design and publishing: ~$5,000-$10,000

  • Total cost: ~$20,000-$35,000

Value of media coverage:

  • Stories in Fortune, Forbes, Entrepreneur, Axios, hundreds of local news outlets

  • Millions of social media impressions

  • Empower's name mentioned in every story

  • Equivalent advertising value: $2-5 million+

Return on Investment: 100x+

From a marketing perspective, this survey was extraordinarily successful regardless of whether the data is valid.

The Incentive Problem

When a company with financial interests in the results conducts research on financial attitudes, we should ask:

What result would benefit them most?

  • Boring, modest numbers that suggest people are realistic? NO PRESS

  • Shocking, extreme numbers that suggest people are anxious? VIRAL HEADLINES

What result would hurt them most?

  • "Americans feel financially confident and don't think they need much to be successful"

  • This would not generate press and would not drive people to seek financial services

The incentive structure is clear: Shocking results benefit Empower, boring results don't.

This doesn't mean they deliberately manipulated data (though it's possible), but it means we should be extra skeptical of results that align perfectly with their business interests.

The Lesson

Always ask who benefits from you believing the research findings. When corporate-sponsored research produces results that benefit the sponsor's business model, scrutinize the methodology extra carefully. Legitimate research discloses potential conflicts of interest and uses independent review to mitigate bias.


🚩 Red Flag #6: Who Actually Took This Survey?

The survey was "conducted online" through Morning Consult, a panel survey provider. But who actually takes online surveys?

The Panel Survey Model

Morning Consult maintains panels of people who:

  • Sign up to take surveys

  • Receive survey invitations via email

  • Get paid small amounts ($0.50-$2 per survey)

  • Accumulate points toward gift cards

Selection Bias Problems

Who has time to take surveys for $1-2 each?

  • Students

  • Unemployed or underemployed people

  • Retirees

  • People with very flexible schedules

  • People who need/want supplemental income

Who probably doesn't take these surveys?

  • Busy professionals working 50+ hour weeks

  • High earners with limited free time

  • People focused on their careers

  • Anyone who values their time at more than $10-15/hour

The Gen Z Problem

If Empower's Gen Z sample (~330 people) skews toward:

  • College students not yet in the workforce

  • Unemployed Gen Z scrolling social media

  • People in entry-level jobs with lots of free time

Then we're not getting "Gen Z's" opinion - we're getting the opinion of Gen Z people who have enough free time to take $2 surveys.

The successfully employed Gen Z worker making $85k at a tech company? They're probably not spending their evening taking surveys for gift cards.

This could explain why Gen Z's number is so inflated: we may be systematically sampling the subset of Gen Z with the least realistic understanding of actual salaries and career trajectories.

The Lesson

Sample selection matters enormously. Who participates in a study often determines the results more than the questions asked. Always ask: "Who would actually take this survey, and are they representative of the population we're trying to understand?"


🚩 Red Flag #7: The Timing (2 Days in September)

The survey was fielded September 13-14, 2024. That's:

  • 2 consecutive days (Friday-Saturday)

  • 2,203 respondents in 48 hours

  • 1,100+ respondents per day

The Speed Problem

This is fast. Very fast. Which suggests:

  • Limited time for quality control

  • Potential for bots or speeders (people rushing through for payment)

  • No ability to do any follow-up or validation

  • No time to analyze early responses and refine questions

The Weekend Problem

Friday-Saturday sampling introduces bias:

  • Different demographics are available on weekends vs. weekdays

  • People's moods and thinking patterns differ on weekends

  • Financial anxiety might be different when you're not at work

The Snapshot Problem

This is a single two-day snapshot of attitudes. But financial attitudes fluctuate based on:

  • Recent news (stock market crash? housing report?)

  • Personal circumstances (just got paid? just paid rent?)

  • Seasonal patterns (September back-to-school season)

A more rigorous study would:

  • Sample over several weeks to smooth out temporal variation

  • Test different times of day and days of week

  • Possibly repeat at different times of year

The Lesson

Fast, cheap surveys produce fast, cheap data. When you see research conducted in 1-2 days, understand that quality was traded for speed. This isn't always wrong (sometimes speed matters), but it should lower your confidence in the results.


The Knock-On Effect: When Bad Data Breeds Bad Analysis

Once Empower published their report, it spread rapidly. News outlets picked it up (mostly uncritically). Then came the viral LinkedIn posts.

One example making the rounds on LinkedIn constructed an entire moral narrative around the data:

"Gen Z needs $587k to feel financially successful. Boomers need just $99k. Each thinks the other delusional...The gulf between these interpretations is pure arithmetic."

The post went on to:

  • Quote James Truslow Adams on the American Dream

  • List housing costs (claiming 8x income vs. 2.8x for Boomers)

  • Claim college cost "one summer's wages" for Boomers

  • Conclude this represents "history's greatest moral delusion"

  • Frame it as "a species pricing its young out of their own future"

This is what happens when weak data meets emotional rhetoric.

The poster:

  • Took Empower's numbers at face value (no methodology critique)

  • Added cherry-picked historical comparisons (some accurate, some exaggerated)

  • Built a causal narrative (system failure) from correlational data (survey responses)

  • Used emotional language to generate engagement

  • Presented opinion as "arithmetic"

And it worked!

But it's all built on a foundation of questionable data that doesn't actually support the claims being made.

This is the danger: Bad data doesn't stay contained. It gets amplified, emotionalized, and weaponized to support whatever narrative the sharer wants to push.


What Would Good Research Look Like?

Let's imagine Empower actually wanted to understand generational differences in financial expectations. Here's what they should have done:

Better Sample Design

Size and Composition:

  • Larger sample: 5,000+ respondents minimum

  • Clear sample size targets by generation (500+ per generation minimum)

  • Recruit from multiple sources (not just online panels)

  • Include income/employment quotas to ensure working professionals are represented

  • Field over 2-4 weeks to reduce temporal bias

Quality Controls:

  • Attention check questions

  • Speeder detection and removal (see the sketch after this list)

  • Open-ended questions to verify understanding

  • Follow-up interviews with subset of respondents
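
For illustration, a basic speeder filter might look like the sketch below (the `duration_seconds` column name and the 30%-of-median threshold are assumptions - a common rule of thumb, not any vendor's documented procedure):

```python
import pandas as pd

def drop_speeders(df: pd.DataFrame, frac_of_median: float = 0.3) -> pd.DataFrame:
    """Drop respondents who finished faster than some fraction of the
    median completion time (rule of thumb; thresholds vary by vendor)."""
    cutoff = df["duration_seconds"].median() * frac_of_median
    return df[df["duration_seconds"] >= cutoff]
```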

Better Question Design

Multiple Related Questions:

  1. "What annual salary would represent financial SUCCESS to you?" (aspirational)

  2. "What annual salary would provide a COMFORTABLE life for you?" (realistic)

  3. "What annual salary do you EXPECT to earn at the peak of your career?" (prediction)

  4. "What do you currently earn?" (baseline)

  5. "What salary is realistic in your field?" (calibration)

Open-Ended Questions:

  • "What does 'financial success' mean to you?"

  • "What barriers, if any, prevent you from reaching your financial goals?"

  • "Describe your parents' financial situation compared to yours."

Life Stage Context:

  • "Are you currently: employed full-time, part-time, student, unemployed, retired?"

  • "How many years have you been in the workforce?"

  • "What field/industry do you work in?"

Better Statistical Analysis

Report (see the sketch after this list):

  • Both means AND medians (with clear labels)

  • Standard deviations

  • Confidence intervals

  • Sample sizes for each subgroup

  • Distribution visualizations (histograms showing the range of responses)
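
As a sketch of what that disclosure could look like for one generation's responses (hypothetical data; a bootstrap confidence interval for the median is one reasonable choice):

```python
import numpy as np

def summarize(responses: np.ndarray, n_boot: int = 10_000, seed: int = 0) -> dict:
    """Report n, mean, median, standard deviation, and a bootstrap
    95% confidence interval for the median."""
    rng = np.random.default_rng(seed)
    boot_medians = np.median(
        rng.choice(responses, size=(n_boot, len(responses)), replace=True),
        axis=1,
    )
    lo, hi = np.percentile(boot_medians, [2.5, 97.5])
    return {
        "n": len(responses),
        "mean": float(responses.mean()),
        "median": float(np.median(responses)),
        "std": float(responses.std(ddof=1)),
        "median_95ci": (float(lo), float(hi)),
    }
```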

Test:

  • Are differences statistically significant?

  • What happens when you control for current income, education, location?

  • Are there outliers driving the results?

  • How do responses change across the age spectrum within each generation?

Compare:

  • Current Gen Z expectations vs. historical Millennial expectations at same age (longitudinal data)

  • Responses by employment status

  • Responses by income level

  • US vs. international comparisons

Full Transparency

Publish:

  • Complete methodology document

  • All question wording in order

  • Weighting procedures in detail

  • Non-response rates

  • Data cleaning decisions

  • Ideally, anonymized raw data for other researchers to analyze

Independent Review

Include:

  • Peer review by academic researchers

  • Advisory board without financial conflicts

  • Pre-registration of methodology before data collection

  • Third-party analysis verification


The Bottom Line: Hanlon's Razor

"Never attribute to malice that which is adequately explained by incompetence."

We don't know whether Empower:

  • Acted in good faith but lacks statistical expertise (designed a flawed study without realizing it)

  • Prioritized PR over rigor (knew limitations but published anyway for press coverage)

  • Deliberately manipulated methodology to generate shocking numbers (bad faith)

But honestly, it doesn't matter which. The result is the same:

Millions of people now believe something that isn't supported by their research.

They believe Gen Z needs $587k to feel successful. They believe this represents a generational crisis. They believe Boomers are delusional for thinking $99k is enough.

But what their data actually shows is: When you ask a small, non-representative sample of people who take online surveys for money an aspirational question with no reality anchoring, and then report the mean instead of median without showing distributions or controlling for any variables, you get numbers that generate headlines but tell you almost nothing meaningful about what people actually think, need, or expect.

That's less catchy. But it's true.


Final Thoughts

The Empower survey is a perfect case study in how bad data propagates:

  1. Company designs flawed survey (whether intentionally or incompetently)

  2. Results generate shocking headlines (exactly as intended/hoped)

  3. Media reports uncritically (because shocking + data = clicks)

  4. Public accepts at face value (because "research says" sounds authoritative)

  5. Bad data breeds bad analysis (LinkedIn posts, think pieces, policy proposals)

  6. Original flaws are forgotten (narrative becomes "truth")

This happens constantly in our data-saturated world. Most "studies" you see in headlines have similar or worse methodological problems.

The solution isn't cynicism or rejecting all data. It's critical thinking:

  • Demand transparency

  • Ask uncomfortable questions

  • Consider who benefits

  • Look for alternative explanations

  • Understand basic statistics

  • Don't accept conclusions that outrun the evidence

And when you see research that doesn't meet basic standards - whether from Empower, a university, or anyone else - call it out.

Not to be mean. Not to attack people. But because bad data leads to bad decisions, and we all deserve better.

Jason Thompson

Jason Thompson is the CEO and co-founder of 33 Sticks, a boutique analytics company focused on helping businesses make human-centered decisions through data. He regularly speaks on topics related to data literacy and ethical analytics practices, and is the co-author of the analytics children's book 'A is for Analytics'.

https://www.hippieceolife.com/