How to spot when someone cherry-picks data to tell you what they want you to believe.

This side-by-side chart has gone viral as definitive proof that we are not in an AI bubble.

We've seen it used on several subreddits, it's made the rounds on Facebook, and of course it has been shared ad nauseam by LinkedIn thought-leaders. But do the charts, as one poster on LinkedIn put it, make it obvious that we are not in a bubble?

The charts seem legit, although they've been reshared so many times at this point that the image quality has visibly degraded. The numbers seem authoritative. The argument feels data-driven.

But is it obvious?

Let's break down exactly what's going on here, and more importantly, how you can spot similar red flags in any analysis you encounter.

The claim.

“If this were a bubble, we'd see massive cash burn and unsustainable growth. But the numbers show real adoption, real revenue, and real productivity. What matters is net profit growth, something that frankly wasn't there at all in the dot com bubble. This is what separates AI from past 'bubbles'.”

The data points used.

  • NVIDIA: "$130B+ in revenue (nearly 2× since 2019)"

  • NVIDIA: "~75% gross margins"

  • NVIDIA: "50% free cash flow"

  • US labor productivity: "rose 3.3% in Q2 2025"

  • Enterprise adoption: "88% of enterprises now use AI in at least one function"

  • Goldman Sachs: "projects a 15%+ productivity boost from generative AI"

  • AI funding: "> $120B in AI funding last quarter"

  • AI unicorns: "Nearly 500 AI unicorns" with "Total valuation: $2.7T+"

Let’s go…


Red Flag #1: The Visual Comparison Is Fundamentally Misleading

The side-by-side shows Cisco's 2000 price-versus-earnings relationship labeled "This is a bubble!" next to NVIDIA's 2025 relationship labeled "This is not a bubble!"

The implication is clear: Cisco's price decoupled from fundamentals (bad), while NVIDIA's price tracks fundamentals (good).

What's Wrong With This

Problem 1: The Charts Use Different Timeframes and Scales

The Cisco chart shows 1998-2002 (a 4-year period around the bubble peak and crash). The NVIDIA chart shows 2020-2025 (5 years of growth). These aren't comparable periods: one includes the full boom-bust cycle, while the other shows only the boom.

To make an apples-to-apples comparison, you'd need to show:

  • Cisco from 1996-2000 (the boom only)

  • NVIDIA from 2020-2025 (also just the boom)

Or alternatively:

  • Cisco from 1998-2002 (boom + bust)

  • NVIDIA from...we don't know yet, because the bust (if in fact there is one) hasn't happened

The visual tricks your brain into thinking we're looking at complete comparable cycles, when we're actually comparing a complete story (Cisco) to a story still in progress (NVIDIA).
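
If you were building this comparison yourself, the windowing is the whole game. Here's a minimal sketch in Python (pandas), assuming daily closing prices live in two hypothetical CSV files; the point is simply that both series have to be sliced to the same kind of window and indexed to a common base before they're comparable:

```python
import pandas as pd

# Hypothetical files of daily closing prices -- substitute your own data source.
cisco = pd.read_csv("csco_daily.csv", index_col="date", parse_dates=True)["close"]
nvda = pd.read_csv("nvda_daily.csv", index_col="date", parse_dates=True)["close"]

# Option A: boom-only windows for BOTH companies.
cisco_boom = cisco.loc["1996-01-01":"2000-03-27"]  # run-up to the dot-com peak
nvda_boom = nvda.loc["2020-01-01":"2025-11-01"]    # run-up so far

# Option B: a full boom-bust cycle -- only possible for Cisco, because
# NVIDIA's cycle (bust or no bust) hasn't resolved yet.
cisco_cycle = cisco.loc["1998-01-01":"2002-12-31"]

# Index each series to 100 at the start of its window so the scales are
# comparable, instead of letting axis choices do the persuading.
for name, series in [("CSCO boom", cisco_boom), ("NVDA boom", nvda_boom)]:
    indexed = 100 * series / series.iloc[0]
    print(name, f"starts at 100, ends at {indexed.iloc[-1]:.0f}")
```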

Problem 2: Cisco DID Have Strong Earnings Growth During Its Run-Up

Here's what the post doesn't tell you: Cisco's earnings were growing explosively during its rise, too.

From the research we’ve done:

  • Cisco's revenue grew from $12.5B in 1998 to $19B in 2000

  • Cisco's net income was $2.7B in 1999 and growing

  • The company had strong fundamentals throughout the boom

The decoupling shown in the chart happened at the very END of the bubble, not throughout it. For most of Cisco's run-up, earnings and price moved together, just like NVIDIA today.

The visual makes it look like Cisco was always overvalued, when actually price tracked strong fundamentals right up until it didn't.

Problem 3: Both Companies Traded at Extreme Valuations

What the analysis omits:

  • Cisco traded at 220x earnings at its peak (2000)

  • NVIDIA currently trades at 118x earnings (as of late 2023 data)

NVIDIA's multiple is lower, yes. But it's still historically extreme. Is 118x "reasonable"? That's a judgment call, but it's certainly not conservative. For context:

  • The S&P 500's long-term average P/E is around 15-20x

  • Even high-growth tech companies typically trade at 30-50x

  • NVIDIA is trading at roughly 6x the market average, or 2-4x typical high-growth valuations

The chart makes you think NVIDIA's valuation is fundamentally different, when it's really just "less insane than Cisco's absolute peak."
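
If you want to pressure-test the "roughly 6x" framing yourself, the arithmetic fits in a few lines (a quick sketch using the figures above):

```python
cisco_peak_pe = 220   # Cisco's P/E at its March 2000 peak
nvda_pe = 118         # NVIDIA's P/E, per the figure cited above
sp500_avg_pe = 17.5   # midpoint of the 15-20x long-term range
growth_tech_pe = 40   # midpoint of the 30-50x high-growth range

print(f"NVIDIA vs. market average:   {nvda_pe / sp500_avg_pe:.1f}x")    # ~6.7x
print(f"NVIDIA vs. high-growth tech: {nvda_pe / growth_tech_pe:.1f}x")  # ~3.0x
print(f"NVIDIA vs. Cisco's peak:     {nvda_pe / cisco_peak_pe:.0%}")    # ~54%
```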

The Lesson

Visual comparisons are powerful precisely because they bypass critical thinking. When you see side-by-side charts drawing opposite conclusions, ask:

  • Are the timeframes truly comparable?

  • Are the scales consistent?

  • What data is being omitted?

  • Is one chart showing a complete cycle while the other is mid-cycle?

A good comparison shows complete, comparable time periods with transparent methodology. This comparison does neither.


Red Flag #2: The NVIDIA Revenue Figure Is Accurate But Misleading

"In 2025 alone, NVIDIA did $130B+ in revenue (nearly 2x since 2019)

This number is real: NVIDIA's fiscal 2025 revenue was $130.5B. (The "nearly 2x since 2019" framing is off, though; that's roughly double fiscal 2024 revenue, and more than ten times fiscal 2019's, as the trajectory below shows.)

What's Missing

Context 1: This Is Fiscal Year Data, Not Calendar Year

NVIDIA's fiscal 2025 ended in January 2025. So "$130B+ in revenue" actually spans February 2024-January 2025, not "2025 alone" as the analysis suggests. This isn't necessarily dishonest, but it's imprecise in a way that makes recent growth sound more dramatic.

Context 2: The Revenue Surge Is Extremely Recent

Here's NVIDIA's revenue trajectory:

  • FY2019: ~$11B

  • FY2020: ~$11B

  • FY2021: ~$17B

  • FY2022: ~$27B

  • FY2023: ~$27B

  • FY2024: ~$61B

  • FY2025: ~$131B

The massive jump happened in just two years. From FY2023 to FY2025, revenue increased 4.8x.
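
You can verify the 4.8x figure, and see just how extreme the implied growth rate is, directly from the reported numbers:

```python
fy2023_revenue = 27.0    # $B, reported
fy2025_revenue = 130.5   # $B, reported

multiple = fy2025_revenue / fy2023_revenue            # ~4.8x in two years
cagr = (fy2025_revenue / fy2023_revenue) ** 0.5 - 1   # two-year compound annual rate

print(f"FY2023 -> FY2025: {multiple:.1f}x, or {cagr:.0%} compounded annually")
```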

This is...not normal. Even for a revolutionary technology. For comparison:

  • Apple's iPhone revenue grew rapidly but never quintupled in two years

  • Amazon's AWS grew fast but took a decade to reach comparable scale

  • Even Cisco's fastest growth period saw revenue roughly double over two years, not quintuple

Context 3: The Growth Is Driven by Capital Expenditure, Not End-User Adoption

NVIDIA's customers are primarily:

  • Microsoft

  • Amazon

  • Google

  • Meta

  • Oracle

These five "hyperscalers" account for the vast majority of NVIDIA's data center revenue. And all five are in a massive capital expenditure race, collectively spending over $200B annually on AI infrastructure.

But here's the key question the analysis doesn't address: are these companies buying chips because they have proven AI businesses generating revenue and profit, or because they're afraid of being left behind?

The answer appears to be largely the latter. As of November 2025:

  • None of the hyperscalers has broken out profitable AI revenue at scale

  • Most AI products are still being offered at a loss or break-even

  • The business model for AI is still being figured out

NVIDIA's revenue growth is real, but it's being driven by other companies' capital expenditures, which are speculative bets on future returns. This is fundamentally different from revenue driven by proven end-user demand.

Context 4: This Exact Pattern Happened with Cisco

Remember what happened in the dot-com era? Telecom companies and ISPs spent hundreds of billions building internet infrastructure, buying equipment from companies like Cisco. Cisco's revenue surged. Then the telecom companies realized they'd overbuilt capacity, cut spending dramatically, and Cisco's revenue growth stopped.

Does this pattern sound familiar?

The Lesson

Real revenue doesn't automatically mean sustainable revenue. When evaluating a company's growth, ask:

  • Is this revenue from end users or from other companies' CapEx?

  • Are customers buying because they have a proven business case or because of FOMO?

  • Is the growth rate sustainable or explosive-then-questionable?

  • Have comparable growth rates in the past been followed by corrections?

Revenue is a lagging indicator. It tells you what happened, not what will continue to happen.


Red Flag #3: Cherry-Picking Metrics That Support the Narrative

The analysis highlights NVIDIA's "~75% gross margins" and "50% free cash flow" as evidence of health.

These numbers are accurate. They're also...carefully selected.

What's Being Emphasized

Gross margin: The percentage of revenue left after subtracting direct costs of goods sold. NVIDIA's 73-75% gross margins are indeed impressive and indicate strong pricing power.

Free cash flow: NVIDIA does generate substantial cash flow, which is a sign of operational health.

These are legitimate positive indicators.

What's Being Omitted

Valuation multiples: As mentioned, NVIDIA trades at 118x earnings. Even with strong margins and cash flow, you can dramatically overpay for a good business.

Historical comparison: Cisco also had strong margins during its run-up. From the research we did, Cisco maintained solid profitability throughout the boom. Strong operational metrics didn't prevent an 89% price collapse.

Customer concentration risk: The analysis doesn't mention that NVIDIA's growth is dependent on a handful of customers continuing massive spending programs. If hyperscalers slow their CapEx, NVIDIA's growth collapses.

Competition: AMD, Intel, and custom chips from Google, Amazon, and others are all competing for AI chip share. NVIDIA's dominance isn't guaranteed forever.

Cyclicality: Semiconductor markets are notoriously cyclical. NVIDIA's current boom could be followed by a multi-year downturn, as has happened repeatedly in chip history.

The Pattern

The analysis cherry-picks metrics that support the "everything is great" narrative while ignoring metrics that would complicate the story.

This is a classic persuasion technique: flood the audience with real-but-selective data, creating the impression of comprehensive analysis while actually presenting a one-sided case.

The Lesson

When someone presents a data-rich argument, look for what's missing. Ask:

  • What contrary metrics would I want to see?

  • Are there standard metrics for this type of analysis that aren't being shown?

  • Is this a balanced assessment or a case for one side?

  • Would an equally knowledgeable person emphasizing different metrics reach the opposite conclusion?

Incomplete data isn't wrong; it's just incomplete. And incomplete analysis leads to overconfident conclusions.


Red Flag #4: The Productivity Claim Is Real But Lacks Crucial Context

The analysis states that "U.S. labor productivity rose 3.3% in Q2 2025."

This is accurate. According to the Bureau of Labor Statistics, nonfarm business sector labor productivity did increase 3.3% in Q2 2025 (annualized rate).

What's Missing

Context 1: This Is One Quarter of Volatile Data

Labor productivity bounces around quarter to quarter. Looking at recent trends:

  • Q1 2025: -1.5% (productivity decreased)

  • Q2 2025: +3.3% (productivity increased)

So in the span of six months, we went from the first productivity decline since Q2 2022 to a strong gain. This kind of volatility is normal and doesn't indicate a trend.

Context 2: The Increase Isn't Attributed to AI

The post implies the productivity gain is evidence of AI's impact. But:

  1. The BLS report doesn't attribute the gain to any specific cause

  2. AI adoption in Q2 2025 was still relatively early (Census data shows ~9.7% of firms using AI as of August 2025)

  3. Productivity gains from new technologies typically take years to show up in aggregate statistics

A historical lesson here: electricity, computers, and the internet all took 20+ years after initial deployment to show up meaningfully in productivity statistics. The idea that AI is already boosting national productivity after ~2 years of widespread awareness is...optimistic at best.

Context 3: Year-Over-Year Gain Is More Modest

The 3.3% figure is the quarterly annualized rate (meaning if this quarter's pace continued for four quarters, annual productivity would rise 3.3%). But the year-over-year figure is more modest: productivity increased 1.5% from Q2 2024 to Q2 2025.
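
To make the distinction concrete, here's the conversion between the two figures (a quick sketch; 3.3% and 1.5% are the BLS numbers cited above):

```python
annualized_q2 = 0.033   # Q2 2025 quarter-over-quarter change, annualized
year_over_year = 0.015  # Q2 2024 -> Q2 2025 change

# BLS annualizes by compounding one quarter's change four times, so the
# single-quarter change implied by "3.3% annualized" is the fourth root:
implied_quarterly = (1 + annualized_q2) ** 0.25 - 1

print(f"Implied quarter-over-quarter change: {implied_quarterly:.2%}")  # ~0.8%
print(f"Actual year-over-year change:        {year_over_year:.2%}")     # 1.50%
```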

1.5% annual productivity growth is fine. It's decent. But it's not revolutionary. The long-term historical average since 1947 is around 2.1%, and the current business cycle average is 1.8%.

So the "proof" of AI's transformative impact is productivity growth that's slightly below the historical average?

Context 4: The Strongest Quarter Was Driven by Non-AI Factors

According to the BLS data, the Q2 2025 productivity gains were driven by:

  • A 4.4% increase in output

  • A 1.1% increase in hours worked

What drove the output increase? Primarily:

  • Manufacturing productivity gains (2.5%)

  • A decrease in imports (which mathematically increases GDP)

  • Increased consumer spending

None of the BLS analysis attributes gains to AI specifically. This is standard economic expansion productivity, not evidence of a technological revolution.
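
As a sanity check, labor productivity is just output per hour worked, so the reported components reconcile with the headline number on their own, no AI term required:

```python
output_growth = 0.044  # annualized output increase, per BLS
hours_growth = 0.011   # annualized hours-worked increase, per BLS

# Productivity growth is output growth net of hours growth.
productivity_growth = (1 + output_growth) / (1 + hours_growth) - 1
print(f"Implied productivity growth: {productivity_growth:.1%}")  # ~3.3%
```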

The Lesson

A single data point can be made to support almost any narrative depending on the context provided. When someone cites economic statistics, ask:

  • Is this one data point or a trend?

  • How volatile is this metric historically?

  • What's causing the change (if known)?

  • Are we looking at annualized, quarterly, or year-over-year figures?

  • How does this compare to historical averages?

Economic data is noisy. One good quarter doesn't prove anything.


Red Flag #5: The "88% Enterprise Adoption" Claim Is Likely False

The analysis claims that, "88% of enterprises now use AI in at least one function."

This number appears nowhere in official statistics. Let us show you what the actual data says.

What the Real Data Shows

US Census Bureau (August 2025): 9.7% of US firms use AI in production

McKinsey (2025): 72-78% of organizations use AI in some capacity

Anthropic (2025): ~10% adoption in regular business use

Stanford AI Index (2025): 78% of organizations used AI in 2024

So where does "88%" come from? The analysis doesn't cite a source, but it appears to be one of three things:

  1. A misremembering or exaggeration of the ~78% figure

  2. A reference to a specific industry or company-size segment (not "enterprises" generally)

  3. Made up

Why This Matters

There's a massive difference between:

  • 9.7% use AI in production (Census Bureau's definition: actually using AI in business processes)

  • 78% have experimented with or used AI in some form (McKinsey/Stanford's definition: includes pilots, one-off tests, individual employees using ChatGPT)

  • 88% claim (appears to be unsourced)

Let us give you an analogy. If we say:

  • "88% of people have tried sushi" - this might be true in some demographics

  • "88% of people regularly eat sushi" - definitely false

  • "88% of people's diets are primarily sushi" - absurd

The analysis is making a claim equivalent to the second statement while citing data that, at best, supports the first. There's a world of difference between "tried AI once" and "uses AI in at least one business function."

What Adoption Actually Looks Like

The real picture from multiple sources:

  • ~10% of firms use AI regularly in production

  • Adoption is heavily concentrated in tech, finance, and professional services

  • Most firms experimenting with AI have 1-2 pilots, not scaled deployments

  • The gap between experimentation and production use is enormous

This is still rapid growth! AI adoption has more than doubled in two years. But we're in early innings, not late innings as the analysis implies.

The Lesson

Unsourced statistics should be treated as opinion, not fact. When someone makes a numerical claim, ask:

  • Where does this number come from?

  • How is the key term defined? (what counts as "use"?)

  • Is there a more authoritative source with a different number?

  • Could this be a misremembering or exaggeration of real data?

If you can't verify a statistic, assume it's wrong until proven otherwise.


Red Flag #6: The Goldman Sachs Projection Is Real But Highly Uncertain

The analysis states, "Goldman Sachs projects a 15%+ productivity boost from generative AI."

This is accurate but presented without the caveats Goldman Sachs themselves included.

What Goldman Actually Said

Goldman Sachs economist Joseph Briggs did project that generative AI could boost US labor productivity by ~15% over a 10-year period if widely adopted.

But here's what the analysis omits:

Caveat 1: "If Widely Adopted"

Goldman's projection assumes widespread adoption. From their research:

  • Current adoption: ~10% of firms

  • Projected adoption for productivity gains: substantially higher

  • Timeline: gains expected to materialize from 2027 onward, peaking in early 2030s

So the 15% figure requires:

  • Continued development of AI capabilities

  • Broad adoption across industries

  • Successful integration into workflows

  • Sustained investment

…all of which are uncertain.

Caveat 2: "Over 10 Years"

15% productivity growth over 10 years equals ~1.4% annual productivity growth, which would be good but not miraculous. The analysis makes it sound like a massive leap, when it's actually describing a meaningful but moderate improvement IF everything goes right.
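
The arithmetic, spelled out:

```python
cumulative_boost = 0.15  # Goldman's projected 10-year cumulative boost
years = 10

annual_rate = (1 + cumulative_boost) ** (1 / years) - 1
print(f"Implied annual productivity growth: {annual_rate:.2%}")  # ~1.41%
```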


Caveat 3: Goldman Is Not Incorporating This Into Their Baseline Forecasts

From Goldman's own research: "uncertainty around both [AI capabilities and adoption] is sufficiently high that we are not incorporating our findings into our baseline economic forecasts at this time."

In other words, Goldman thinks AI COULD boost productivity significantly, but they're not confident enough to bet their official forecasts on it.

Caveat 4: Early Case Studies Are Not Proof of Economy-Wide Gains

The analysis and Goldman cite case studies showing 25-30% productivity boosts in specific applications. But:

  • These are best-case scenarios with motivated early adopters

  • Selection bias: companies that see gains report them; those that don't, don't

  • Individual gains don't always scale to economy-wide gains (see: the productivity paradox)

  • Early studies consistently overestimate long-term impacts of new technologies

Historical Comparison

We've been here before. In the 1980s and 1990s, economists predicted massive productivity gains from computers. The gains took 15-20 years longer than expected to materialize, and were smaller than initially projected.

Nobel economist Robert Solow famously quipped in 1987, "You can see the computer age everywhere but in the productivity statistics." It took until the late 1990s for computer-driven productivity gains to clearly appear in the data.

AI could be different. Or it could follow the same pattern.

The Lesson

Projections are not facts, especially when the projecting entity explicitly says they're not confident enough to include them in forecasts. When someone cites expert projections, ask:

  • What assumptions underlie the projection?

  • How confident is the source?

  • What's the timeframe?

  • Have similar projections for previous technologies proven accurate?

  • Is this a best-case, worst-case, or expected-case scenario?

Use projections to inform thinking, not to prove conclusions.


Red Flag #7: The Funding Numbers Are Real But Don't Prove What You Think

The analysis claims, "> $120B in AI funding last quarter" and "Nearly 500 AI unicorns" with "Total valuation: $2.7T+."

These numbers are roughly accurate (Q3 2025 global VC funding was $120.7B total, with AI dominating). But what does this actually tell us?

What High Funding Levels Actually Indicate

Here's what investment levels tell you:

  • ✅ Investors believe AI has potential

  • ✅ Capital is flowing into the sector

  • ✅ Competition for AI talent and resources is intense

Here's what investment levels DON'T tell you:

  • ❌ Whether the investments will generate returns

  • ❌ Whether current valuations are justified

  • ❌ Whether we're in a bubble

The Historical Problem

High investment levels are a feature of bubbles, not evidence against them.

During the dot-com boom:

  • VC investment peaked at ~$100B annually (adjusted for inflation: ~$180B in today's dollars)

  • Hundreds of internet companies achieved billion-dollar valuations

  • Investors were absolutely convinced the internet was transformative (they were right!)

  • And then 90% of the companies went bankrupt and investors lost trillions

The issue wasn't that the internet wasn't important. The issue was that:

  1. Too much capital chased too few viable business models

  2. Valuations got disconnected from realistic cash flows

  3. Many companies were building for a future that would take 10-20 years to arrive

  4. Even good ideas failed because they were too early

High investment + transformative technology + excited investors ≠ sustainable valuations

In fact, peak investment often marks the top of bubbles, not their midpoint. When everyone is piling in, when funding is unlimited, when valuations are soaring—that's often when you're closest to the top.

The Unicorn Problem

The analysis mentions, "Nearly 500 AI unicorns" with "$2.7T+ total valuation."

Let's do some math. $2.7T / 500 companies = $5.4B average valuation per unicorn.

Questions this raises:

  • How many of these companies have sustainable business models?

  • How many are profitable?

  • How many will exist in 5 years?

  • How many will justify their current valuations?

During the dot-com boom, there were also hundreds of unicorns. Most no longer exist. Being highly valued doesn't mean being correctly valued.

Recent Concerning Signals

What the analysis omits:

  • Several AI companies are burning cash at extraordinary rates

  • OpenAI is reportedly losing $5B annually despite $3.5B in revenue

  • Many AI companies have business models that assume future capabilities and adoption

  • Multiple prominent investors and analysts (including Goldman Sachs analysts) have recently questioned whether AI valuations have run ahead of fundamentals

From a recent Goldman Sachs equity research report (November 2025): The market has already priced in approximately $19 trillion of AI value creation, which is at "the upper limit of projected macro benefits" and "well ahead of the macro impact."

So even Goldman Sachs, whose economist projections the analysis cites, has equity analysts warning that the market may be overheated.

The Lesson

Capital enthusiasm is not proof of value. In fact, peak enthusiasm often coincides with peak risk. When evaluating funding levels, ask:

  • How does current investment compare to historical bubbles?

  • Are investors discriminating between good and bad opportunities, or funding indiscriminately?

  • What percentage of funded companies are actually profitable?

  • Are valuations based on current cash flows or speculative future scenarios?

High funding levels tell you about sentiment, not about outcomes.


Red Flag #8: The "Net Profit Growth" Argument Misunderstands the Dot-Com Bubble

The analysis states, "What matters is net profit growth, something that frankly wasn't there at all in the dot com bubble."

This is historically inaccurate and reveals a fundamental misunderstanding of what made the dot-com bubble a bubble.

What Actually Happened in the Dot-Com Era

Many successful companies had strong profit growth during the bubble:

Cisco:

  • 1999: $2.7B net income

  • 2000: Peak valuation with continued growth

  • The company was profitable and growing throughout the boom

Microsoft:

  • Highly profitable throughout the entire period

  • Still collapsed 50% from peak

Intel:

  • Massively profitable

  • Still lost 80% from peak valuations

Oracle, Sun Microsystems, EMC:

  • All profitable, all crashed 70-90%

The issue wasn't that companies lacked profits. The issue was that valuations got disconnected from realistic future cash flows.

The Real Lesson from the Dot-Com Era

The dot-com bubble wasn't "profitable companies good, unprofitable bad." It was "valuations that require perfect execution for 20 years are risky regardless of current profitability."

Many profitable companies crashed because:

  • They traded at 100-200x earnings

  • Growth rates couldn't be sustained

  • Competition emerged

  • Customers cut back spending

  • Business models evolved differently than expected

Sound familiar?

Current AI Market Characteristics

Let's look at NVIDIA specifically:

  • Trading at 118x earnings

  • Customers are other companies making massive speculative bets

  • Growth rate is unsustainable (you can't quintuple revenue every two years forever)

  • Increasing competition from AMD, Intel, and custom chips

  • Dependent on customers continuing to spend massively despite unclear ROI

NVIDIA absolutely has "net profit growth." It's also trading at valuations that require perfect execution for years to justify current prices.

These aren't mutually exclusive conditions. You can have a profitable, growing company that's also overvalued.

The Lesson

Profitability doesn't prevent bubbles; overvaluation causes bubbles. A company can be:

  • ✅ Real

  • ✅ Revolutionary

  • ✅ Profitable

  • ✅ Growing

  • AND simultaneously ❌ Overvalued

Cisco was all of those things in 2000. It still crashed 89%. And it took 24 years to get back to its 2000 highs despite quadrupling its earnings.
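
The arithmetic of that recovery shows where the damage came from. It wasn't the business; it was the multiple:

```python
drawdown = 0.89          # Cisco's peak-to-trough collapse
peak_pe = 220            # Cisco's P/E at the 2000 peak
earnings_multiple = 4.0  # earnings roughly quadrupled over the recovery

# A price that falls 89% has to gain 1 / (1 - 0.89), about 9.1x, just to break even.
gain_to_recover = 1 / (1 - drawdown)
print(f"Gain needed just to revisit the peak: {gain_to_recover:.1f}x")

# If the price only returns to its old high while earnings are 4x larger,
# the valuation multiple has compressed to a quarter of its peak:
print(f"Implied P/E at recovery: ~{peak_pe / earnings_multiple:.0f}x")  # ~55x
```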


Red Flag #9: The "Speculative Pre-Product Bets Are Shrinking" Claim Contradicts the Data

The analysis states, "Speculative pre-product bets are shrinking. Growth-stage, revenue-driven AI is booming."

This is directly contradicted by the actual funding data from the same quarter the post cites.

What the Data Actually Shows

According to KPMG's Q3 2025 VC report:

  • GenAI accounted for 60% of AI funding in 2025

  • 90% of GenAI funding went to mega-rounds of $250M+

  • The largest deals were for foundational model development:

    • Anthropic: $13B

    • xAI: $10B

    • Reflection AI: $1B

    • Cohere: $600M

    • MiniMax AI: $300M

These are exactly the kinds of "speculative pre-product bets" the post claims are shrinking.

OpenAI, Anthropic, and xAI are still losing billions annually. They're being funded based on belief in future capabilities and future business models, not on current profitability.

Meanwhile, many AI application companies (the "revenue-driven" category) are struggling to raise follow-on rounds because they can't demonstrate clear paths to profitability.

The Real Shift

What's actually happening:

  • Massive capital is flowing to a few foundation model companies (the most speculative bets)

  • Smaller capital is available for AI applications (where ROI needs to be clearer)

  • Traditional tech companies are struggling to raise (capital is being redirected to AI)

This is not "speculative bets shrinking"; it's "all capital concentrating on the most speculative sector."

The Lesson

Check claims against the actual source data. When someone makes a claim that contradicts readily available information, one of three things is happening:

  1. They haven't read their own sources carefully

  2. They're hoping you won't check

  3. They're defining terms in unconventional ways

Always verify bold claims against primary sources.


The Real Lesson

The most important lesson isn't about AI specifically. It's about critical thinking:

Confident arguments with impressive-looking data can still be deeply flawed.

In an era of information abundance, the scarce resource isn't data; it's the ability to think critically about data. To ask:

  • What's missing?

  • Who benefits?

  • What would I need to see to change my mind?

  • Am I being persuaded or informed?

The next time you see a viral post with professional charts and authoritative statistics, remember:

  • Visuals can be manipulated

  • Context can be omitted

  • Metrics can be cherry-picked

  • Conclusions can be predetermined

And the solution isn't cynicism; it's healthy skepticism paired with intellectual humility.

Be willing to say: "I don't know."

Be willing to ask: "How do you know?"

Be willing to admit: "The data is consistent with multiple interpretations."

Because the alternative is being swept along by whoever makes the most confident claims with the prettiest charts—which is exactly how bubbles form, whether we're in one now or not.

Jason Thompson

Jason Thompson is the CEO and co-founder of 33 Sticks, a boutique analytics company focused on helping businesses make human-centered decisions through data. He regularly speaks on topics related to data literacy and ethical analytics practices and is the co-author of the analytics children’s book ‘A is for Analytics’.

https://www.hippieceolife.com/