Why Your Salesforce Reports Are Lying to You

I’ve worked inside a lot of Salesforce environments. The reporting is almost always wrong, and it’s almost never Salesforce’s fault.

MARCH 2026  ·  SALESFORCE  ·  REPORTING  ·  DATA QUALITY

Here’s the version of this conversation I have most often: a VP of Sales or a RevOps lead tells me their Salesforce reports have been wrong for a while. Not subtly wrong — obviously, embarrassingly wrong. Pipeline numbers that don’t match what the reps say. Win rates that change depending on who runs the report. Revenue figures that can’t be reconciled with finance. The team has stopped trusting the data. Leadership has stopped looking.

The assumption, usually, is that Salesforce is the problem. They bought the wrong tool, or they need to upgrade, or they need to configure it differently. My job, in the first conversation, is almost always to reframe that.

Salesforce is not the problem. Salesforce is a container. What’s in the container is the problem.

What Salesforce Was Actually Designed to Do

Salesforce is a CRM. It was designed to manage relationships — contacts, accounts, opportunities, activities. It does that well. What it was not designed to do is serve as a reliable analytics layer at scale.

The reporting features inside Salesforce — and there are a lot of them — are built on top of live transactional data. Every report you run is a query against the CRM database as it exists right now. There’s no separation between the data you’re reporting on and the data your reps are actively editing. There’s no transformation layer. There’s no enforced data model. There are filters and field selections and summary formulas, but underneath all of that, you’re pointing at raw CRM records and hoping they reflect reality.

When the underlying records are clean and consistently maintained, this works fine. When they’re not — and in most organizations I work with, they’re not — no amount of report configuration fixes the output.

The Five Problems I See Most Often

Duplicate records polluting pipeline metrics. Salesforce makes it easy to create duplicate accounts, contacts, and opportunities. It does not make it easy to find and merge them. In a sales environment with moderate turnover and no duplicate prevention rules, you will accumulate duplicates. Those duplicates inflate your pipeline, distort your conversion metrics, and create situations where the same deal appears to have closed multiple times. I’ve seen pipeline reports inflated by 30% or more because of duplicates that had been sitting there for two years.
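To make the duplicate problem concrete, here is a minimal sketch of the kind of grouping query that surfaces them in a CRM extract. The table, the field names, and the crude name normalization are all illustrative assumptions, not Salesforce's actual object model:

```python
import sqlite3

# Toy CRM extract. "Acme Corp" and "Acme Corp." are the same customer,
# but an exact-match report would count the deal (and its amount) twice.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE opportunity (
        id TEXT PRIMARY KEY,
        account_name TEXT,
        name TEXT,
        amount REAL
    )
""")
conn.executemany(
    "INSERT INTO opportunity VALUES (?, ?, ?, ?)",
    [
        ("006A", "Acme Corp",  "Acme Renewal 2025", 50000),
        ("006B", "Acme Corp.", "Acme Renewal 2025", 50000),  # near-duplicate
        ("006C", "Globex",     "Globex Expansion",  30000),
    ],
)

# Normalize account names before grouping; without it, the two Acme rows
# never land in the same group and the duplicate stays invisible.
dupes = conn.execute("""
    SELECT REPLACE(LOWER(account_name), '.', '') AS norm_account,
           name,
           COUNT(*)    AS copies,
           SUM(amount) AS reported_pipeline
    FROM opportunity
    GROUP BY norm_account, name
    HAVING COUNT(*) > 1
""").fetchall()
```

Here `dupes` contains one row showing the Acme deal counted twice, with a combined 100k of "pipeline" that is really one 50k deal. Real deduplication needs fuzzier matching than stripping periods, but the shape of the problem is the same.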

Custom fields used inconsistently by different reps. Someone created a “Deal Source” field eighteen months ago. Half the team fills it out. The other half doesn’t. Of the half that fills it out, three reps use “Inbound” to mean something different from the other four. Now you run a source attribution report and the numbers are meaningless — not because the field is wrong, but because nobody agreed on what it meant before it went live.


Stage definitions that vary by rep or region. “Pipeline” means whatever the person running the report thinks it means. If your “Proposal Sent” stage means different things to your East and West Coast teams — one uses it when the deck is sent, the other uses it when the contract is sent — your pipeline report is combining two different things into one number. This is almost never documented. It surfaces when you dig into individual deals and start asking why.

Reports that miss records closed before the filter date. This is a subtle one that causes real problems. Say you’re running a pipeline report filtered to opportunities created in the last six months. An opportunity was created eight months ago, moved to Closed Lost five months ago, and then re-opened last month. Depending on how the report is configured, it may or may not appear. Standard Salesforce reports do not always behave intuitively when close dates, created dates, and stage changes interact with each other. You end up with reports that look right but are silently missing records that should be included.
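The re-opened-deal scenario above reduces to a few lines. The records and field names here are hypothetical, but the filter behavior is the point: a created-date window and a current-state filter answer different questions, and only one of them matches what "pipeline" usually means.

```python
from datetime import date, timedelta

today = date(2026, 3, 1)

# Two open deals. The first was created ~8 months ago, closed lost,
# then re-opened last month -- it is live pipeline today.
opps = [
    {"name": "Reopened deal", "created": today - timedelta(days=240),
     "is_open": True},
    {"name": "New deal", "created": today - timedelta(days=30),
     "is_open": True},
]

# The common report filter: opportunities created in the last six months.
cutoff = today - timedelta(days=180)
by_created = [o for o in opps if o["created"] >= cutoff and o["is_open"]]

# The created-date filter silently drops the re-opened deal.
assert [o["name"] for o in by_created] == ["New deal"]

# Filtering on current state instead includes everything that is open now.
by_state = [o for o in opps if o["is_open"]]
assert [o["name"] for o in by_state] == ["Reopened deal", "New deal"]
```

Both filters "look right" in the report builder; only one of them includes the deal your rep is actively working.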

“Last activity” and “created date” filters that exclude active accounts. A filter that shows only accounts with activity in the last 90 days sounds reasonable. What it actually does is exclude every account your best customers haven’t touched in three months — which, in an enterprise sales environment, is most of them. Reports filtered on activity dates are useful for some things and actively misleading for others. The problem is that which reports use these filters is rarely documented, so the exclusions are invisible.

Why You Can’t Fix This Inside Salesforce

The instinct is to fix reporting problems by improving the reports — better filters, more complex formulas, joined report types. This works up to a point. But Salesforce’s report builder is not a data modeling tool. It doesn’t let you define metrics centrally. It doesn’t enforce consistent logic across reports. It doesn’t give you a single place to say “this is what win rate means, and every report that shows win rate uses this definition.”

What you get instead is a library of reports each built by a different person, each making slightly different assumptions, each correct in isolation and collectively unreliable. Finance built theirs for one purpose. Sales built theirs for another. RevOps has a third set. They all show different numbers for the same thing, and the answer to “which one is right” is usually “it depends on what question you’re asking” — which is not a useful answer when the CEO is asking about Q2 performance.

What the Actual Fix Looks Like

The solution is to stop treating Salesforce as your analytics layer and start treating it as your data source. Extract the data to a proper analytics environment, define your metrics outside of Salesforce’s filter logic, and build your dashboards on top of clean, modeled data.

This does not have to be expensive. You don’t need a data warehouse with a six-figure implementation budget. A basic SQL layer — even a simple extract to a cloud database like BigQuery or Postgres — gives you what you need: the ability to define joins, transformations, and metric logic in code, version-controlled, applied consistently across every downstream report.

When “win rate” is defined in one place as a SQL calculation and every dashboard that shows win rate uses that calculation, you stop having conversations about whose number is right. The number is just the number. If it’s wrong, you fix the definition once and every report updates.
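As a sketch of what “defined in one place” means in practice, here is a win-rate definition expressed as a single SQL view that every downstream dashboard would query. SQLite stands in for the warehouse, and the table and stage names are assumptions, not a prescribed schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE opportunity (id TEXT, stage TEXT);
    INSERT INTO opportunity VALUES
        ('1', 'Closed Won'), ('2', 'Closed Lost'),
        ('3', 'Closed Won'), ('4', 'Negotiation');

    -- The one, version-controlled definition:
    -- win rate = won / (won + lost), open deals excluded.
    -- Change it here and every report that reads this view updates.
    CREATE VIEW win_rate AS
    SELECT CAST(SUM(stage = 'Closed Won') AS REAL)
           / SUM(stage IN ('Closed Won', 'Closed Lost')) AS rate
    FROM opportunity;
""")

rate = conn.execute("SELECT rate FROM win_rate").fetchone()[0]
```

With two wins, one loss, and one open deal, `rate` is 2/3. The point is not the arithmetic; it is that the decision to exclude open deals lives in exactly one place instead of being re-made, slightly differently, inside every report builder session.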

This is what I mean when I talk about an analytics rebuild. It’s not a Salesforce project. It’s a data project that uses Salesforce as one of its inputs.

The “Our Salesforce Admin Will Fix It” Objection

I hear this one regularly. The answer is: maybe, for some of it.

A good Salesforce admin is excellent at data quality hygiene. Deduplication rules, validation rules, field-level help text, process automation that enforces consistent data entry — all of that is admin territory and a good admin will make a real difference on it. If your problem is primarily a data quality problem, start there.

But data quality hygiene is not the same as analytics architecture. Knowing how to configure Salesforce is not the same as knowing how to build a metrics layer that produces reliable numbers at scale. Most Salesforce admins — even very good ones — have not designed a dimensional data model or written the SQL to transform CRM exports into a clean analytics schema. That’s a different skill set, and it’s not fair to expect the same person to have both.

If your Salesforce admin is telling you they can fix the reporting, ask them specifically: how will you ensure that win rate means the same thing across every report? What’s the plan for handling historical data when stage definitions change? If they have concrete answers, great. If they look uncertain, that’s useful information.

A Self-Diagnostic: Is Your Problem Fixable Inside Salesforce?

Three questions that will help you figure out whether your reporting problems can be solved within Salesforce or need to be addressed at a different layer:

  • Can two people run the same report and get the same number? If not, and the discrepancy isn’t explained by different filter choices, you have a data modeling problem that Salesforce can’t fix.
  • Do your key metrics have agreed-upon definitions that are written down somewhere? If “pipeline” or “win rate” or “time to close” means different things to different people, no amount of report configuration will produce consistent output.
  • When a number looks wrong, can anyone explain why? If the answer is always “we’re not sure,” the problem isn’t a missing filter — it’s that the data isn’t modeled in a way that supports reliable analysis.

If you answered no to two or three of these, the problem is almost certainly not fixable inside Salesforce. You need to pull the data out, clean it up, and build your reporting on top of something designed for analytics rather than relationship management.

If you’re based in Ohio or the Midwest and want to understand the options, I’ve written more about what this type of work looks like from a regional BI consulting perspective. And if you want to understand the full rebuild process, the analytics rebuild service page goes into more detail on what that engagement actually involves.

The short version: bad Salesforce reporting is almost always a solvable problem. It just usually requires solving it at a layer below Salesforce.

If your reporting is wrong, let’s figure out where it’s breaking.

Book a 30-minute call. I’ll tell you within the first 15 minutes whether the problem is fixable and what fixing it would actually involve.

Book a 30-Minute Call