How to Build a High-Performing and Data-Driven Inside Sales Team at Scale with WalkMe’s VP of Sales, Aliisa Rosenthal (Video)

Inside sales used to be the “phone team.” Today, it’s the revenue engine that can reach more buyers, move faster, and scale without turning your budget into a bonfire.
The catch? You don’t scale inside sales by yelling “MORE CALLS!” into the void. You scale it by building a system where data tells you what’s working, what’s leaking,
and what to fix next; then you coach and compensate the team to match reality.

In a popular video conversation featured by SaaStr, Aliisa Rosenthal (WalkMe’s VP of Sales at the time) lays out a practical, no-magic-wand approach:
collect clean data, analyze the funnel, run disciplined experiments, and drive behavior through compensation that matches business priorities.
That’s the backbone. This article turns it into a complete “build it, run it, scale it” blueprint you can apply whether you’re hiring your first SDRs or managing a global inside sales org.

What “High-Performing” Actually Means (Spoiler: It’s Not Just Quota)

A high-performing inside sales team is one that hits goals predictably. Not “hero quarter then chaos quarter.” Predictability comes from:

  • Repeatable pipeline creation (not luck-based lead flow)
  • Consistent conversion at each stage (with known, fixable bottlenecks)
  • Healthy deal quality (wins that renew and expand)
  • Reliable forecasting (so Finance doesn’t develop trust issues)
  • Fast ramp + durable retention (because churny teams can’t scale)

Performance is the result. The cause is a system: clear process, clean data, tight coaching, and incentives that don’t accidentally reward the wrong behavior.

Why Data-Driven Inside Sales Wins at Scale

Inside sales generates a huge volume of “truth signals”: outreach touches, connect rates, discovery quality, demo outcomes, follow-up speed, pricing conversations,
stage movement, sales cycle length, win/loss reasons, renewal health; the list goes on. When you track the right signals, you stop guessing.
You can pinpoint exactly where momentum dies and fix it with targeted changes.

Data-driven doesn’t mean “spreadsheet cosplay.” It means your team can answer, quickly and confidently:
Where are we winning? Where are we leaking? What should we do next week that makes next month better?

Video Playbook: The Four Moves Aliisa Rosenthal Emphasizes

1) Collect Data (Because You Can’t Improve What You Don’t Measure)

The first scaling mistake is trying to “optimize” with missing or messy data. A robust CRM is the foundation, but your CRM alone won’t tell the full story.
You need a stack that captures the customer journey end-to-end:

  • CRM for pipeline stages, activity logging, account history, and lifecycle tracking
  • Conversation intelligence for what actually happened on calls (objections, talk/listen balance, key topics, next steps)
  • Forecasting / revenue platform for trend signals, risk, and consistency across managers

The real goal isn’t “more tools.” It’s one source of truth with shared definitions (what counts as a qualified opportunity, what “commit” means, what stage exit criteria are).
Garbage data scales beautifully… into a garbage forecast.

2) Analyze Your Funnel (Find the Leaks Before You Buy More Leads)

Once data is flowing, you map your funnel like an engineer, because funnels don’t have “vibes,” they have conversion rates.
Break your pipeline into stages with clear entry/exit criteria, then measure:

  • Lead → Qualified: Is Marketing sending the right people, or just… people?
  • Qualified → Discovery Held: Are reps getting meetings, or collecting “maybe next quarter” as a hobby?
  • Discovery → Next Step: Are discoveries real, or are they demos wearing a trench coat?
  • Proposal → Close: Is pricing shock killing deals late, or is value unclear early?
  • Close → Renewal: Are you selling outcomes or selling features that never get adopted?

Add time-based diagnostics: stage aging, deal velocity, and how long it takes to get from first meeting to a clear “yes/no.”
When you know where deals stall, your coaching becomes specific instead of motivational-poster-based.
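The stage math described above is simple enough to sketch in a few lines of Python. Everything here is invented for illustration: the deal records, stage names, and dates are not from any real CRM.

```python
from datetime import date

# Illustrative deal records: each maps stage name -> date the deal entered it.
deals = [
    {"id": 1, "stages": {"qualified": date(2024, 1, 2), "discovery": date(2024, 1, 9)}},
    {"id": 2, "stages": {"qualified": date(2024, 1, 3)}},
    {"id": 3, "stages": {"qualified": date(2024, 1, 5), "discovery": date(2024, 1, 20)}},
    {"id": 4, "stages": {"qualified": date(2024, 1, 8), "discovery": date(2024, 1, 15)}},
]

def stage_conversion(deals, from_stage, to_stage):
    """Share of deals that entered from_stage and later reached to_stage."""
    entered = [d for d in deals if from_stage in d["stages"]]
    advanced = [d for d in entered if to_stage in d["stages"]]
    return len(advanced) / len(entered) if entered else 0.0

def stage_age_days(deal, stage, today):
    """How long a deal has been sitting in a given stage (stage aging)."""
    return (today - deal["stages"][stage]).days

print(stage_conversion(deals, "qualified", "discovery"))         # 0.75
print(stage_age_days(deals[1], "qualified", date(2024, 1, 24)))  # 21
```

The point isn’t the code; it’s that once stages have real entry dates, “where do deals stall?” becomes a query, not an opinion.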

3) Experiment (Use Your Sales Cycles Like a Lab)

Inside sales has a built-in advantage: volume. More cycles mean more opportunities to test changes quickly and learn what’s causal (not coincidental).
But experimentation only works if you keep it disciplined:

  • Pick one variable (call length, qualification rules, pricing timing, demo format)
  • Define the success metric (stage conversion, ASP, sales cycle, win rate)
  • Run it long enough to avoid “we tried it for three days and vibes were off” conclusions
  • Document results so new hires don’t re-learn old lessons the hard way

A simple example highlighted in the WalkMe story: changing how the intro conversation is run (and what it’s called) can materially change conversion.
The broader lesson is bigger than one tactic: treat your motion as a product. Ship improvements. Measure adoption. Iterate.
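For checking whether an experiment’s lift is more than noise, a rough two-proportion z-test is usually enough for sales-ops purposes. The conversion counts below are made up; the rule of thumb is that |z| above roughly 1.96 suggests significance at the 95% level.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Rough z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: old intro call (40/200 converted). Variant: restructured intro (60/200).
z = two_proportion_z(conv_a=40, n_a=200, conv_b=60, n_b=200)
print(round(z, 2))  # 2.31
```

A test like this is also a cheap guard against the “we tried it for three days” failure mode: with small samples, z stays small no matter how good the vibes were.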

4) Drive Behavior with Incentives (Comp Plans That Match Your Strategy)

If your compensation plan rewards speed, reps will optimize for speed. If it rewards deal size, reps will optimize for big deals.
If it rewards “meetings booked,” you’ll get meetings booked, some of which will be tragically unqualified.

The strongest compensation plans are:

  • Simple (reps can explain it without a whiteboard)
  • Aligned to the company’s current priorities
  • Stable enough to build habits, but flexible enough to evolve year to year
  • Balanced so you don’t accidentally punish the right behavior (like careful qualification)

The Scaling Blueprint: Build the System Behind the Team

Step A: Nail Your Org Design Before You Add Headcount

Scaling inside sales isn’t just “hire more reps.” It’s deciding how work flows through the organization.
Common high-performing patterns include:

  • SDR/BDR + AE: SDRs generate and qualify; AEs run deals to close
  • Pods: SDR + AE + Solutions/CS partner per segment (great for focus and accountability)
  • Segment specialization: SMB velocity motion vs mid-market consultative vs enterprise multi-threaded

The trick is to keep handoffs crisp. Every handoff is a chance for context to die. Use templates, call clips, and standardized notes so the customer feels continuity,
not a sales relay race where the baton keeps getting dropped.

Step B: Define the Metrics That Matter (and Ban the Rest from Meetings)

Data-driven does not mean “track everything.” It means tracking what predicts results and informs action.
A clean metric stack usually includes:

Core Outcome Metrics (the scoreboard)

  • Quota attainment (team + rep)
  • Win rate by segment and source
  • Average sales price (ASP) and discounting trends
  • Sales cycle length and deal velocity
  • Renewal and expansion health (if applicable)

Funnel Health Metrics (the engine diagnostics)

  • Stage-to-stage conversion (where deals leak)
  • Pipeline coverage (enough “at-bats” to hit the number)
  • Stage aging (where deals go to nap indefinitely)
  • Response speed to inbound and late-stage buyer questions

Activity Quality Metrics (the behavior drivers)

  • Connect rate and meeting set rate (by channel)
  • Discovery quality (are reps diagnosing outcomes or rushing to demo?)
  • Call effectiveness signals like question depth, customer talk time, and next-step clarity

The rule: if a metric doesn’t lead to a decision, it doesn’t belong in your weekly review.
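As one concrete example of a metric that leads directly to a decision, pipeline coverage is plain arithmetic. The figures below are invented, and the implied coverage target is only a back-of-envelope heuristic; the right number depends on your historical win rate.

```python
def pipeline_coverage(open_pipeline, quota):
    """Dollars of open pipeline per dollar of quota."""
    return open_pipeline / quota

def required_coverage(win_rate):
    """Back-of-envelope coverage target implied by historical win rate."""
    return 1 / win_rate

# Invented quarter: $3.6M of open pipeline against a $1.2M team quota.
print(pipeline_coverage(3_600_000, 1_200_000))  # 3.0
print(required_coverage(0.25))                  # 4.0 -> 3x coverage is thin here
```

The decision it drives: if coverage is below the win-rate-implied target, the conversation this week is pipeline generation, not deal inspection.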

Step C: Build a Coaching System That Scales (Not Just “Manager Instinct”)

Great managers coach. Scalable organizations systematize coaching so it doesn’t depend on one heroic leader listening to 60 calls a week.
A scalable coaching system typically includes:

  • Scorecards for discovery, demo, and negotiation (clear expectations)
  • Call libraries of “gold standard” examples by segment
  • Weekly deal reviews focused on risk, next steps, and mutual action plans
  • Monthly skill focus (objection handling, multithreading, value proof)
  • 1:1s that use data (conversion and activity quality, not just pipeline narration)

A practical benchmark many teams use from conversation analytics: top reps often create a balanced dialogue, asking strong questions, letting buyers talk,
and keeping the call from turning into a TED Talk nobody requested.
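The talk/listen signal that conversation-intelligence tools report is easy to reason about in miniature. The call segments below are a toy example, not real transcript data.

```python
def talk_listen_ratio(segments):
    """Rep's share of total talk time from (speaker, seconds) call segments."""
    rep_time = sum(seconds for speaker, seconds in segments if speaker == "rep")
    total_time = sum(seconds for _, seconds in segments)
    return rep_time / total_time

# Toy transcript timing: who spoke and for how long, in seconds.
call = [("rep", 120), ("buyer", 180), ("rep", 60), ("buyer", 240)]
print(talk_listen_ratio(call))  # 0.3
```

A rep at 0.3 is mostly listening; a rep at 0.8 is delivering the TED Talk. The useful coaching move is comparing this number across won and lost deals, not chasing one magic ratio.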

Step D: Make Forecasting Boring (Boring Forecasting Is a Flex)

If your forecast feels like a suspense thriller, something is broken. Forecast accuracy improves when:

  • Stages have strict exit criteria (no “proposal sent” if it was a vague email attachment)
  • Managers inspect evidence (call notes, mutual plans, customer signals) instead of accepting optimism
  • Pipeline coverage is healthy so one slip doesn’t wreck the quarter
  • Risk is visible early (stalled deals, single-threading, no business case, no timeline)

The mindset shift: forecasting isn’t a once-a-month ritual. It’s the output of daily pipeline hygiene plus consistent deal inspection.
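One way to make the forecast boring is to weight open deals by stage probabilities derived from historical conversion, not rep optimism. The stage probabilities, deal names, and amounts below are all illustrative.

```python
# Hypothetical win probabilities per stage, derived from historical conversion.
STAGE_WIN_PROB = {"discovery": 0.10, "proposal": 0.35, "negotiation": 0.60, "commit": 0.90}

open_deals = [
    {"name": "Acme",    "stage": "proposal",  "amount": 40_000},
    {"name": "Globex",  "stage": "commit",    "amount": 25_000},
    {"name": "Initech", "stage": "discovery", "amount": 60_000},
]

def weighted_forecast(deals, probs):
    """Expected value of the open pipeline, weighted by stage win probability."""
    return sum(d["amount"] * probs[d["stage"]] for d in deals)

print(weighted_forecast(open_deals, STAGE_WIN_PROB))  # roughly 42500.0
```

This is deliberately mechanical: the model is only as honest as the stage exit criteria behind it, which is exactly why strict criteria come first on the list above.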

Step E: Treat Enablement Like a Product (Onboarding, Certification, Iteration)

Scaling requires fast ramp. Fast ramp requires onboarding that teaches how you sell, not just what you sell.
Strong onboarding programs typically include:

  • ICP + personas: who you win with, who you avoid, and why
  • Messaging: outcomes, pains, triggers, and proof points
  • Process: stage definitions, qualification, next steps
  • Skill drills: discovery role plays, objection sparring, negotiation practice
  • Certification: reps “earn” production by demonstrating competence

Then you improve onboarding using the same philosophy you use for the sales funnel: measure where new reps struggle, update the program, repeat.

A Practical Example: Turning a Messy Funnel into a Predictable Machine

Imagine a mid-market SaaS company with 12 AEs and 6 SDRs. The CEO says, “We need to double inside sales this year.” The CRO hears, “Hire 10 more reps.”
The RevOps leader hears, “Please no.”

A data-driven approach might look like this:

  1. Week 1–2: Clean the foundation. Standardize stages, enforce required fields, and define what “qualified” means.
    Set up dashboards for stage conversion, stage aging, and pipeline coverage.
  2. Week 3–4: Find the leak. Discovery-to-demo conversion is strong, but demo-to-proposal is weak.
    Call reviews show reps are jumping into features too early and not confirming decision criteria.
  3. Month 2: Run one experiment. Introduce a structured discovery checklist and require a “problem + impact + success metrics” recap
    before scheduling a demo. Track demo-to-proposal conversion.
  4. Month 3: Align incentives. Add a small quality kicker tied to qualified stage advancement (not just meetings booked),
    and reward clean, verified next steps.
  5. Quarter 2: Scale what works. Roll the discovery framework into onboarding, add call library examples, and hire into the now-proven motion.

The result isn’t just better performance; it’s confidence. You’re not scaling guesses. You’re scaling a tested system.

Common Scaling Mistakes (So You Can Avoid Them Like a Pro)

  • Hiring ahead of process: Headcount won’t fix unclear stages, weak qualification, or messy handoffs.
  • Measuring activity without quality: High activity with low conversion is just fast failure.
  • Overcomplicating comp: If reps need a calculator to understand it, it won’t drive consistent behavior.
  • Using tools without governance: Tools amplify your system, good or bad.
  • Skipping the “why” in coaching: “Do more” isn’t coaching. Specific feedback is.

Conclusion: Scale the System, Not the Chaos

Building a high-performing inside sales team at scale is less about charismatic pep talks and more about operational design:
collect trustworthy data, analyze the funnel like a scientist, experiment like a product team, and align incentives to the outcomes you want.
The WalkMe-focused video playbook is powerful because it’s grounded: it assumes growth comes from iteration and clarity, not mystery.

If you do this well, your inside sales org becomes the rarest thing in business: a growth engine that’s both fast and predictable.
And yes, Finance will finally stop flinching when you say the word “forecast.”


Real-World Experiences: What Scaling Actually Feels Like in Practice

The funny thing about “scaling inside sales” is that it’s often pitched like a tidy diagram: add reps, add pipeline, add revenue. In reality, scaling feels more like
renovating a house while you’re still living in it. The kitchen works… unless you open the wrong cabinet, and then a pan falls on your foot. (Ask me how I know.
Actually, don’t; I’m still emotionally recovering.)

Experience #1: The discovery call that saved a quarter. One mid-market team noticed their pipeline looked healthy, but deals were stalling after demos.
The immediate temptation was to blame pricing or competition. Instead, they pulled call recordings and found something simpler: reps were treating “discovery” as a warm-up demo.
Buyers left calls impressed by features, but unclear on outcomes. The fix wasn’t a new script; it was a new habit. Reps had to summarize the customer’s problem,
quantify impact, and confirm success metrics before scheduling a demo. Conversion improved, and forecasting became dramatically less dramatic. The best part?
The reps reported less burnout because they stopped dragging unqualified deals through the pipeline like a suitcase with a broken wheel.

Experience #2: The comp plan that accidentally rewarded chaos. Another team built an incentive around “meetings booked” for SDRs.
Meetings skyrocketed. Everyone celebrated. Then AEs quietly began treating SDR meetings like spam: lots of volume, low intent.
The SDR team wasn’t “bad”; they were rational. They optimized for what got paid. The leadership team reworked incentives to include a quality gate:
meetings counted only when they met ICP criteria and advanced to a qualified opportunity. Predictably, meeting volume dipped… and pipeline quality soared.
The lesson: comp plans are not a spreadsheet. They’re behavioral design.

Experience #3: The day dashboards changed coaching forever. A sales manager used to spend Mondays in “pipeline storytelling theater,”
listening to reps narrate deals with heroic optimism. When the team implemented consistent stage definitions and a simple set of funnel dashboards,
the conversation changed. Instead of “I feel good about this account,” it became “This deal has been stuck in evaluation for 21 days, we only have one champion,
and we haven’t confirmed timeline.” Coaching turned from motivational to surgical. Reps started asking for feedback earlier because they could see risk before it exploded.

Experience #4: The experiment that made onboarding 10x easier. A growing inside sales org struggled with ramp time because every manager taught the job differently.
Leadership decided to run experiments like a product team: they tested one onboarding change per month (call library clips, certification role plays,
and a simple 30-60-90 plan with weekly skill checks). Each change was measured against time-to-first-qualified-opportunity and early-stage conversion.
Within a few cycles, onboarding became consistent, new hires felt less lost, and managers reclaimed hours previously spent repeating the same fundamentals.

Experience #5: The “tools won’t save you” moment. A team bought a shiny new tech stack to become “data-driven.”
Adoption was low, dashboards were ignored, and forecast calls were still chaotic. The breakthrough wasn’t another tool; it was governance:
shared definitions, required fields, stage exit criteria, and a weekly rhythm that forced the team to use the system.
Once the operating cadence matched the tools, the tools finally did what they promised.

These experiences all point to the same truth: scaling isn’t adding. It’s clarifying. When the system is clear, the team can move fast without breaking trust,
breaking process, or breaking everyone’s spirit during end-of-quarter season.