
Windsor MCP for Sales: Pipeline Review, Forecasting, and Rep Performance


Most sales teams do not have a data problem. They have a data access problem.

The answers to your biggest sales questions already exist in your CRM: which deals are at risk, which reps are outperforming, which lead sources close, why deals are being lost, and what revenue is likely to land this quarter.

The challenge is that this data is scattered across reports, CRM objects, spreadsheets, and disconnected tools like HubSpot, Salesforce, Pipedrive, and GoHighLevel.

Windsor MCP solves this by connecting your CRM and sales data directly to ChatGPT, Claude, and other AI tools — without exports, SQL, or manual reporting.

🚀 Start your free trial of Windsor.ai and connect your sales data to AI in less than a minute → https://onboard.windsor.ai/.

This guide covers the most common and high-value ways sales teams are using Windsor MCP, with ready-to-use prompts for each.

How Windsor connects your sales data to AI

⚙️ How to set up Windsor MCP for your CRM: Connect your CRM & sales platforms at onboard.windsor.ai, then follow the Windsor MCP setup guide for the AI platform you use. Setup takes just minutes, no code required.

1. Connect your data to Windsor.ai

Connect Salesforce, HubSpot, Pipedrive, or GoHighLevel to Windsor through a secure authorization flow. Windsor automatically pulls CRM data across deals, leads, contacts, accounts, activities, and pipeline stages, then normalizes it into a format ready for AI analysis.

You can also connect multiple sources at once. Blend CRM data with ad spend, marketing performance, and revenue data to ask questions that span the full funnel, from first touch to closed deal.

2. Connect Windsor MCP to your AI tool

Connect your Windsor data to Claude, ChatGPT, or another AI tool through Windsor MCP. Setup instructions for every platform are here.

3. Ask sales questions in plain language

No SQL. No pivot tables. No waiting for a RevOps ticket. Ask natural-language questions like “Which deals haven’t been touched in two weeks?” or “Which rep has the highest win rate this quarter?” and get insightful, data-backed answers in seconds.

The prompts below show exactly how to query your data effectively across the most common sales use cases.

10 ways sales teams use Windsor MCP for AI analytics (plus prompts)

1. Run a weekly pipeline health check

The challenge:

Most pipeline reviews are slow, manual, and outdated before the meeting even starts. Sales managers spend time pulling reports, updating spreadsheets, and asking reps for status updates instead of focusing on the deals that actually need attention.

The data needed for a useful pipeline review already exists in the CRM: deal volume by stage, stalled opportunities, close rates, deal velocity, aging pipeline, and rep performance. The problem is that it is spread across multiple reports and objects, making it difficult to get a clear picture quickly.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data connected via Windsor, run a pipeline health check for all open deals.

Show me:
- Total open pipeline value and deal count by stage
- Weighted pipeline value (deal amount × close probability) by stage
- Deals expected to close this month vs. next month, by value
- Deals with no recorded activity in the last 14 days; list them by owner, deal value, and current stage
- Any deal where the expected close date has passed but the deal is still open

Break down stalled deals by owner so I can see which reps have the most at-risk pipeline.

Format the output as a summary table followed by a stalled deals list, ordered by deal value descending.

What you’ll get:

A pipeline summary table showing stage distribution and weighted value, followed by a prioritized stalled deals list with owner, value, stage, and days since last activity, ready to use as the meeting agenda.
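If you want to sanity-check the weighted-pipeline arithmetic the prompt relies on, it reduces to a simple aggregation. The sketch below is illustrative only; the deal records, stage names, and probabilities are invented, not Windsor's actual schema:

```python
from collections import defaultdict

# Hypothetical deal records; a real CRM pull would include many more fields.
deals = [
    {"stage": "Discovery",   "amount": 20_000, "probability": 0.20},
    {"stage": "Proposal",    "amount": 50_000, "probability": 0.60},
    {"stage": "Negotiation", "amount": 30_000, "probability": 0.80},
    {"stage": "Proposal",    "amount": 10_000, "probability": 0.60},
]

pipeline = defaultdict(lambda: {"count": 0, "open": 0.0, "weighted": 0.0})
for d in deals:
    row = pipeline[d["stage"]]
    row["count"] += 1
    row["open"] += d["amount"]
    # Weighted value = deal amount × close probability, as in the prompt.
    row["weighted"] += d["amount"] * d["probability"]
```

The AI runs the equivalent of this grouping over your live CRM data; the point of the prompt is that you never have to write it yourself.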

2. Build a revenue forecast 

The challenge:

Most sales forecasts are based on guesswork. Reps often overestimate their deals, managers apply rough discounts to the pipeline, and the final number ends up being more opinion than analysis.

A better forecast should be based on real CRM data: how often deals in each stage actually close, how long they typically take, how much pipeline is aging, and which reps consistently over- or underperform. That produces a forecast grounded in actual deal behavior, not optimism.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, build a revenue forecast for the next 90 days.

For all open deals with a close date in the next 90 days:
- Calculate weighted revenue per deal (deal amount × close probability)
- Group by month: this month, next month, month after
- For each month, show total deal count, total unweighted pipeline, and total weighted pipeline

Segment deals into confidence tiers:
- High confidence: close probability above 70%
- Medium confidence: 40–70%
- Low confidence: below 40%

For the high-confidence tier, flag any deal with no activity in the last 7 days; these are at risk of slipping despite their current probability.

Also flag any deal where the close date is within 14 days but the deal is in an early pipeline stage (likely to push).

Output as a monthly forecast table by confidence tier, followed by at-risk deal flags and a 2-sentence forecast summary.

What you’ll get:

A month-by-month weighted revenue table broken down by confidence tier, with specific deal flags for the deals most likely to slip, giving sales leaders a realistic number and a list of conversations to have before the month closes.
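The tiering and weighting logic described above is straightforward to express in code. This is a hypothetical sketch with invented field names, not a real CRM schema; the tier boundaries mirror the prompt (above 70% high, 40–70% medium, below 40% low):

```python
def confidence_tier(probability):
    """Map a close probability to the tiers used in the prompt."""
    if probability > 0.70:
        return "high"
    if probability >= 0.40:
        return "medium"
    return "low"

def forecast_by_tier(deals):
    """Sum weighted revenue (amount × probability) per (month, tier)."""
    table = {}
    for d in deals:
        key = (d["close_month"], confidence_tier(d["probability"]))
        table[key] = table.get(key, 0.0) + d["amount"] * d["probability"]
    return table

# Invented example deals closing in the next two months.
deals = [
    {"close_month": "2025-07", "amount": 40_000, "probability": 0.80},
    {"close_month": "2025-07", "amount": 25_000, "probability": 0.50},
    {"close_month": "2025-08", "amount": 60_000, "probability": 0.30},
]
forecast = forecast_by_tier(deals)
```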

3. Compare rep performance across win rate, deal size, and cycle length

The challenge:

Total closed revenue is the most visible rep metric, and one of the least useful for coaching. A rep closing many small deals quickly and a rep closing large deals slowly can produce similar revenue numbers while needing completely different support.

Understanding rep performance requires looking at win rate, average deal size, sales cycle length, and activity volume simultaneously. Most CRM dashboards show these metrics in isolation rather than as a combined picture.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, compare sales rep performance for deals created or closed in the last 90 days.

For each deal owner, calculate:
- Total deals created
- Total deals won and win rate (%)
- Total closed revenue (won deals only)
- Average deal size (won deals)
- Average sales cycle length in days (from deal creation to closed-won)
- Total recorded activities (calls, emails, meetings) in the period
- Activity-to-close ratio: activities per closed deal

Rank reps by win rate and by total revenue separately.

Flag any rep where:
- Win rate is more than 15 percentage points below the team average
- Average sales cycle is more than 30% longer than the team average
- Activity count is in the top 25% but win rate is below average (high effort, low conversion; possible coaching opportunity)

Output as a rep performance table, followed by a 3-sentence summary of the standout patterns and the most important gaps to address.

What you’ll get:

A ranked rep scorecard that separates volume from quality, giving managers a specific, data-backed agenda for their next 1:1s.
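To make the per-rep metrics concrete, here is an illustrative calculation of the ones listed in the prompt. The deal records are made up, and a real CRM export would carry many more fields:

```python
def rep_metrics(deals):
    """Per-rep metrics: each deal has a won flag, amount, and cycle_days
    (days from deal creation to close)."""
    won = [d for d in deals if d["won"]]
    n_won = len(won)
    return {
        "deals": len(deals),
        "win_rate": n_won / len(deals),
        "revenue": sum(d["amount"] for d in won),
        "avg_deal_size": sum(d["amount"] for d in won) / n_won if n_won else 0,
        "avg_cycle_days": sum(d["cycle_days"] for d in won) / n_won if n_won else 0,
    }

# Hypothetical 90-day deal history for one rep.
rep_a = rep_metrics([
    {"won": True,  "amount": 10_000, "cycle_days": 30},
    {"won": True,  "amount": 14_000, "cycle_days": 40},
    {"won": False, "amount": 50_000, "cycle_days": 90},
    {"won": False, "amount": 20_000, "cycle_days": 60},
])
```

Computed this way per owner, the metrics separate volume from quality: the same revenue total can hide very different win rates and cycle lengths.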

4. Diagnose why deals are being lost

The challenge:

Lost deal analysis is one of the highest-value activities a sales team can do, and one of the least consistently done.

Most CRMs capture a loss reason field. Almost nobody aggregates it systematically, which means the same pricing objections, competitor losses, and ghosting patterns repeat quarter after quarter without ever being addressed.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, analyze closed-lost deals from the last 90 days.

For all deals marked as closed-lost in the period:
- Group by loss reason and calculate: deal count, total lost revenue value, and average deal size per reason
- Break down each loss reason by pipeline stage where the deal was lost (early-stage losses and late-stage losses have very different implications)
- Break down loss reasons by deal owner: are certain reps losing to the same reason more than others?
- Identify the single loss reason responsible for the most total lost revenue

For deals lost at a late stage (past the midpoint of the pipeline):
- What was the average deal size?
- How long had they been in the pipeline?
- Were there any activity gaps in the 14 days before the deal was marked lost?

Output as a loss reason table ranked by total lost revenue, a stage-level breakdown, and a 3-sentence summary of the top patterns that, if addressed, would have the highest revenue recovery impact.
Suggest an action plan for resolving the most common loss reasons and winning more deals.

What you’ll get:

A clear breakdown of why deals are being lost, including the most common loss reasons by stage, rep, and segment, making it easier to identify coaching opportunities, process gaps, and patterns that need attention.

5. Identify which lead sources actually produce closed revenue

The challenge:

Marketing reports on lead volume. Sales reports on closed revenue. Neither team typically has a clean view of which lead sources produce deals that actually close: at what value, at what win rate, and with what sales cycle length.

This gap leads to budget being spent on channels that generate activity but not revenue, and underinvestment in channels that reliably produce the best customers.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, analyze which lead sources produce the most revenue, not just the most leads.

For each lead source recorded in the data, calculate:
- Total contacts or leads created in the last [SPECIFY: e.g., 6] months
- Number that progressed to an open opportunity or deal
- Number that converted to closed-won
- Lead-to-close conversion rate (%)
- Total closed revenue from this source
- Average deal size from this source
- Average sales cycle length from first touch to close

Rank sources by total closed revenue (highest to lowest).

Flag any source where:
- Lead volume is high but close conversion rate is more than 30% below the average (quantity without quality)
- Close conversion rate is above average but volume is low (high efficiency source worth scaling)
- Average deal size is more than 25% below the average (lower-value customers regardless of volume)

Output as a source performance table followed by a 3-sentence summary identifying the highest-quality acquisition channels and the biggest gaps between lead volume and actual revenue.

What you’ll get:

A source-to-revenue table that shows which channels are producing real pipeline and which are generating noise; the input needed to align marketing spend with sales outcomes rather than treating them as separate metrics.

6. Spot activity patterns that predict closed deals

The challenge:

Every sales team has a theory about what drives results: more calls, faster follow-up, more meetings, multi-threading accounts. But these theories are rarely tested against actual deal data.

Knowing which specific activities, at which frequency, are statistically correlated with closed deals is more valuable than any playbook written from intuition.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, analyze the relationship between sales activity patterns and deal outcomes over the last [SPECIFY: e.g., 6] months.

For all deals closed (won and lost) in the period, retrieve the associated activity records and calculate:

For closed-won deals:
- Average total activities per deal
- Average number of meetings, calls, and emails separately
- Average time from first activity to close
- Average time between activities (response cadence)
- Average number of contacts engaged per deal (multi-threading signal)

For closed-lost deals:
- The same metrics as above

Compare the two groups:
- Which activity metrics show the largest difference between won and lost deals?
- Is there a minimum activity threshold below which deals almost never close?
- Do deals with faster average response cadence close at a higher rate?

Identify the activity profile of the top 20% of deals by close rate and describe it in plain language (e.g., "Deals that closed had an average of X meetings and first activity within Y days of creation").

Output as a won vs. lost activity comparison table, followed by a plain-language description of the winning activity profile that can be used as a sales coaching standard.

What you’ll get:

A data-backed description of the activity patterns that predict closed deals, giving sales managers a specific standard to coach toward and giving reps a benchmark to measure their own behavior against.
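The core of the won-vs-lost comparison is averaging each activity metric within each outcome group and looking at the gaps. A minimal sketch, with invented numbers and field names:

```python
from statistics import mean

def group_averages(deals, fields):
    """Average each named activity metric across a group of deals."""
    return {f: mean(d[f] for d in deals) for f in fields}

# Hypothetical activity rollups per closed deal.
won = [
    {"activities": 18, "meetings": 4, "contacts": 3},
    {"activities": 22, "meetings": 5, "contacts": 4},
]
lost = [
    {"activities": 6,  "meetings": 1, "contacts": 1},
    {"activities": 10, "meetings": 2, "contacts": 1},
]

fields = ["activities", "meetings", "contacts"]
won_avg = group_averages(won, fields)
lost_avg = group_averages(lost, fields)
# The metrics with the largest won-vs-lost gap are the strongest signals.
gaps = {f: won_avg[f] - lost_avg[f] for f in fields}
```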

7. Find deals that are about to slip and intervene early

The challenge:

Many deals do not suddenly disappear. They start showing warning signs weeks before they slip: close dates move out, activity slows down, no meetings are booked, or decision-makers stop responding.

These signals already exist in the CRM, but they are rarely visible early enough to act on. By the time a deal is marked as at risk in the forecast, there is often little time left to recover it.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, identify deals that show early signs of slipping, before they show up as problems in the forecast.

Flag any open deal that meets one or more of these criteria:
- No activity recorded in the last 10 days, and the close date is within 30 days
- Close date is within 14 days but the deal is in a stage that typically takes longer than 14 days to close
- Close date has been pushed at least once, and no activity has been recorded since the push
- Deal has been in the same stage for more than [SPECIFY: e.g., 21] days without advancing

For each flagged deal, show:
- Deal name and owner
- Current stage and deal value
- Days since last activity
- Close date and whether it has been pushed previously
- Recommended action: Immediate outreach / Stage review / Re-qualification

Rank flagged deals by value (highest first).

Output as a slip-risk watchlist with a recommended action column, formatted for direct use as a pre-call review list.

What you’ll get:

A ranked list of at-risk deals with the context needed to understand why they may slip and what action should be taken next.
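The slip-risk criteria in the prompt are just boolean rules over a handful of deal fields. This sketch encodes three of them; the field names are hypothetical, and the stalled-stage threshold matches the prompt's example of 21 days:

```python
from datetime import date

def slip_risk_flags(deal, today, stalled_days=21):
    """Evaluate illustrative slip-risk rules against one open deal."""
    days_to_close = (deal["close_date"] - today).days
    flags = []
    if deal["days_since_activity"] > 10 and days_to_close <= 30:
        flags.append("no recent activity before a near-term close")
    if deal["close_pushed"] and deal["no_activity_since_push"]:
        flags.append("close date pushed with no follow-up since")
    if deal["days_in_stage"] > stalled_days:
        flags.append("stalled in stage")
    return flags

# A made-up deal that trips all three rules.
deal = {
    "close_date": date(2025, 6, 20),
    "days_since_activity": 12,
    "close_pushed": True,
    "no_activity_since_push": True,
    "days_in_stage": 25,
}
flags = slip_risk_flags(deal, today=date(2025, 6, 1))
```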

8. Connect ad spend to closed revenue for true marketing attribution

The challenge:

Marketing measures ROAS based on attributed conversions in the ad platform. Sales measures pipeline and closed revenue in the CRM. The two numbers almost never reconcile because they are measuring different things at different points in the funnel.

The only way to answer “Which ad campaigns actually produced closed revenue?” is to connect ad spend data directly to CRM deal data. Windsor makes this possible without any custom integration work.

Sample prompt:

Using my [Salesforce / HubSpot] CRM data and [Google Ads / LinkedIn Ads / Meta Ads] data connected via Windsor, calculate true marketing attribution based on closed revenue, not just leads or attributed conversions.

For each campaign in the ad platform data from the last 90 days:
- Total ad spend
- Number of leads or contacts created in the CRM attributed to this campaign (match on UTM campaign name or lead source)
- Number that progressed to an open opportunity or deal
- Number that converted to closed-won
- Total closed revenue attributed to this campaign
- True ROAS (closed revenue ÷ ad spend)
- Average deal size from this campaign
- Average sales cycle from lead creation to close

Rank campaigns by true closed-revenue ROAS from highest to lowest.

Flag any campaign where:
- Ad platform ROAS is more than [SPECIFY: e.g., 2×] the closed-revenue ROAS (the platform over-claims revenue not reflected in closed deals)
- Closed-revenue ROAS is above 3:1 but ad spend share is below 10% of total (underinvested, high-efficiency campaign)

Output as a cross-channel attribution table followed by a 3-sentence summary on where to reallocate budget based on revenue impact rather than platform-reported metrics.

What you’ll get:

A campaign-level closed-revenue attribution table that shows what each marketing dollar actually produced in the CRM, replacing the platform-reported ROAS that sales teams rarely check.
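The reconciliation itself is simple arithmetic once ad spend and CRM revenue sit side by side. A minimal sketch with invented figures, assuming closed-won revenue has already been matched back to each campaign via UTM or lead source:

```python
# Hypothetical per-campaign rollups: ad spend and platform-attributed revenue
# from the ad platform, closed-won revenue matched back from the CRM.
campaigns = [
    {"name": "brand-search", "spend": 5_000, "platform_revenue": 40_000, "closed_revenue": 22_000},
    {"name": "retargeting",  "spend": 8_000, "platform_revenue": 30_000, "closed_revenue": 6_000},
]

for c in campaigns:
    c["true_roas"] = c["closed_revenue"] / c["spend"]          # what actually closed
    c["platform_roas"] = c["platform_revenue"] / c["spend"]    # what the platform claims
    # A large ratio between the two suggests the platform is over-claiming.
    c["overclaim_ratio"] = c["platform_roas"] / c["true_roas"]
```

In this invented example, the retargeting campaign reports a healthy platform ROAS while its closed-revenue ROAS is below 1:1, which is exactly the kind of gap the prompt is designed to surface.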

9. Build a weekly sales report 

The challenge:

Sales reporting is one of the most time-consuming recurring tasks in sales operations. Pulling numbers from the CRM, formatting them for leadership, calculating week-over-week changes, and adding the pipeline summary is a task that should take 5 minutes but often takes an hour, and it happens every week.

With Windsor MCP, the same report can be generated in a single prompt.

Sample prompt:

Using my [Salesforce / HubSpot / Pipedrive] data in Windsor, generate the weekly sales report for the week ending [DATE], compared to the prior week.

Include:

New activity this week:
- New leads or contacts created (this week vs. last week)
- New deals or opportunities created (this week vs. last week, with total value)
- Deals advanced to a later stage (count and value moved forward)

Closings:
- Deals closed-won this week: count, total revenue, average deal size
- Deals closed-lost this week: count, total lost value, top loss reason
- Win rate for the week

Pipeline health:
- Total open pipeline value at end of week (vs. end of prior week)
- Weighted pipeline value
- Deals expected to close in the next 14 days (count and value)

Team activity:
- Total activities logged this week (calls, emails, meetings) by rep
- Rep with most activities; rep with most closings

Format as a structured weekly summary report with clear sections and period-over-period percentage changes. Keep it concise enough to read in under 3 minutes.

What you’ll get:

A structured, section-by-section weekly sales report with all the numbers leadership expects, generated in seconds, in a consistent format, every week.
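The period-over-period percentage changes the report asks for follow one small formula, worth pinning down because the zero-baseline case trips up many spreadsheets. A minimal illustrative helper:

```python
def pct_change(current, previous):
    """Period-over-period change in percent; None when last period was zero,
    since a percentage change from a zero baseline is undefined."""
    if previous == 0:
        return None
    return (current - previous) / previous * 100

# e.g., 12 new deals this week vs. 10 last week is a 20% increase.
wow = pct_change(current=12, previous=10)
```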

10. Identify at-risk accounts before churn 

The challenge:

Churn rarely arrives without warning.

Before a customer churns, there are almost always signals in the CRM: declining activity, open support cases that aren’t resolving, missed renewal check-ins, or key contacts going dark. Sales and customer success teams that catch these signals 60 days early have time to intervene. Teams that catch them 5 days before renewal don’t.

Sample prompt:

Using my [Salesforce / HubSpot] data in Windsor, identify existing customer accounts showing early signs of churn risk.

For all accounts currently classified as active customers, flag accounts where:
- Renewal opportunity is within the next 60 days AND no activity has been recorded against the account or renewal opportunity in the last 21 days
- There are more than 2 open support cases or tickets in the last 30 days with no resolution (high friction signal)
- No contact at the account has engaged with any activity (meeting, call, or email) in the last 45 days (relationship going dark)
- The main contact or economic buyer has changed in the last 60 days (champion displacement)

For each flagged account, show:
- Account name and assigned owner
- Total account revenue or contract value
- Days since last activity
- Renewal date (where available)
- Active issues: open cases, contact changes, activity gaps
- Recommended urgency: Immediate action (renewal within 30 days + activity gap) / Monitor closely / Schedule check-in

Rank by contract value within each urgency tier. 

Suggest an outreach email to re-activate these at-risk accounts.

What you’ll get:

A prioritized churn risk list showing which accounts need attention first, why they are at risk, and what factors may be driving the decline, along with a re-activation email.
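The triage logic behind that list can be sketched as a few rules over account fields. This is a hypothetical encoding, with invented field names and thresholds taken from the prompt (60-day renewal window, 21-day activity gap, 45-day dark period):

```python
def churn_urgency(account):
    """Collect illustrative churn-risk signals, then assign an urgency tier."""
    signals = []
    if account["days_to_renewal"] <= 60 and account["days_since_activity"] > 21:
        signals.append("renewal approaching with an activity gap")
    if account["open_cases_30d"] > 2:
        signals.append("unresolved support friction")
    if account["days_since_activity"] > 45:
        signals.append("relationship going dark")

    if account["days_to_renewal"] <= 30 and signals:
        urgency = "immediate"      # renewal within 30 days plus risk signals
    elif signals:
        urgency = "monitor"
    else:
        urgency = "healthy"
    return urgency, signals

# A made-up account renewing in 25 days with no recent engagement.
urgency, signals = churn_urgency({
    "days_to_renewal": 25,
    "days_since_activity": 50,
    "open_cases_30d": 4,
})
```

Within each urgency tier, ranking by contract value (as the prompt specifies) puts the most expensive potential churn at the top of the list.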

Pro tips for getting the most from Windsor MCP for sales

These tips will help you turn Windsor MCP into a repeatable, high-value part of your sales team’s workflow, not just a one-time analysis tool.

Run your pipeline review prompt before the meeting, not during it.

Generate the health check (Prompt 1) 30 minutes before your weekly call and share the output with attendees as pre-reading. This shifts the meeting from data collection to decision-making, which is where the time is actually worth spending.

Blend your CRM with ad platform data to close the attribution loop.

CRM data alone tells you what closed. Ad data alone tells you what was clicked. Connecting both in Windsor (Prompt 8) lets you ask the question every sales and marketing team argues about — which campaigns actually produced revenue — and get a single, shared answer. This is one of the most common requests from revenue operations teams and one of the hardest to answer without cross-source data blending.

Be specific about thresholds in your prompts.

Instead of asking for “stalled deals,” specify “no activity in the last 10 days.” Instead of “at-risk pipeline,” specify “close date within 14 days and no activity since the close date was set.” Specific thresholds produce actionable lists; vague criteria produce long ones that nobody acts on.

Use the weekly report prompt (Prompt 9) as a template, then customize the format for your audience.

Ask the AI to reformat the output as an executive summary for leadership, a bullet-point Slack update for the team, or a structured table for a board deck. The same underlying data query can serve three different audiences in three different formats without any additional data work.

Save your best prompts as a shared document your whole team can use.

Once you’ve refined a prompt to produce reliable, useful output for your specific CRM and deal structure, save it with the correct CRM name and thresholds already filled in. A shared prompt library means every team member, from SDR to VP of Sales, has instant access to the same analytical capabilities.

Connect Windsor MCP to your email or Slack for daily sales summaries.

Instead of waiting for a weekly meeting to review the pipeline, set up a recurring prompt that generates a daily digest of new deals created, deals closed, stalled accounts, and activity gaps. Teams that review pipeline health daily catch slipping deals an average of 3–5 days earlier than teams that review weekly, which often makes the difference between a recoverable deal and a lost one.

Use loss reason analysis (Prompt 4) at the start of every quarter, not the end.

Most win/loss reviews happen after the quarter closes, when the learnings can’t change anything until next quarter. Running the analysis at the start of the new quarter, using last quarter’s closed-lost data, gives the team the pattern recognition they need before the next set of deals is already in progress.

Conclusion

Sales teams already have the data they need. The problem is that it is buried in CRM objects, reports, and spreadsheets that take too long to build and update.

Windsor MCP connects your CRM data directly to ChatGPT, Claude, and other AI tools so you can analyze pipeline health, forecast revenue, track rep performance, and find deals at risk in seconds.

Use the prompts in this guide as a starting point, then adapt them to fit your sales process, pipeline stages, and reporting needs.

🚀 Get started with Windsor.ai right now and unlock instant insights from your CRM data with the power of AI!
