

AI Prompt Library by Windsor.ai: Ready-Made Prompts by Data Source and Use Case

[Image: Windsor.ai AI prompt library]

With Windsor MCP, you can stream data from 325+ sources into your favorite AI chat or AI agent (ChatGPT, Claude, Cursor, Copilot, Gemini, Perplexity) and get insights or visual summaries in seconds.

From analyzing your cross-channel performance to optimizing budget allocation, you can ask any questions about your Windsor-integrated data to make smarter decisions.

In this AI prompt library, we’ve compiled a list of advanced, ready-to-use prompts organized by data source and use case, so you can start analyzing your business data with AI right away. Just copy-paste a prompt into your AI chat, review the answer, and ask the LLM to convert it into the required output (visual report, table, Excel file, etc.).

⚙️ How to configure Windsor MCP for different LLMs: Setup Guides.

Facebook (Meta) Ads prompts

Prompt 1: Create a weekly performance report

Prepare an easy-to-understand weekly Meta Ads performance overview for our client.

Ad account: [Ad Account Name]
Date range: [SPECIFY_DATE_RANGE]
Comparison period: previous matching period

Please follow these steps:

1. Retrieve the account data

Pull the following data from the account for the selected date range and compare it to the previous period:

- Overall account performance (spend, revenue, purchases, new customers)
- Campaign-level results for all active campaigns
- Breakdown by placement (Facebook Feed, Instagram Feed, Stories/Reels, Audience Network)
- Key audience segments (age, gender, country)
- Day-by-day trends for the main KPIs

2. Summarize KPIs

Pull these metrics at the account and campaign level:

- ROAS (revenue ÷ ad spend)
- Cost per Purchase (CPP)
- Customer Acquisition Cost (CAC) for new customers
- Conversion Rate (purchases ÷ clicks)
- Click-Through Rate (CTR) and Cost per 1,000 Impressions (CPM)
- Period-over-period percentage change for spend, revenue, purchases, and ROAS

3. Structure the output as a visual report with sections

Build a clear reporting dashboard for a client composed of these sections:

- Section 1: Key results at a glance
A simple “scorecard” with total spend, revenue, purchases, ROAS, and CAC, plus how each changed vs the previous period.

- Section 2: Main drivers of our results
Show which campaigns, placements, and platforms (Facebook vs Instagram) generated the most revenue and purchases.

- Section 3: Our target audience
Summarize performance by age, gender, and top countries. Highlight which segments delivered the strongest ROAS and the highest purchase volume.

- Section 4: Campaign winners and laggards
Compare active campaigns based on ROAS, purchases, spend share, and CAC. Mark the top 3 “growth drivers” and the 3 weakest campaigns from a revenue perspective.

- Section 5: How this week compares
Show trends over time (e.g., last 2 weeks) for spend, revenue, and ROAS, and determine whether this period was above, below, or in line with recent performance.

Presentation rules:

- Use “we” and “our” language to keep the tone collaborative (e.g., “Our campaigns generated…”, “We saw…”).
- Create a visually compelling dashboard with a contemporary design.
- Include simple, intuitive visual summaries (tables, charts, or bullet lists) for each section.
- Add short “What this means” boxes in plain language explaining the data for a non-technical audience.
- Clearly highlight the top 3 customer segments and 3 top campaigns by revenue or ROAS.
- For each highlighted campaign, show: spend, revenue, purchases, ROAS, CAC, and change vs previous period.
- Check whether this reporting period ranks among the top 3 weeks in the last 90 days for revenue or ROAS, and emphasize that if true.
- Do not include optimization recommendations or tactical media buying advice.

Make the output feel like a weekly performance story that a non-technical client can read and understand in a few minutes.

Sample output:

[Image: Meta Ads weekly report in Claude]
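The KPI formulas from step 2 are simple ratios, so you can sanity-check whatever numbers the LLM reports. Here's a minimal Python sketch — the dict keys are hypothetical, so adapt them to however your data comes out of Windsor:

```python
def summarize_kpis(cur, prev):
    """Compute the step-2 KPIs from raw period totals.

    `cur` and `prev` are dicts with hypothetical keys:
    spend, revenue, purchases, new_customers, clicks, impressions.
    """
    kpis = {
        "roas": cur["revenue"] / cur["spend"],            # revenue ÷ ad spend
        "cpp": cur["spend"] / cur["purchases"],           # cost per purchase
        "cac": cur["spend"] / cur["new_customers"],       # customer acquisition cost
        "cvr": cur["purchases"] / cur["clicks"],          # conversion rate
        "ctr": cur["clicks"] / cur["impressions"],        # click-through rate
        "cpm": cur["spend"] / cur["impressions"] * 1000,  # cost per 1,000 impressions
    }
    # Period-over-period % change for spend, revenue, purchases, and ROAS
    for metric in ("spend", "revenue", "purchases"):
        kpis[f"{metric}_pop_pct"] = (cur[metric] - prev[metric]) / prev[metric] * 100
    prev_roas = prev["revenue"] / prev["spend"]
    kpis["roas_pop_pct"] = (kpis["roas"] - prev_roas) / prev_roas * 100
    return kpis
```

This is a handy cross-check that, say, a reported ROAS really equals revenue divided by spend for the period.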

Prompt 2: Analyze the cost efficiency of your campaigns

Retrieve and structure cost performance metrics from our Meta Ads account for cost efficiency analysis.

Ad account: [Ad Account Name]
Date range: [SPECIFY_DATE_RANGE]

Required metrics (campaign-level):
- Total advertising spend
- Cost per result (aligned with each campaign's objective)
- Cost per click (CPC)
- Cost per 1,000 impressions (CPM)

Output format
Display results in a table with campaigns ranked from highest to lowest total spend.

Sample output:

[Image: Meta Ads cost efficiency analysis]
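The table this prompt asks for is straightforward to reproduce yourself if you ever want to double-check the ranking. A minimal sketch, assuming hypothetical field names for the campaign-level data:

```python
def cost_efficiency_table(campaigns):
    """Rank campaigns by total spend (descending) and derive cost metrics.

    Each campaign is a dict with hypothetical keys:
    name, spend, results, clicks, impressions.
    """
    rows = []
    for c in sorted(campaigns, key=lambda c: c["spend"], reverse=True):
        rows.append({
            "campaign": c["name"],
            "spend": c["spend"],
            # Guard against divide-by-zero for campaigns with no results/clicks
            "cost_per_result": c["spend"] / c["results"] if c["results"] else None,
            "cpc": c["spend"] / c["clicks"] if c["clicks"] else None,
            "cpm": c["spend"] / c["impressions"] * 1000,
        })
    return rows
```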

Prompt 3: Audit account for critical issues and opportunities

Audit my Meta Ads account and identify critical issues that need immediate resolution. Also, discover hidden growth opportunities.

Ad account: [Ad Account Name]
Date range: Last 30 days

Analyze and report on:

- Campaigns that are spending but generating zero conversions.
- Any week-over-week drops in performance greater than 20%.
- Campaigns nearing their budget limits while still delivering strong ROAS.
- Ads with high frequency (over 3) where CTR is trending down.

Also, suggest an action plan for the next week to fix the discovered problems and improve the results.

Output format
Summarize your findings in a visual report that includes campaign names and key metrics, ordered by highest estimated revenue impact first.

Sample output:

[Image: Meta Ads account audit]
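The four audit checks above are rule-based, so you can replicate them in code to verify what the LLM flags. A minimal sketch — the field names are hypothetical, and the "nearing budget limits while delivering strong ROAS" thresholds (90% of budget, ROAS ≥ 3) are our own illustrative assumptions:

```python
def audit_flags(campaigns):
    """Apply the four audit checks to campaign-level rows.

    Each campaign dict uses hypothetical keys: name, spend, conversions,
    wow_change (week-over-week % change in results), roas,
    budget_used_pct, frequency, ctr_trend ('up'/'down'/'flat').
    """
    flags = {"zero_conversion": [], "wow_drop": [],
             "budget_capped_winners": [], "fatigued": []}
    for c in campaigns:
        if c["spend"] > 0 and c["conversions"] == 0:
            flags["zero_conversion"].append(c["name"])
        if c["wow_change"] < -20:                      # >20% week-over-week drop
            flags["wow_drop"].append(c["name"])
        if c["budget_used_pct"] >= 90 and c["roas"] >= 3:  # assumed thresholds
            flags["budget_capped_winners"].append(c["name"])
        if c["frequency"] > 3 and c["ctr_trend"] == "down":
            flags["fatigued"].append(c["name"])
    return flags
```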

Prompt 4: Identify the best-performing creative elements and messaging

Analyze how our creatives are performing in Meta Ads and help clarify what’s working and what’s not.

Ad account: [ACCOUNT_ID]
Date range: [START_DATE – END_DATE]

1. Gather essential creative-level data

Pull performance data at ad level for all currently active campaigns, including:

- Results by creative format (single image, video, carousel, collection, etc.)
- Performance by headline, primary text, and description
- Results broken down by CTA button (Shop Now, Learn More, Sign Up, etc.)

2. Summarize creative effectiveness indicators

For each creative and format, retrieve these KPIs:

- CTR by format and placement
- Conversion Rate per creative (purchases or leads divided by clicks)
- Cost per Result (e.g., cost per purchase, cost per lead)
- Engagement Rate (reactions, comments, saves, shares ÷ impressions)
- Video engagement curve, where applicable (drop-off points across the video)

3. Organize the analysis as a visual reporting dashboard

Structure the output into the following sections:

- Summary of creative performance
High-level view of how creative performance contributed to overall results in the selected period.

- Format-level comparison
Compare images vs videos vs carousels vs collections on CTR, Conversion Rate, Cost per Result, and Engagement Rate.

- Messaging & copy review
Evaluate headlines and primary text variants: which wording drives the most clicks, conversions, and engagement?

- Visual & asset insights
Identify which specific visuals (images or video concepts) perform best, and how they differ from underperforming assets.

- Fatigue check
Show how key metrics (CTR, Conversion Rate, Cost per Result) change over time for top-spend creatives and indicate where performance started to decline.

4. Report on the most important findings

Within the report, clearly highlight:

- Recurring patterns among top-performing creatives (e.g., hooks, formats, lengths, CTA types)
- Differences in performance between formats
- Emotional or thematic angles (testimonials, urgency, discounts, storytelling) that resonate most
- Creatives showing consistent decline in key metrics over time, indicating potential fatigue

Output rules:

- Do not include any suggestions about shifting or redistributing the budget. Only describe what the data shows.
- Use straightforward, non-technical language and emphasize business impact (sales, leads, cost per result) instead of ad-platform jargon.
- Present the results as if they were a visual dashboard with contemporary design: clear sections, logical flow, and tables / chart-style descriptions where helpful.
- Keep the focus on what is working and what is fading from a creative standpoint, not on bidding or campaign structure.

Sample output:

[Image: Meta Ads creative performance report in Claude]

Prompt 5: Detect creative fatigue

Spot ads that are showing signs of creative fatigue in our Meta Ads account.

Account: [ACCOUNT_ID]
Date range: Last 3 weeks

For all active ads running longer than one week, do the following:

- Track daily trends in CTR and conversion rate.
- Evaluate how rising frequency correlates with drops in performance.
- Pinpoint the moment (in days since launch) when results started to decline.
- Compare the latest 2-day average performance to the first 2-day average after launch.

Output format
Return a table of ads with more than a 20% decline in performance. Include the ad name, key metrics indicating creative fatigue, and the recommended number of days until the ad should be replaced.

Sample output:

[Image: Meta Ads creative fatigue analysis]
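The core calculation here — comparing the latest 2-day average to the first 2-day average after launch and flagging drops over 20% — is easy to verify yourself. A minimal sketch covering just that decline check (the replacement-timing estimate is left to the LLM); field names are hypothetical:

```python
def fatigue_score(daily_ctr):
    """% change between the first 2-day average CTR after launch and the
    latest 2-day average. Negative values mean a decline."""
    first = sum(daily_ctr[:2]) / 2
    latest = sum(daily_ctr[-2:]) / 2
    return (latest - first) / first * 100

def flag_fatigued_ads(ads, threshold=-20.0):
    """Return ads whose CTR declined by more than 20% (the prompt's cutoff).

    `ads` maps ad name -> list of daily CTR values since launch.
    """
    return {name: round(fatigue_score(ctrs), 1)
            for name, ctrs in ads.items()
            if fatigue_score(ctrs) < threshold}
```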

Google Analytics 4 prompts

Prompt 1: Identify your most profitable channels

In GA4 for [Your Property ID], compare the main channels driving the most conversions and revenue.

Sample output:

[Image: GA4 channel performance analysis]

Prompt 2: Identify your website conversion leaks

In GA4 for [Your Property ID], analyze how visitors enter the site, which pages they engage with, and how they move through the funnel.
Track page views, time on page, and scroll depth for key pages (landing pages, product pages, etc.).
Use Funnel Exploration to spot where users drop off in the checkout flow and review exit rates on cart and checkout pages. Recommend specific changes to reduce friction and improve completion rates.

Sample output:

[Image: GA4 funnel optimization analysis]

To get deeper insights, you should:

  1. Create Funnel Exploration in GA4 (Explore → Funnel Exploration)
  2. Set up custom events for specific page interactions
  3. Use Path Exploration to see actual user journeys
  4. Review User Explorer for individual session playback

Prompt 3: Discover your top customer segments

In GA4 for [Your Property ID], review audience data to find high-performing demographic segments.
Segment by gender, location, and age to compare conversion rate and average order value.
Flag underperforming groups and use Audience Builder to create behavior-based audiences (for example, users who viewed high-value products but did not convert).
Identify untapped opportunities by geo or demographic and suggest new segments for paid and remarketing campaigns.

Sample output:

[Image: GA4 audience segmentation analysis]

Prompt 4: Reduce checkout abandonment (for e-commerce)

Make sure that e-commerce event tracking is properly implemented in your GA4 setup.

Using GA4 Funnel Exploration for [Your Property ID], map the user journey from product page to cart, checkout, and purchase.
Measure exit and conversion rates at each stage and compare mobile vs desktop performance.
Based on the findings, recommend specific actions such as simplifying checkout, enabling guest checkout, or adding trust elements like security badges.

Sample output:

[Image: GA4 cart abandonment analysis]

Prompt 5: 24-hour traffic health check

In GA4 for [Your Property ID], generate a traffic overview for the last 24 hours.
Segment by source (Direct, Organic Search, Paid Search, Social, Referral) and report sessions, bounce rate, engaged sessions, pages per session, and average session duration.
Break results down by device (mobile vs desktop) and highlight sources or devices with unusually high or low engagement.

Sample output:

[Image: GA4 daily traffic overview]

Prompt 6: Analyze seasonality to plan traffic and revenue

For [Your Property ID], analyze traffic and revenue patterns over the last 2 years in GA4.
Identify seasonal peaks and dips to plan when to scale spend, adjust inventory or offers, and schedule promotions for maximum impact.

Sample output:

[Image: GA4 seasonality analysis]

Prompt 7: Cut low-performing channels

In GA4 for [Your Property ID], review the last 45 days of traffic.
Report traffic, conversion rate, revenue, and ROAS by channel (Organic, Paid Search, Paid Social, Email).
Identify channels with high spend and low ROAS and recommend how to reallocate budget based on LTV and CAC insights.

Sample output:

[Image: GA4 ROAS analysis]

Google Ads prompts

Prompt 1: Weekly Google Ads analysis for e-commerce (campaigns, products, placements, and optimization insights)

Perform an in-depth analysis of the Google Ads account according to the rules below.

Account: [ACCOUNT_ID]
Date range: [Last 7 days]

Imagine you are a senior Google Ads analyst with a focus on e-commerce accounts. Please complete the following:

1. Campaign overview

Provide a short summary of the totals:
- Impressions
- Clicks
- Spend
- Conversions
- Conversion value
- ROAS
- CTR
- CVR
- CPA

2. Product-level performance

Break down performance by product or product group (impressions, clicks, spend, conversions, ROAS).

Identify:
- Top 10 products by ROAS and conversion volume
- Bottom 10 products by ROAS and conversion volume

Flag products where ROAS or conversions changed more than ±15% week over week.

3. Placement analysis

Compare spend and performance across key placements: Shopping, YouTube, Display, Discover, Gmail.

Highlight:
- Which placements generate the highest ROAS
- Which placements are underperforming or inefficient

4. Audience and asset insights

Identify audience segments that contribute the most to conversions and revenue.
Call out underperforming asset groups or creatives with low CTR or wasted spend.

5. Spend and budget efficiency

Check for pacing issues, budget caps, or poor spend allocation across campaigns and products.
Note any campaigns limited by budget or overspending with low return.

6. Root cause diagnostics

For any flagged products, audiences, or placements, assess likely causes such as:
- Feed data or tracking issues
- Increased auction competition
- Creative fatigue or weak ad messaging
- Misaligned targeting or bids

7. Recommendations

Provide 3 to 5 clear, prioritized actions related to:
- Bidding and bid strategies
- Budget shifts between campaigns, products, or placements
- Feed optimizations and product data improvements
- Creative or asset refreshes

Output format
Deliver a structured Markdown report that includes:
- An executive summary with key metrics, trends, and notable changes
- A product-level performance table (up to 20 rows)
- A placement performance table (up to 20 rows)
- Bullet-point insights and prioritized next steps

Use human-friendly metrics (currency and percentages) and clear, scannable headings.

Sample output:

[Image: Google Ads performance analysis]

Prompt 2: Boost shopping campaign performance using auction insights

Review auction insights and impression share data for all active Shopping campaigns.

Account: [ACCOUNT_ID]
Date range: [SPECIFY_DATE_RANGE]

Please complete the following:

1. Impression share by level

For each campaign, ad group, and product group, report:
- Impression Share (IS)
- Lost Impression Share due to Budget (IS Lost Budget %)
- Lost Impression Share due to Rank (IS Lost Rank %)
- Average position or Top of page rate, where available

Present this in a structured way so it is easy to compare across entities.

2. Biggest impression share losses

Highlight where total impression share loss is greater than 10%. Indicate whether the primary driver is:
- Budget limitations, or
- Rank-related issues (bids, quality, or competition)

3. Week-over-week shifts

Identify any notable week-over-week changes in impression share metrics. Call out patterns that suggest:
- Stronger competitive pressure
- New competitors entering the auctions
- Sudden drops due to budget or bid changes

4. Growth opportunities

Pinpoint campaigns, ad groups, or product groups with high upside where:
- Increasing the budget could unlock more volume at an acceptable ROAS
- Improving bids or quality could gain more high-intent impressions

Focus on segments with solid performance but constrained visibility.

5. Recommended actions

Provide a prioritized set of actions, for example:
- Increase budgets where impression share is lost mainly due to budget caps
- Raise bids or improve quality signals where impression share loss is rank-driven
- Restructure campaigns, ad groups, or product groups where a cleaner structure would help win more auctions efficiently

Prioritize based on expected revenue impact and feasibility.

Output format
Produce a clear, concise Markdown report that includes:
- Tables summarizing key impression share metrics at the campaign, ad group, and product group level
- A bullet-point summary of the main findings and underlying causes
- A prioritized action plan tied to potential revenue impact

Format the report like a visual dashboard with clear section dividers. Use simple language and focus on business impact rather than technical jargon.

Sample output:

[Image: Google Ads impression share analysis]

Prompt 3: Improve underperforming keywords and double down on high-potential ones

Perform an in-depth analysis of the Google Ads account according to the rules below.

Account: [ACCOUNT_ID]
Date range: [SPECIFY_DATE_RANGE]

Imagine you're a senior Google Ads analyst focused on Search campaigns. Review all active Search campaigns and provide only actionable insights on these points:

1. Identify priority keywords and queries

Report on keywords and search queries that meet at least one of these conditions:
- Spend ≥ $X and ROAS < account average × 0.7 (potential wasted spend)
- At least 2 conversions and ROAS ≥ account average × 1.3 (scalable growth opportunities)
- Week-over-week CTR or CVR drop > 15% (candidates for ad or landing page optimization)
- Spend with zero conversions (negative keyword candidates)

2. Recommend concrete optimizations

For each keyword or query you surface, suggest specific actions such as:
- Bid increases or decreases
- Adding new exact or phrase match keywords
- Adding negative keywords
- Improving ad copy or testing new variants
- Landing page changes that could improve conversion rate

3. Exclude stable performers

Do not report on keywords or queries with stable or strong performance that do not require any change.

Output format
Provide a concise Markdown report that includes:
- Summary tables with only the items that need action
- Bullet-pointed, prioritized recommendations

Use clear section headers and human-readable metrics (currency, percentages). Keep the focus on what should be improved or scaled, not on reporting overall performance.

Sample output:

[Image: Google Ads keyword performance analysis]
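The four triage conditions in this prompt are explicit thresholds, so you can reproduce them directly to audit what the LLM surfaces. A minimal sketch with hypothetical field names (`min_spend` stands in for the "$X" placeholder above):

```python
def triage_keywords(keywords, account_avg_roas, min_spend=100.0):
    """Bucket keyword rows by the prompt's four conditions.

    Each row uses hypothetical keys: keyword, spend, roas, conversions,
    ctr_wow_change, cvr_wow_change (% week-over-week changes).
    """
    buckets = {"wasted_spend": [], "scale_up": [], "optimize_ads": [], "negatives": []}
    for k in keywords:
        if k["spend"] >= min_spend and k["roas"] < account_avg_roas * 0.7:
            buckets["wasted_spend"].append(k["keyword"])
        if k["conversions"] >= 2 and k["roas"] >= account_avg_roas * 1.3:
            buckets["scale_up"].append(k["keyword"])
        if k["ctr_wow_change"] < -15 or k["cvr_wow_change"] < -15:
            buckets["optimize_ads"].append(k["keyword"])
        if k["spend"] > 0 and k["conversions"] == 0:
            buckets["negatives"].append(k["keyword"])
    return buckets
```

Note that a keyword can legitimately land in more than one bucket (e.g., spend with zero conversions and a CTR drop).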

Prompt 4: Google Ads account segmentation audit and restructuring plan

Imagine you are a senior Google Ads specialist. Review and analyze the account setup.

Account: [ACCOUNT_ID]
Date range: [SPECIFY_DATE_RANGE]

Please:
- Identify product groups that are too broad and may be hiding important performance differences.
- Assess whether the current segmentation is granular and logical enough for effective optimization.
- Propose restructuring options that improve bid control, testing capabilities, and reporting clarity.

Return a Markdown report with your segmentation findings and clear, actionable restructuring recommendations.

Sample output:

[Image: Google Ads product group segmentation analysis]

Instagram Insights prompts

Prompt 1: Weekly Instagram performance overview

Prepare a clear weekly performance overview of our Instagram account.

Account: [Your Instagram Account Name]
Date range: [SPECIFY_DATE_RANGE]
Comparison period: previous matching period

Please follow these steps:

1. Retrieve account-level data

Pull the following data for the selected date range and compare it to the previous period:

- Total reach and impressions
- Overall engagement (likes, comments, saves, and shares combined)
- Follower count change (net new followers gained or lost)
- Number of posts published, broken down by content type (single image, carousel, video/Reel, Story)
- Period-over-period percentage change for reach, engagement, and follower growth

2. Summarize content performance

For each content type published in the period, pull these KPIs:

- Average reach per post
- Average engagement rate (total interactions ÷ reach)
- Average saves per post
- Average shares per post
- Top 3 posts by engagement rate, with their content type and posting date

3. Structure the output as a visual report with sections

Build a clear weekly reporting summary with the following sections:

- Section 1: Key results at a glance
A scorecard showing total reach, impressions, engagement, and follower change, plus how each shifted versus the previous period.

- Section 2: What content worked best
Compare performance across content types. Highlight which format drove the most reach and which drove the most saves and shares.

- Section 3: Top posts of the week
Show the top 3 posts by engagement rate. For each, include the content type, date published, reach, and key engagement metrics.

- Section 4: Audience growth
Summarize follower gains and losses for the period. Flag whether growth accelerated, slowed, or reversed compared to the prior period.

- Section 5: Week-over-week trend
Show how reach, engagement rate, and follower growth have trended over the last 4 weeks. Indicate whether this week was above, below, or in line with recent performance.

Presentation rules:

- Use "we" and "our" language throughout (e.g., "Our reach grew by…", "We published…").
- Keep the tone clear and non-technical — suitable for a client or stakeholder who does not work in social media daily.
- Add a short "What this means" note under each section explaining what the numbers suggest in plain language.
- Do not include content scheduling recommendations or platform algorithm speculation.

Make the output feel like a concise weekly story that takes less than 5 minutes to read.

Prompt 2: Find your best-performing content format

Analyze how different content formats are performing on our Instagram account and identify which ones we should prioritize.

Account: [Your Instagram Account Name]
Date range: [SPECIFY_DATE_RANGE]

1. Pull format-level performance data

Retrieve performance data broken down by content type (single image, carousel, video/Reel, Story) for all posts published in the selected period.

For each format, calculate:

- Average reach per post
- Average impressions per post
- Average engagement rate (total interactions ÷ reach)
- Average saves per post
- Average shares per post
- Total number of posts published in this format

2. Compare formats on what matters most

Rank the formats by each of the following dimensions separately:

- Reach efficiency (which format gets seen by the most unique accounts per post)
- Engagement depth (which format generates the most saves and shares, indicating high-value content)
- Volume of interaction (which format gets the most total likes and comments)

3. Identify standout posts within each format

For each content type, surface the single best-performing post in the period based on engagement rate.
Include its reach, engagement rate, saves, and shares.

4. Output format

Return a comparison table with one row per content format and columns for all key metrics listed above.

Below the table, write a short plain-language summary (3–5 sentences) identifying:
- The format that currently delivers the best organic reach
- The format that drives the most saves and shares (a signal of content quality)
- Any format that appears underperforming relative to the effort required
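The per-format averages requested in step 1 are just a group-by over post-level rows. A minimal sketch, assuming hypothetical field names for the exported post data:

```python
from collections import defaultdict

def format_summary(posts):
    """Average the step-1 metrics per content format.

    Each post dict uses hypothetical keys: format, reach, impressions,
    interactions, saves, shares.
    """
    groups = defaultdict(list)
    for p in posts:
        groups[p["format"]].append(p)
    summary = {}
    for fmt, items in groups.items():
        n = len(items)
        summary[fmt] = {
            "posts": n,
            "avg_reach": sum(p["reach"] for p in items) / n,
            "avg_impressions": sum(p["impressions"] for p in items) / n,
            # Engagement rate = total interactions ÷ reach, averaged per post
            "avg_engagement_rate": sum(p["interactions"] / p["reach"] for p in items) / n,
            "avg_saves": sum(p["saves"] for p in items) / n,
            "avg_shares": sum(p["shares"] for p in items) / n,
        }
    return summary
```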

Prompt 3: Understand your audience and follower growth

Analyze our Instagram audience data to understand who follows us, how the audience is growing, and whether recent content is attracting the right people.

Account: [Your Instagram Account Name]
Date range: [SPECIFY_DATE_RANGE]

1. Audience composition

Pull the current follower breakdown by:

- Age group
- Gender
- Top 5 countries and top 5 cities by follower volume

Show each breakdown as a percentage of total followers.

2. Follower growth over time

Retrieve week-by-week follower gains and losses for the selected period.

Identify:
- The week with the highest net follower growth
- Any week where follower loss exceeded gain
- The overall growth rate for the period (percentage change from start to end)

3. Content-to-growth correlation

Cross-reference the weeks or days with the highest follower growth against the posts published in that window.

Flag which posts or content types appear to correlate with follower spikes.

4. Output format

Produce a short visual report with three sections:

- Audience snapshot: A table showing the current demographic breakdown.
- Growth trend: A week-by-week table of follower gains, losses, and net change.
- Growth drivers: A plain-language paragraph identifying which content types or topics appear to be most effective at attracting new followers, based on the timing of growth spikes.

Keep the language accessible and avoid technical jargon. The goal is to help the team understand who the audience is today and what is drawing new people to the account.

Prompt 4: Audit Story performance and viewer retention

Analyze the performance of our recent Instagram Stories to identify where viewers are dropping off and which Story sequences hold attention best.

Account: [Your Instagram Account Name]
Date range: Last 30 days

1. Retrieve Story-level data

Pull performance data for all Stories published in the selected period.

For each Story (or Story sequence), retrieve:

- Reach (unique accounts that saw at least one frame)
- Impressions (total views including replays)
- Exits per frame (how many viewers left at each point)
- Forward taps (viewers skipping to the next frame)
- Backward taps (viewers replaying a frame)
- Completion rate, where available (viewers who watched all frames ÷ viewers who started)

2. Identify retention patterns

Using the exit and tap data, flag:

- The specific frame position (e.g., frame 3 of 7) where exits are highest across multi-frame Stories
- Story sequences where backward taps are unusually high, suggesting a frame that is being re-watched
- Stories with a completion rate above 60% — these are your strongest-performing sequences

3. Compare Stories by topic or content type

Group Stories into broad categories if the data allows (e.g., promotional, educational, behind-the-scenes, polls/interactive).
For each category, show the average exit rate and average completion rate.

4. Output format

Return a table of all Stories with their key retention metrics.

Below the table, write a 3–5 sentence summary highlighting:
- The typical drop-off point across our Story sequences
- The content category or frame style that retains viewers best
- One specific recommendation based purely on the data (e.g., "Sequences longer than 5 frames show significantly higher exits at frame 4")
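The retention math in step 2 — worst drop-off frame and completion rate against the 60% cutoff — is simple to check from a sequence's per-frame viewer counts. A minimal sketch under that assumption (viewer counts per frame, first frame first; this is an illustration, not the Instagram Insights API):

```python
def story_retention(frames_viewers):
    """Find the frame with the largest drop-off and the completion rate.

    `frames_viewers` is a list of viewers remaining at each frame,
    starting with frame 1.
    """
    exits = [frames_viewers[i] - frames_viewers[i + 1]
             for i in range(len(frames_viewers) - 1)]
    worst_frame = exits.index(max(exits)) + 1  # 1-indexed frame where most viewers left
    completion_rate = frames_viewers[-1] / frames_viewers[0]
    return {
        "worst_exit_frame": worst_frame,
        "completion_rate": round(completion_rate, 2),
        "strong": completion_rate > 0.60,  # the prompt's 60% threshold
    }
```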

Prompt 5: Identify the best days and times to post

Use our historical Instagram performance data to identify the posting windows that consistently deliver the highest reach and engagement.

Account: [Your Instagram Account Name]
Date range: Last 90 days

1. Retrieve post-level performance data

Pull all posts published in the selected period with:

- Date and time of publication
- Content type (image, carousel, video/Reel)
- Reach within the first 24 hours of publishing
- Total engagement within the first 24 hours (likes + comments + saves + shares)
- Engagement rate (engagement ÷ reach)

2. Segment by day of week and time block

Group posts into the following time blocks:
- Morning (6:00–10:00)
- Midday (10:00–14:00)
- Afternoon (14:00–18:00)
- Evening (18:00–22:00)

For each day-of-week and time-block combination, calculate:
- Average reach per post
- Average engagement rate per post
- Number of posts published (to ensure statistical relevance — flag any cells with fewer than 3 posts)

3. Surface the top windows

Identify the top 3 day + time combinations by average reach and the top 3 by average engagement rate.
Note whether these overlap or differ — a window with high reach but lower engagement rate may attract passive viewers, while a window with high engagement rate may serve a more active audience.

4. Output format

Return a day-of-week × time-block heatmap table, with cells showing average engagement rate.
Highlight the top 3 cells in plain text (e.g., **Tuesday Evening** — avg. engagement rate 4.2%).

Below the table, write a 2–3 sentence plain-language summary of the findings.

Note: Flag any content types where the optimal posting window differs significantly from the account average (e.g., Reels may peak on different days than static posts).
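The day-of-week × time-block grouping in steps 2–3 is easy to verify outside the LLM. A minimal sketch using the time blocks defined above (the post field names are hypothetical):

```python
from collections import defaultdict

BLOCKS = {"morning": (6, 10), "midday": (10, 14),
          "afternoon": (14, 18), "evening": (18, 22)}

def time_block(hour):
    """Map an hour of day to one of the blocks above; None outside 6:00-22:00."""
    for name, (start, end) in BLOCKS.items():
        if start <= hour < end:
            return name
    return None

def posting_heatmap(posts):
    """Average first-24h engagement rate per (weekday, time block) cell,
    flagging cells with fewer than 3 posts as statistically thin.

    Posts use hypothetical keys: weekday, hour, engagement_rate.
    """
    cells = defaultdict(list)
    for p in posts:
        block = time_block(p["hour"])
        if block is not None:  # drop posts published outside the defined blocks
            cells[(p["weekday"], block)].append(p["engagement_rate"])
    return {cell: {"avg_er": sum(v) / len(v), "posts": len(v), "thin": len(v) < 3}
            for cell, v in cells.items()}
```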

Prompt 6: Audit content to find low-effort, high-return posts

Review our last 90 days of Instagram content and identify which posts delivered outsized results relative to how frequently we publish that format — and which formats are consistently underperforming.

Account: [Your Instagram Account Name]
Date range: Last 90 days

1. Pull post-level performance data

Retrieve all posts published in the selected period with the following metrics:

- Content type (image, carousel, video/Reel)
- Reach
- Engagement rate (total interactions ÷ reach)
- Saves
- Shares
- Profile visits attributed to the post (if available)
- Follower growth on the day of publishing

2. Calculate format efficiency

For each content type, calculate:

- Average engagement rate
- Average saves per post
- Average shares per post
- Standard deviation in engagement rate (to show how consistent or variable the format is)

A format with a high average but high variability is harder to replicate. A format with a moderate average but low variability is more reliable.

3. Flag outlier posts

Identify posts that performed more than 1.5× above their format's average engagement rate.
For each outlier, note the content type, day of week, and key metrics.
Look for patterns: Do the outliers cluster around specific topics, posting days, or caption styles?

4. Identify underperforming formats

Flag any content type where the average engagement rate has declined more than 15% over the last 30 days compared to the 60 days before that.

5. Output format

Return:
- A format efficiency summary table (one row per content type, with average engagement rate, saves, shares, and variability)
- A list of top 5 outlier posts with their metrics and any observable patterns
- A plain-language conclusion (3–4 sentences) identifying the most reliable format, the most variable format, and any format showing a declining trend
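The outlier rule (more than 1.5× the format's average engagement rate) and the variability measure from step 2 are both mechanical, so you can reproduce them to audit the LLM's picks. A minimal sketch with hypothetical field names:

```python
from statistics import mean, pstdev

def format_outliers(posts, multiplier=1.5):
    """Per format: average engagement rate, its standard deviation
    (consistency), and posts exceeding `multiplier` x the format average.

    Posts use hypothetical keys: name, format, engagement_rate.
    """
    by_fmt = {}
    for p in posts:
        by_fmt.setdefault(p["format"], []).append(p)
    report = {}
    for fmt, items in by_fmt.items():
        rates = [p["engagement_rate"] for p in items]
        avg = mean(rates)
        report[fmt] = {
            "avg_er": avg,
            "stdev_er": pstdev(rates),  # low = reliable, high = hit-or-miss
            "outliers": [p["name"] for p in items
                         if p["engagement_rate"] > multiplier * avg],
        }
    return report
```

A format with a high average but a large standard deviation is the "harder to replicate" case described above; a moderate average with a small deviation is the reliable workhorse.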

Prompt 7: Generate a monthly content plan

Analyze our Instagram performance data from the last 90 days and use it to generate a full monthly content calendar for the coming month — covering both feed posts and Stories.

Account: [Your Instagram Account Name]
Analysis period: Last 90 days
Planning period: [NEXT MONTH — e.g., May 2026]

PART 1: PERFORMANCE ANALYSIS (what the data tells us)

1. Identify top-performing feed posts

Pull all feed posts from the last 90 days and rank them by engagement rate (total interactions ÷ reach).

For the top 10 posts, retrieve:
- Content type (single image, carousel, video/Reel)
- Day and time of posting
- Reach, engagement rate, saves, and shares
- Caption length (short / medium / long) if inferable from the data

Group the top 10 into content themes or patterns if they share common characteristics (e.g., educational, behind-the-scenes, product showcase, social proof).

2. Identify top-performing Stories

Pull all Stories from the last 90 days and rank them by completion rate (viewers who watched all frames ÷ viewers who started).

For the top 10 Story sequences, retrieve:
- Number of frames
- Reach
- Exit rate per sequence
- Tap-back rate (a high tap-back rate indicates a frame that was re-watched)

Identify which Story types retain the most viewers (e.g., single strong visual, multi-frame sequences, interactive frames with polls or questions).

3. Optimal posting patterns

From all posts in the period, identify:
- The 2 best-performing days of the week by average engagement rate
- The best-performing time block (morning / midday / afternoon / evening)
- The content type with the best average reach

PART 2: CONTENT CALENDAR GENERATION

Using the performance analysis above, generate a full monthly content calendar for [NEXT MONTH].

Calendar structure:

- Aim for [X posts per week — specify your preferred cadence, e.g., 4–5]
- Aim for [Y Story sequences per week — e.g., 3–4]
- Schedule posts on the days and times that performed best in the analysis
- Balance the calendar between the content themes and formats that drove the most reach and saves

For each planned feed post, specify:
- Suggested publish date and time
- Recommended format (single image / carousel / Reel) — based on what performed best for this content type
- Content theme and suggested message angle (e.g., "Educational carousel: 5 things our audience asked most about [topic] this month")
- Design direction (e.g., "Bold text overlay on product image", "Real-person photo, minimal text", "Talking-head Reel, 15–30 seconds")
- Suggested caption tone (conversational / informative / storytelling)
- Recommended CTA (save, share, comment, link in bio)

For each planned Story sequence, specify:
- Suggested publish date
- Recommended number of frames (based on the completion rate data — if shorter sequences performed better, reflect that)
- Story concept and frame-by-frame outline (e.g., Frame 1: Hook question — Frame 2: Short answer — Frame 3: Swipe-up or poll)
- Interaction element recommended (poll, question sticker, countdown, link sticker)

PART 3: CALENDAR SUMMARY

Present the final calendar as a clean table with columns:
Date | Type (Post / Story) | Format | Theme | Message Angle | Design Direction | CTA

Below the table, write a short rationale (3–4 sentences) explaining the strategic logic behind the calendar — which content types are being prioritized and why, based on the performance data.

Output rules:
- Base all format and theme decisions on actual performance patterns from Part 1
- Do not invent content ideas that contradict what the data shows performs well
- Keep message angles and design directions brief but specific enough to be actionable for a content creator
- Flag any weeks where the recommended cadence may need adjustment based on seasonal relevance

Facebook Pages (Organic) prompts

Prompt 1: Weekly performance overview

Prepare a weekly organic performance summary for our Facebook Page.

Page: [Your Facebook Page Name]
Date range: [SPECIFY_DATE_RANGE]
Comparison period: previous matching period

1. Retrieve page-level data

Pull the following for the selected period and compare to the prior period:
- Total reach (unique accounts reached) and impressions
- Overall page engagement (reactions, comments, shares, clicks combined)
- Net follower change (new followers minus unfollows)
- Number of posts published, broken down by format (photo, video, Reel, link, text)
- Period-over-period percentage change for reach, engagement, and follower count

2. Summarize post-level performance

For all posts published in the period, calculate:

- Average reach per post by format
- Average engagement rate by format (total interactions ÷ reach)
- Average link clicks per post (for link posts only)
- Top 3 posts by engagement rate, with format, date, and key metrics

3. Structure the output as a visual report with these sections

- Section 1: Page scorecard
Total reach, impressions, engagement, and follower change with period-over-period delta for each.

- Section 2: Format comparison
Side-by-side comparison of photos, videos, Reels, and link posts on reach and engagement rate. Identify the format with the widest organic distribution.

- Section 3: Audience reaction breakdown
Show the split of reaction types (positive vs. other reactions) across all posts. Flag any posts that received a notably different reaction mix.

- Section 4: Top posts
Top 3 posts by engagement rate with their key metrics and format.

- Section 5: Follower trend
Week-by-week net follower change for the last 4 weeks. Flag whether this week's growth rate was above or below the recent average.

Presentation rules:
- Use "we" and "our" language throughout.
- Add a short "What this means" note under each section in plain language.
- Do not include recommendations about posting strategy or paid amplification.

Prompt 2: Identify your best organic content format

Analyze content format performance across our Facebook Page to identify which formats earn the most organic reach and meaningful engagement.

Page: [Your Facebook Page Name]
Date range: Last 60 days

1. Pull format-level data

Retrieve all posts published in the period and group by format: photo, video, Reel, link post, and text/status.

For each format, calculate:

- Average organic reach per post
- Average engagement rate (total reactions + comments + shares ÷ reach)
- Average link clicks per post (where applicable)
- Total posts published in this format
- Consistency score: standard deviation in engagement rate (lower = more predictable)

2. Video and Reel retention

For all video posts and Reels, retrieve:

- Average percentage of the video watched per viewer
- Total complete views (viewers who watched to the end) as a share of total views
- Share count per Reel vs. per regular video post

3. Identify standout posts per format

For each format, surface the single best-performing post by engagement rate.
Include its reach, reactions, shares, and any click data.

4. Output format

Return a comparison table with one row per content format.
Below the table, write a 3–5 sentence plain-language summary identifying:
- The format that currently reaches the most unique people per post
- The format that generates the most engaged reactions (saves, shares, comments)
- Any format where average watch time or completion rate signals strong content resonance
- Any format consistently underperforming on both reach and engagement
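The "consistency score" in step 1 is just the standard deviation of engagement rate within each format. A quick Python sketch with invented numbers shows the idea (a steady format scores low, a volatile one scores high):

```python
from statistics import mean, pstdev

# Hypothetical per-post engagement rates grouped by format.
rates_by_format = {
    "photo": [0.030, 0.032, 0.031, 0.029],   # steady performer
    "reel":  [0.010, 0.080, 0.025, 0.065],   # volatile performer
}

def format_summary(rates_by_format):
    """Average engagement rate and consistency score per format.
    Lower standard deviation = more predictable performance."""
    return {
        fmt: {"avg_er": round(mean(rates), 4), "consistency": round(pstdev(rates), 4)}
        for fmt, rates in rates_by_format.items()
    }
```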

Prompt 3: Diagnose a drop in organic reach

Investigate a recent drop in our Facebook Page's organic reach and identify the likely causes.

Page: [Your Facebook Page Name]
Date range: Last 6 weeks (retrieve week-by-week data)

1. Map the reach timeline

Pull weekly organic reach and impressions for the full 6-week window.

Identify:
- The specific week where reach dropped more than 15% from the prior week
- Whether impressions dropped proportionally or diverged from reach (a gap may signal a change in content distribution type)

2. Cross-reference with posting behavior

For the same 6-week window, retrieve:
- Number of posts published per week by format
- Average engagement rate per week
- Any weeks where engagement rate increased while reach dropped (or vice versa)

3. Audience and follower signals

Pull weekly net follower change alongside the reach trend.
Flag whether follower losses correlate with the reach drop or whether the audience size stayed stable.

4. React type and share trends

Review whether the share of positive reactions (Love, Haha, Wow) versus other reactions shifted in the weeks around the drop.
A drop in share count and positive reactions often precedes a sustained reach decline.

5. Output format

Return:
- A week-by-week table with reach, impressions, posts published, average engagement rate, and net followers
- A plain-language diagnosis (4–6 sentences) identifying the most likely cause based on the data patterns
- A flag for any single post or week where performance was clearly anomalous
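The "dropped more than 15% from the prior week" rule in step 1 is easy to verify against your own exported numbers. A minimal sketch, using invented weekly reach figures:

```python
# Hypothetical weekly organic reach figures, oldest week first.
weekly_reach = [42000, 44500, 43800, 35000, 34200, 33900]

def find_drop_weeks(series, threshold=0.15):
    """Return indices of weeks where reach fell more than `threshold`
    versus the prior week."""
    return [
        i for i in range(1, len(series))
        if (series[i - 1] - series[i]) / series[i - 1] > threshold
    ]
```

In this sample, week index 3 is the only one that breaches the 15% threshold; the smaller declines after it suggest the drop then stabilized rather than compounded.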

Prompt 4: Audience demographics and growth analysis

Analyze who our Facebook Page audience is today and how audience composition has shifted over the selected period.

Page: [Your Facebook Page Name]
Date range: [SPECIFY_DATE_RANGE]

1. Current audience profile

Pull the current follower breakdown by:
- Age group
- Gender
- Top 5 countries and top 5 cities by follower volume

Show each as a percentage of total followers.

2. Audience growth trend

Retrieve week-by-week new followers, unfollows, and net change for the period.

Identify:
- The week with the highest net follower gain
- Any week where unfollows exceeded new follows
- Whether the audience grew, shrank, or stayed flat overall

3. Reach vs. follower ratio

Calculate the average percentage of our total follower base reached by each post during the period.

Flag whether this ratio is improving or declining week over week — a declining ratio with stable follower count often signals reduced organic distribution.

4. Output format

Return:
- A demographic breakdown table (age, gender, top locations)
- A week-by-week follower trend table
- A 3–4 sentence summary highlighting which audience segments are growing fastest and whether reach efficiency relative to follower count is improving

Prompt 5: Find the best time to post

Use our historical Facebook Page performance data to identify which days and times consistently deliver the highest organic reach and engagement.

Page: [Your Facebook Page Name]
Date range: Last 90 days

1. Pull post-level timing data

Retrieve all posts published in the period with:
- Day of week and hour of publication
- Organic reach within 24 hours of publishing
- Total engagement within 24 hours (reactions + comments + shares + clicks)
- Engagement rate (engagement ÷ reach)

2. Group by day and time block

Segment posts into time blocks:
- Morning (06:00–10:00)
- Midday (10:00–14:00)
- Afternoon (14:00–18:00)
- Evening (18:00–22:00)

For each day × time-block combination, calculate:
- Average 24-hour reach per post
- Average engagement rate per post
- Number of posts published (flag any cell with fewer than 3 posts as low-confidence)

3. Surface the top windows

Identify the top 3 day + time combinations by reach and the top 3 by engagement rate. Note where these overlap and where they differ.

4. Format-level differences

If data allows, check whether video/Reel posts peak at a different time than photo or link posts.

5. Output format

Return a day-of-week × time-block heatmap table with average engagement rate in each cell.
Highlight the top 3 cells.
Below the table, write a 2–3 sentence plain-language summary of the findings.
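The day × time-block grouping and the low-confidence flag from step 2 can be sketched in a few lines of Python. The posts below are hypothetical (day, hour, engagement rate) tuples:

```python
from statistics import mean

# Hypothetical posts: (day_of_week, hour_of_publication, engagement_rate).
posts = [
    ("Tue", 9, 0.040), ("Tue", 8, 0.038), ("Tue", 7, 0.042),
    ("Thu", 19, 0.055), ("Thu", 20, 0.051),
]

def time_block(hour):
    """Map an hour to the time blocks used in the prompt."""
    if 6 <= hour < 10:
        return "morning"
    if 10 <= hour < 14:
        return "midday"
    if 14 <= hour < 18:
        return "afternoon"
    return "evening"

def heatmap_cells(posts, min_posts=3):
    """Average engagement rate per day × time-block cell,
    flagging any cell with fewer than `min_posts` posts as low-confidence."""
    cells = {}
    for day, hour, er in posts:
        cells.setdefault((day, time_block(hour)), []).append(er)
    return {
        key: {"avg_er": round(mean(v), 4), "low_confidence": len(v) < min_posts}
        for key, v in cells.items()
    }
```

Here the Thursday-evening cell would be flagged low-confidence because only two posts back it, even though its average engagement rate is higher.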

Prompt 6: Paid vs. organic performance comparison

Compare the organic and boosted (paid) performance of our Facebook Page posts to understand where organic content is strong and where paid amplification is adding real value.

Facebook Page: [Your Facebook Page Name]
Meta Ads account id: [Your Meta Ads account id]
Date range: Last 30 days

Note: This analysis requires both the Facebook Page Insights (organic) and Meta Ads data sources to be connected in Windsor.

1. Identify boosted posts

From the organic data, flag all posts that were also run as paid ads during the period.
For these posts, retrieve both the organic and paid reach, engagement, and click metrics separately.

2. Calculate the lift

For each boosted post, calculate:
- Paid reach added on top of organic reach (absolute and percentage increase)
- Organic engagement rate vs. total engagement rate (organic + paid combined)
- Cost per engagement for the paid portion

3. Organic-only posts

For posts that were not boosted, identify the top 5 by organic reach and engagement rate.
These are candidates for future boosting based purely on organic signal strength.

4. Output format

Return two tables:
- Table 1: Boosted posts — showing organic reach, paid reach, organic ER, total ER, and cost per engagement
- Table 2: Top organic-only posts — showing reach, engagement rate, and format

Below the tables, write a 3–4 sentence summary answering:
- Are boosted posts generating proportionally higher engagement, or is paid reach largely passive?
- Which organic posts showed strong enough signal to justify paid amplification?

Prompt 7: Build an engaging monthly content plan

Review the last 90 days of our Facebook Page performance and use the insights to produce a structured monthly publishing plan for both feed posts and Stories/Reels.

Page: [Your Facebook Page Name]
Analysis period: Last 90 days
Planning period: [TARGET MONTH — e.g., May 2025]

PART 1: WHAT THE DATA SHOWS

1. Top-performing post formats and themes

Pull all posts published in the analysis period and rank by engagement rate (reactions + comments + shares + clicks ÷ reach).

For the top 10 posts, retrieve:
- Format (photo, video, Reel, link post, text)
- Day and time of publishing
- Reach, engagement rate, share count, and link click count (where applicable)

Identify recurring characteristics among the top posts: Are they short or long videos? Emotional or informational content? Posts with strong visual content vs. text-heavy posts? Shared content vs. original content?

2. Video and Reel retention signals

For all video and Reel posts in the period, retrieve:
- Average percentage of the video watched
- Complete view rate
- Share count per post

Identify the video length and content style that retains viewers best.

3. Reaction quality breakdown

For the top 10 posts, show the breakdown of reaction types (Love, Haha, Wow, etc.).
Posts that generate Love or Wow reactions tend to reflect stronger emotional resonance than posts that only receive standard Likes.

4. Best posting windows

From all posts in the period, identify:
- The top 2 days of the week by average organic reach
- The top 2 days by average engagement rate
- The time block (morning / midday / afternoon / evening) with the strongest consistent performance

PART 2: MONTHLY PUBLISHING PLAN

Using the findings from Part 1, generate a structured monthly publishing calendar for [TARGET MONTH].

Publishing targets:
- Feed posts: [Specify your preferred weekly cadence — e.g., 3–4 per week]
- Reels or video posts: [Specify — e.g., 1–2 per week]
- Facebook Stories: [Specify — e.g., 2–3 sequences per week]

For each planned feed post, include:
- Suggested publish date and time (aligned with best-performing windows)
- Format recommendation (photo / video / Reel / link / text) — based on what drove the most reach and engagement for this content type
- Content angle (a one-sentence description of the post idea, grounded in themes that performed well — e.g., "Reaction post: share an audience result or testimonial with a strong visual")
- Tone guidance (informative / entertaining / conversational / inspirational)
- Suggested CTA (react, share, comment with a question, click through)

For each planned Story sequence, include:
- Suggested publish date
- Frame count recommendation (based on completion rate data — shorter if retention was weak)
- Story concept with a brief frame-by-frame outline
- One engagement mechanic (poll, question, swipe, countdown)

PART 3: PLAN SUMMARY TABLE

Present the full calendar as a table with columns:
Date | Type (Post / Story / Reel) | Format | Content Angle | Tone | CTA

Below the table, include a 3–4 sentence strategic note explaining:
- Which content types the plan leans into and why
- Any themes or formats the data suggests reducing
- Whether the plan reflects a shift from recent posting patterns based on performance evidence

Output rules:
- All recommendations must be traceable to actual performance patterns from Part 1
- Avoid generic content ideas — tie each suggestion to a specific signal from the data (e.g., "Reels with direct-to-camera openings outperformed text-overlay Reels by 2× on shares")
- Keep creative directions brief but specific enough for a content creator to act on without additional briefing

Google Search Console prompts

Prompt 1: Find your quickest-win ranking opportunities

Analyze our Google Search Console data to identify queries and pages with the best short-term potential for ranking improvement.

Property: [Your GSC Property URL]
Date range: Last 30 days

1. Pull query and page performance data

Retrieve all queries where:
- Average position is between 4 and 20 (already ranking, not yet prominent)
- Impressions are above a meaningful threshold (suggest 150+ to filter out noise)
- Clicks are low relative to impression volume (CTR below 3%)

For each query, include: the query text, the ranking page URL, average position, impressions, clicks, and CTR.

2. Prioritize by opportunity size

Rank the results by a simple opportunity score: impressions × (expected CTR at position 3 − current CTR).
This highlights queries where a modest position improvement would produce the largest traffic gain.

3. Segment by intent type

Group the qualifying queries into broad intent categories based on the query language:
- Informational (how, what, why, guide, tips)
- Commercial (best, vs, review, compare)
- Transactional (buy, price, order, book)

Identify which intent category has the most untapped potential in positions 4–20.

4. Output format

Return a ranked table of the top 15 opportunity queries with: query, page URL, position, impressions, CTR, and estimated monthly click gain from moving to position 3.
Below the table, write a 2–3 sentence plain-language summary of the dominant opportunity themes.
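The opportunity score from step 2 is straightforward to compute yourself. The sketch below assumes an expected CTR of about 10% at position 3; real CTR-by-position benchmarks vary by industry and SERP layout, so treat that constant as a placeholder. The query rows are invented:

```python
# Assumed expected CTR at position 3 (~10%); adjust to your own benchmark.
EXPECTED_CTR_POS3 = 0.10

# Hypothetical query rows exported from Search Console.
queries = [
    {"query": "crm pricing", "impressions": 5000, "clicks": 60, "position": 8.2},
    {"query": "what is crm", "impressions": 900,  "clicks": 40, "position": 5.1},
    {"query": "crm tips",    "impressions": 300,  "clicks": 2,  "position": 15.0},
]

def opportunity_score(row):
    """impressions × (expected CTR at position 3 − current CTR),
    floored at zero for queries already beating the benchmark."""
    ctr = row["clicks"] / row["impressions"]
    return row["impressions"] * max(EXPECTED_CTR_POS3 - ctr, 0)

ranked = sorted(queries, key=opportunity_score, reverse=True)
```

High-impression, low-CTR queries dominate this ranking by design: a modest position gain there moves the most absolute clicks.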

Prompt 2: Weekly SEO health check

Run a quick weekly health check on our organic search performance and flag any significant changes.

Property: [Your GSC Property URL]
Date range: Last 7 days vs. previous 7 days

1. Top-level metrics comparison

Pull total clicks, impressions, average CTR, and average position for both periods.
Calculate the percentage change for each metric.

2. Identify the biggest movers

Surface the top 5 queries or pages where:
- Clicks increased more than 20% week over week (positive signals to build on)
- Clicks decreased more than 20% week over week (issues to investigate)
- Average position dropped more than 3 positions (potential ranking losses)

3. Device performance

Break down clicks, impressions, and average position by device type (desktop, mobile, tablet) for both periods.
Flag any device type where the week-over-week shift is more than 10%.

4. Search type breakdown

Split performance by search type (Web, Image, Video, Discover, News) and flag any type with a notable shift in clicks or impressions.

5. Output format

Return:
- A top-level KPI comparison table (current vs. prior week with % change)
- A movers table showing the top 5 gainers and top 5 decliners by clicks
- A device breakdown table
- A 2–3 sentence plain-language flag summary highlighting the most important change to investigate

Prompt 3: Content decay audit

Identify pages on our site where organic search performance has meaningfully declined month over month, indicating content that may need refreshing.

Property: [Your GSC Property URL]
Date range: Compare last month vs. the previous month

1. Page-level comparison

For all pages that had at least 10 clicks in the earlier period, retrieve:
- Clicks in both periods
- Impressions in both periods
- Average position in both periods
- CTR in both periods

2. Flag declining pages

Identify pages where either of the following is true:
- Clicks dropped more than 25%
- Average position fell more than 5 places

Sort results by absolute click loss (largest loss first), as these represent the highest-impact content decay cases.

3. Diagnose the pattern

For each flagged page, assess the likely driver of decline:
- Position dropped significantly → ranking loss, likely needs content or authority update
- Impressions dropped while position held → lower search demand for the topic (seasonal or trend-driven)
- Position held but CTR dropped → title or meta description may need refreshing

4. Output format

Return a table of flagged pages with: URL, clicks (both periods), impressions (both periods), position change, CTR change, and a one-word diagnosis (Ranking / Demand / CTR).
Prioritize by absolute click loss.
Below the table, write a 3–4 sentence summary identifying the dominant decay pattern across the site.
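The one-word diagnosis in step 3 is essentially a small decision rule. A hedged sketch: the 5-place position threshold mirrors the flagging rule above, while the 25% cutoffs for impressions and CTR are assumptions you should tune to your site:

```python
def diagnose(position_change, impressions_change_pct, ctr_change_pct):
    """One-word diagnosis for a declining page, following the pattern rules above.
    position_change: positive = position number rose = ranking got worse.
    Percentage changes are fractions, e.g. -0.40 for a 40% drop."""
    if position_change > 5:
        return "Ranking"   # significant ranking loss: content/authority update
    if impressions_change_pct < -0.25:   # assumed threshold
        return "Demand"    # search demand fell while position held
    if ctr_change_pct < -0.25:           # assumed threshold
        return "CTR"       # snippet (title/meta) likely needs refreshing
    return "Mixed"
```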

Prompt 4: Keyword cannibalization check

Identify cases in our Google Search Console data where multiple pages are competing for the same query, splitting ranking potential and reducing overall performance.

Property: [Your GSC Property URL]
Date range: Last 60 days

1. Find multi-page queries

Pull all queries where two or more distinct page URLs are receiving impressions.

For each query with multiple ranking URLs, retrieve:
- Each page URL ranking for the query
- Average position per URL for that query
- Impressions per URL for that query
- Clicks per URL for that query

2. Score the cannibalization risk

Flag query-page pairs where:
- Two URLs are within 5 positions of each other for the same query
- Combined impressions for the query are above 100 (worth fixing)

Higher combined impression volume = higher priority to resolve.

3. Identify the preferred URL

For each flagged query, indicate which page currently holds the stronger position and which appears to be the weaker/duplicate.

4. Output format

Return a table of all flagged query-URL pairs, sorted by combined impression volume. Columns: query, URL 1, URL 1 position, URL 2, URL 2 position, combined impressions, suggested preferred URL.
Below the table, write a 2–3 sentence summary of how widespread the issue is and which content themes show the most cannibalization.
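The flagging logic in step 2 ("two URLs within 5 positions, combined impressions above 100") can be reproduced directly from a query/page export. A minimal Python sketch with invented rows:

```python
from itertools import combinations

# Hypothetical (query, url, avg_position, impressions) rows from Search Console.
rows = [
    ("crm software", "/crm-guide", 4.0, 800),
    ("crm software", "/crm-tools", 6.5, 400),
    ("crm pricing",  "/pricing",   3.0, 900),
]

def find_cannibalization(rows, max_gap=5, min_impressions=100):
    """Flag queries where two URLs rank within `max_gap` positions of each
    other and combined impressions exceed `min_impressions`."""
    by_query = {}
    for q, url, pos, imp in rows:
        by_query.setdefault(q, []).append((url, pos, imp))
    flags = []
    for q, pages in by_query.items():
        for (u1, p1, i1), (u2, p2, i2) in combinations(pages, 2):
            if abs(p1 - p2) <= max_gap and i1 + i2 > min_impressions:
                flags.append({
                    "query": q,
                    "urls": (u1, u2),
                    "preferred": u1 if p1 < p2 else u2,  # stronger position wins
                    "combined_impressions": i1 + i2,
                })
    return sorted(flags, key=lambda f: f["combined_impressions"], reverse=True)
```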

Prompt 5: Mobile vs. desktop performance gap

Compare how our site performs in Google Search for mobile users versus desktop users, and identify pages where the gap is large enough to warrant action.

Property: [Your GSC Property URL]
Date range: Last 30 days

1. Overall device split

Pull total clicks, impressions, average CTR, and average position separately for desktop and mobile.
Calculate the ratio of mobile clicks to desktop clicks as a share of total traffic.

2. Page-level gaps

For all pages with at least 100 total impressions, compare mobile vs. desktop position and CTR.

Flag pages where:
- Mobile position is more than 5 places lower than desktop position (mobile ranking underperformance)
- Mobile CTR is more than 30% lower than desktop CTR at a similar position (mobile snippet or UX issue)

3. Query-level gaps

Surface the top 10 queries by impression volume where mobile position is significantly worse than desktop position.
These are the highest-traffic queries with the clearest mobile optimization gap.

4. Output format

Return:
- A top-level device comparison table (clicks, impressions, CTR, avg position for desktop vs. mobile)
- A page-level gap table sorted by the size of the mobile vs. desktop position difference
- A 3-sentence summary flagging the most important mobile-specific issues and which page types show the largest gap

Prompt 6: Branded vs. non-branded traffic split

Separate branded from non-branded organic search performance in our Google Search Console data to understand how reliant we are on brand recognition versus genuine SEO strength.

Property: [Your GSC Property URL]
Brand terms to include: [LIST YOUR BRAND NAME AND COMMON VARIATIONS]
Date range: Last 30 days

1. Split all queries

Classify every query in the dataset into:
- Branded: contains the brand name or common brand variations
- Non-branded: everything else

For each group, calculate total clicks, total impressions, average CTR, and average position.

2. Trend over time

Pull monthly totals for branded and non-branded clicks over the last 6 months.
Show whether non-branded traffic is growing, stable, or shrinking as a share of total organic clicks.

3. Non-branded opportunity

Within the non-branded group, identify the top 10 queries by impressions where the average position is worse than 10 (i.e., ranking outside the top 10 results). These represent topics where the site has some visibility but is not yet earning significant traffic.

4. Output format

Return:
- A summary table comparing branded vs. non-branded on all top-level metrics
- A month-by-month trend table showing the non-branded share of total clicks
- A table of top 10 non-branded opportunity queries
- A 2–3 sentence plain-language conclusion on how dependent the site is on brand recognition for its organic traffic
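The branded/non-branded split in step 1 is a simple pattern match against your brand-term list. A sketch in Python; the brand variations and query rows below are illustrative, so substitute your own name, misspellings, and spacings:

```python
import re

# Hypothetical brand variations; extend with your own misspellings/spacings.
BRAND_TERMS = ["windsor", "winsor"]
_brand_pattern = re.compile(
    "|".join(re.escape(t) for t in BRAND_TERMS), re.IGNORECASE
)

def split_branded(queries):
    """Partition query rows into branded and non-branded groups."""
    branded = [q for q in queries if _brand_pattern.search(q["query"])]
    non_branded = [q for q in queries if not _brand_pattern.search(q["query"])]
    return branded, non_branded
```

One design note: matching substrings (rather than whole words) deliberately catches variants like "windsor.ai connector", but can over-match if your brand name is a common word; switch to word-boundary patterns (`\b`) in that case.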

Prompt 7: Suggest a content plan from high-value unranked queries

Use our Google Search Console data to identify high-potential search queries where our site has no strong ranking page, and turn those gaps into a prioritized brief for new content creation.

Property: [Your GSC Property URL]
Date range: Last 30 days

PART 1: IDENTIFY CONTENT GAPS

1. Find high-impression, weak-ranking queries

Pull all queries where:
- Impressions are above [specify a meaningful threshold — e.g., 200] in the period
- Average position is above 20 (we are either barely ranking or Google is showing us for this query with a poorly matched page)
- Clicks are fewer than 10 (we are not capturing meaningful traffic despite some visibility)

These queries represent genuine search demand that our site is failing to address.

2. Find queries with no well-matched page

From the data, retrieve the page URL that currently ranks for each query.

Flag queries where:
- The ranking page is clearly not a dedicated match for the query (e.g., the homepage or a category page is ranking for a specific informational query)
- Multiple weak pages are competing for the same query (cannibalization signal)

These are the clearest cases where a new, purpose-built page would likely outperform the current default.

3. Cluster queries into content topics

Group the qualifying queries into thematic clusters — queries about the same topic or user need that could be addressed by a single well-structured page.

For each cluster, calculate:
- Total impressions across all queries in the cluster
- Average position across the cluster
- Estimated traffic potential if the cluster page ranked in position 3–5 (use a 5–10% CTR estimate)

PART 2: PRIORITIZE THE CONTENT OPPORTUNITIES

Rank the topic clusters by a simple priority score based on:
1. Total impression volume (higher = more demand)
2. Average position (closer to position 20 = closer to ranking, quicker win)
3. Commercial relevance (use query language to infer intent: transactional and commercial queries score higher than purely informational ones)

Select the top 8–10 clusters to brief as new content pieces.

PART 3: CONTENT BRIEF GENERATION

For each of the top clusters, produce a content brief containing:

- Proposed page title (optimized for the primary query in the cluster)
- Target queries (list the 3–5 most impression-rich queries the page should rank for)
- Search intent summary (1–2 sentences: what is the user looking for when they type these queries? What stage of the decision journey are they at?)
- Suggested content format (long-form guide / comparison page / FAQ page / tool or calculator / product/service landing page)
- Recommended page structure (a brief outline: 4–6 section headings the page should cover to fully address the user's intent)
- Internal linking opportunity (which existing high-performing page on our site is most relevant to link from?)
- Estimated monthly traffic potential if ranked at position 3–5

PART 4: OUTPUT FORMAT

Return:
- A gap analysis table showing the top clusters with their total impressions, average position, and estimated traffic potential
- A content brief for each of the top 8–10 clusters in a clear, structured format
- A 3–4 sentence editorial summary explaining the dominant content themes in the gap list and whether the opportunities lean more informational, commercial, or transactional — which should inform where to start

Output rules:
- All briefs must be grounded in actual query data from the GSC account
- Do not suggest content topics that already have a well-ranking page (position 1–10 with solid CTR)
- Keep the briefs specific enough that a content writer could begin outlining the page without additional research
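The cluster-level math in Part 1, step 3 (total impressions, average position, and traffic potential at the prompt's 5–10% CTR estimate) can be sketched as follows. The cluster names and query rows are invented for illustration:

```python
from statistics import mean

# Hypothetical clusters: topic -> list of (impressions, avg_position) query rows.
clusters = {
    "crm integrations": [(900, 24.0), (600, 28.5), (300, 31.0)],
    "data pipelines":   [(400, 45.0), (250, 38.0)],
}

def cluster_stats(clusters, ctr_low=0.05, ctr_high=0.10):
    """Total impressions, average position, and estimated monthly click range
    if the cluster page ranked position 3-5 (5-10% CTR estimate)."""
    out = {}
    for topic, rows in clusters.items():
        total_imp = sum(imp for imp, _ in rows)
        out[topic] = {
            "total_impressions": total_imp,
            "avg_position": round(mean(pos for _, pos in rows), 1),
            "est_monthly_clicks": (
                round(total_imp * ctr_low),
                round(total_imp * ctr_high),
            ),
        }
    return out
```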

TikTok Ads prompts

Prompt 1: Weekly TikTok Ads performance report

Prepare a clear weekly performance overview of our TikTok Ads account.

Account: [Your TikTok Ads Account Name]
Date range: [SPECIFY_DATE_RANGE]
Comparison period: previous matching period

1. Retrieve account-level data

Pull total spend, impressions, clicks, conversions, and revenue for the selected period.
Calculate CTR, CPC, CPM, CPA, and ROAS.
Show period-over-period percentage change for each KPI.

2. Campaign-level breakdown

For all active campaigns, retrieve:
- Spend
- Impressions and reach
- Clicks and CTR
- Conversions and CPA
- ROAS (where conversion value is tracked)

Rank campaigns from highest to lowest ROAS.

3. Placement and format breakdown

Split performance by:
- Ad placement type (In-Feed, TopView, Spark Ads vs. standard)
- Ad format (video, collection, carousel)

Highlight which placement and format combination delivers the best CPA and ROAS.

4. Audience segment overview

Break down conversions and spend by the top audience dimensions available (age group, gender, and interest category where available).

5. Structure the output as a visual report with these sections

- Section 1: Account scorecard — spend, ROAS, CPA, CTR, CPM with period-over-period changes
- Section 2: Campaign rankings — table ordered by ROAS
- Section 3: Placement & format insights — which combination is most efficient
- Section 4: Audience highlights — top 3 segments by conversion volume and by CPA
- Section 5: Week-over-week trend — 4-week trend for spend, ROAS, and CPA

Presentation rules:
- Use "we" and "our" language throughout.
- Keep language non-technical and results-focused.
- Add a "What this means" note under each section.
- Include creative production recommendations.
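For reference, the KPI formulas the scorecard in step 1 relies on are standard paid-media ratios. A minimal sketch (the sample figures in the test are arbitrary):

```python
def kpis(spend, impressions, clicks, conversions, revenue):
    """Core paid-media KPIs used in the account scorecard."""
    return {
        "CTR": clicks / impressions,            # click-through rate
        "CPC": spend / clicks,                  # cost per click
        "CPM": spend * 1000 / impressions,      # cost per 1,000 impressions
        "CPA": spend / conversions,             # cost per acquisition
        "ROAS": revenue / spend,                # return on ad spend
    }
```

Note these divisions assume non-zero denominators; in a real pipeline you would guard against days with zero clicks or conversions.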

Prompt 2: Creative performance and video retention analysis

Analyze how our TikTok Ads creatives are performing, with a focus on video retention and identifying which creative elements are driving results.

Account: [Your TikTok Ads Account Name]
Date range: [SPECIFY_DATE_RANGE]

1. Pull ad-level creative performance data

For all active ads, retrieve:
- Impressions, clicks, CTR
- Video views, average video watch time, and video completion rate
- Conversions and CPA
- Engagement actions (likes, comments, shares, follows from ad)

2. Video retention analysis

For all video ads, calculate:
- The percentage of viewers who watched more than 50% of the video
- The percentage of viewers who watched to completion
- The ratio of watch time to video length (attention efficiency)

Flag ads where the completion rate is significantly below the account average — these are candidates for creative refresh.

3. Hook effectiveness

Identify the ads with the highest 2-second and 6-second view rates (early retention).
Compare their CTR and CPA to ads with lower early retention.
A pattern of high early retention + low CPA signals an effective hook formula.

4. Spark Ads vs. standard ads

If Spark Ads are running, compare their average CTR, completion rate, and CPA against standard in-feed ads.

5. Output format

Return:
- An ad-level performance table ranked by completion rate, with columns for all key metrics
- A top 5 / bottom 5 list by completion rate with a note on what distinguishes the two groups
- A plain-language summary (3–5 sentences) identifying the creative pattern that appears to perform best and any ads showing early signs of fatigue (declining watch time or rising CPA)

Prompt 3: Audience targeting efficiency analysis

Analyze how different audience segments are converting in our TikTok Ads account to identify where spend is most and least efficient.

Account: [Your TikTok Ads Account Name]
Date range: Last 30 days

1. Break down performance by audience dimensions

Retrieve spend, impressions, clicks, conversions, and CPA segmented by:
- Age group
- Gender
- Interest category (where available)
- Device type

2. Identify high-efficiency segments

Flag audience segments where CPA is more than 20% below the account average. These are the segments delivering disproportionate value.

3. Identify low-efficiency segments

Flag audience segments where:
- Spend is more than 5% of total account spend, AND
- CPA is more than 30% above the account average

These represent the clearest budget reallocation opportunities.

4. Frequency and saturation check

For each major audience segment, retrieve average frequency (impressions ÷ estimated reach).

Flag any segment where frequency is above 4 and CPA is trending upward — this may indicate audience saturation.

5. Output format

Return:
- A segmentation table with rows for each audience dimension breakdown, showing spend, conversions, CPA, and frequency
- A flagged list of high-efficiency and low-efficiency segments
- A 3–4 sentence plain-language summary identifying the strongest and weakest audience bets, and any segment showing signs of fatigue
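The flagging rules in steps 2 and 3 are mechanical, so you can verify what the LLM returns. A sketch under the prompt's own thresholds (20% below average CPA = high efficiency; spend share over 5% with CPA 30%+ above average = low efficiency), with made-up segment data:

```python
# Classify audience segments per steps 2-3 of the prompt. Sample data only.
segments = [
    {"segment": "18-24", "spend": 400.0, "conversions": 40},
    {"segment": "25-34", "spend": 900.0, "conversions": 45},
    {"segment": "35-44", "spend": 700.0, "conversions": 20},
]

total_spend = sum(s["spend"] for s in segments)
total_conv = sum(s["conversions"] for s in segments)
avg_cpa = total_spend / total_conv  # account-level CPA

high_eff, low_eff = [], []
for s in segments:
    cpa = s["spend"] / s["conversions"]
    spend_share = s["spend"] / total_spend
    if cpa <= 0.8 * avg_cpa:                      # 20%+ below average
        high_eff.append(s["segment"])
    elif spend_share > 0.05 and cpa >= 1.3 * avg_cpa:  # material spend, 30%+ above
        low_eff.append(s["segment"])
```

A segment can never satisfy both branches (20% below and 30% above average), so the `elif` ordering is safe.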

Prompt 4: Detect creative fatigue before it kills performance

Identify TikTok Ads creatives that are showing signs of fatigue so we can act before performance drops significantly.

Account: [Your TikTok Ads Account Name]
Date range: Last 3 weeks (retrieve daily data)

1. Track daily performance trends per ad

For all ads that have been running for more than 7 days, pull daily data for:
- CTR
- Video completion rate
- CPA
- Frequency (impressions ÷ estimated reach)

2. Calculate performance shift

For each ad, compare the last 3-day average to the first 3-day average for the same metrics.

Flag any ad where:
- CTR has declined more than 25%
- Completion rate has declined more than 20%
- CPA has increased more than 30%

3. Frequency correlation

Plot the relationship between rising frequency and declining CTR or completion rate for flagged ads.

Identify the frequency level at which performance typically begins to erode for our account (this becomes a practical early-warning threshold).

4. Output format

Return:
- A fatigue monitoring table showing all flagged ads with their metric changes and current frequency level
- A column estimating days until replacement is recommended, based on the rate of decline
- A 2–3 sentence summary of the account-wide fatigue threshold (e.g., "Performance typically begins declining at a frequency of X, approximately Y days after launch")
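Step 2's comparison (last 3-day average vs. first 3-day average, with the 25/20/30% thresholds) is easy to reproduce if you want to audit the flags yourself. A minimal sketch with illustrative daily series:

```python
# Fatigue check per step 2 of the prompt: compare the last 3-day average to
# the first 3-day average for each metric and report breached thresholds.
def window_avg(series, last=False):
    """Mean of the first three (or, with last=True, last three) values."""
    chunk = series[-3:] if last else series[:3]
    return sum(chunk) / len(chunk)

def fatigue_flags(ctr, completion, cpa):
    """Return which thresholds one ad's daily series has breached."""
    flags = []
    if window_avg(ctr, last=True) < 0.75 * window_avg(ctr):              # CTR down >25%
        flags.append("ctr")
    if window_avg(completion, last=True) < 0.80 * window_avg(completion):  # completion down >20%
        flags.append("completion")
    if window_avg(cpa, last=True) > 1.30 * window_avg(cpa):              # CPA up >30%
        flags.append("cpa")
    return flags

# Ten days of sample data for one ad
ctr = [2.0, 2.1, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2]
completion = [0.40, 0.41, 0.39, 0.38, 0.37, 0.36, 0.35, 0.34, 0.34, 0.33]
cpa = [8.0, 8.2, 7.8, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0]
flags = fatigue_flags(ctr, completion, cpa)
```

Here CTR (down 35%) and CPA (up ~44%) breach their thresholds while completion rate (down ~16%) does not, so the ad is flagged on two of three signals.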

Prompt 5: Budget allocation and campaign efficiency audit

Review how our TikTok Ads budget is distributed across campaigns and identify whether spend is concentrated on the best-performing areas.

Account: [Your TikTok Ads Account Name]
Date range: Last 30 days

1. Retrieve campaign-level spend and performance

For all campaigns, pull:
- Total spend and share of total account spend (%)
- Conversions and conversion share (%)
- ROAS and CPA
- Impressions and impression share (%)

2. Build a spend-to-performance matrix

Flag campaigns that fall into each of these categories:

- High spend, high ROAS (scale candidates — these are working)
- High spend, low ROAS (efficiency problems — consider reducing budget)
- Low spend, high ROAS (underfunded opportunities — consider increasing budget)
- Low spend, low ROAS (candidates for pause or restructure)

3. Pacing and budget utilization

Identify any campaigns that are consistently hitting their daily budget cap. These campaigns may have more room to scale without efficiency loss.

Conversely, flag campaigns spending significantly below their budget cap with mediocre results.

4. Output format

Return:
- A campaign efficiency matrix table sorted by spend share, with ROAS, CPA, and a classification label (Scale / Reduce / Grow / Pause)
- A budget reallocation summary (3–4 sentences) identifying the highest-confidence moves based purely on the data
- A flag for any campaigns constrained by budget cap that could absorb more spend at current efficiency levels
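The four-quadrant matrix in step 2 is just two comparisons against account averages. A sketch of the Scale / Reduce / Grow / Pause classification, assuming "high spend" means at or above an equal share of budget and "high ROAS" means at or above the account average (both are illustrative cutoffs; adjust to your own thresholds):

```python
# Spend-to-performance matrix per step 2 of the prompt. Sample campaigns only.
campaigns = [
    {"name": "Prospecting", "spend": 5000.0, "revenue": 20000.0},
    {"name": "Retargeting", "spend": 4000.0, "revenue": 6000.0},
    {"name": "Spark test",  "spend": 500.0,  "revenue": 2500.0},
    {"name": "Legacy",      "spend": 500.0,  "revenue": 400.0},
]

total_spend = sum(c["spend"] for c in campaigns)
equal_share = 1 / len(campaigns)                       # assumed "high spend" cutoff
account_roas = sum(c["revenue"] for c in campaigns) / total_spend

labels = {}
for c in campaigns:
    high_spend = c["spend"] / total_spend >= equal_share
    high_roas = c["revenue"] / c["spend"] >= account_roas
    if high_spend and high_roas:
        labels[c["name"]] = "Scale"    # working at volume
    elif high_spend:
        labels[c["name"]] = "Reduce"   # efficiency problem
    elif high_roas:
        labels[c["name"]] = "Grow"     # underfunded opportunity
    else:
        labels[c["name"]] = "Pause"    # candidate for restructure
```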

Prompt 6: TikTok Ads vs. other paid channels comparison (data blending)

Note: This analysis requires multiple ad platform connectors to be active in Windsor.

Compare TikTok Ads performance against our other paid advertising channels to assess its relative efficiency and role in the media mix.

Accounts to compare: TikTok Ads, [Meta Ads / Google Ads — specify which are connected]
Date range: Last 30 days

1. Pull comparable metrics across channels

For each channel, retrieve:
- Total spend
- Total impressions
- Total clicks and CTR
- Total conversions and CPA
- ROAS (where conversion value is tracked)
- CPM

2. Normalize for fair comparison

Calculate each channel's share of:
- Total cross-channel spend
- Total cross-channel conversions
- Total cross-channel revenue (if tracked)

This shows whether TikTok Ads is punching above or below its weight relative to its budget share.

3. Audience overlap consideration

Note whether TikTok Ads is reaching a meaningfully different audience age group than the other channels. Channels targeting different demographics serve different roles in the funnel.

4. Cost efficiency comparison

Rank channels by CPM, CPC, CPA, and ROAS in a single comparison table.

5. Output format

Return:
- A cross-channel comparison table with all key metrics, one row per channel
- A spend-share vs. conversion-share analysis (is TikTok over- or under-indexed?)
- A 3–4 sentence plain-language summary of where TikTok Ads fits in the current media mix and whether the data suggests it deserves more or less budget relative to other channels
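The over/under-indexing question in step 2 has a clean numeric form: divide each channel's conversion share by its spend share. An index above 1 means the channel is punching above its budget weight. A sketch with illustrative figures:

```python
# Spend-share vs. conversion-share index per step 2 of the prompt.
channels = {
    "TikTok Ads": {"spend": 3000.0, "conversions": 150},
    "Meta Ads":   {"spend": 5000.0, "conversions": 200},
    "Google Ads": {"spend": 2000.0, "conversions": 100},
}

total_spend = sum(c["spend"] for c in channels.values())
total_conv = sum(c["conversions"] for c in channels.values())

index = {}
for name, c in channels.items():
    spend_share = c["spend"] / total_spend
    conv_share = c["conversions"] / total_conv
    # >1.0 = over-indexing (more conversion share than spend share)
    index[name] = round(conv_share / spend_share, 2)
```

In this sample, TikTok takes 30% of spend but 33% of conversions (index 1.11), so it is slightly over-indexed relative to its budget share.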

LinkedIn Ads prompts

Prompt 1: Weekly LinkedIn Ads performance report

Prepare a weekly LinkedIn Ads performance overview for our account.

Account: [Your LinkedIn Ads Account Name]
Date range: [SPECIFY_DATE_RANGE]
Comparison period: previous matching period

1. Retrieve account-level KPIs

Pull total spend, impressions, clicks, conversions (leads or other tracked actions), and revenue for the period.
Calculate CTR, CPC, CPM, CPL (cost per lead), and conversion rate.
Show period-over-period percentage change for each.

2. Campaign-level breakdown

For all active campaigns, retrieve spend, impressions, clicks, conversions, and CPL.
Rank campaigns from lowest to highest CPL (most efficient first).

3. Ad format split

Break down spend, CTR, and CPL by ad format: Sponsored Content (single image), Video Ads, Carousel Ads, Text Ads, Dynamic Ads, and Lead Gen Forms.

4. Structure the output as a visual report with these sections

- Section 1: Account scorecard — spend, CPL, CTR, CPC, CPM, conversion rate with period-over-period changes
- Section 2: Campaign rankings — table ordered by CPL
- Section 3: Format efficiency — which ad formats deliver the lowest CPL and highest CTR
- Section 4: Lead Gen Form performance (if applicable) — form open rate, submission rate, CPL from forms vs. landing page campaigns
- Section 5: Week-over-week trend — 4-week trend line for spend, CPL, and conversion rate

Presentation rules:
- Use "we" and "our" language.
- Add a short "What this means" note under each section.
- Keep language accessible to non-technical stakeholders.
- Include bid strategy or campaign structure recommendations.

Prompt 2: Audience targeting efficiency

Analyze how our LinkedIn Ads are performing across different professional audience segments to identify where we are getting the best and worst returns.

Account: [Your LinkedIn Ads Account Name]
Date range: Last 30 days

1. Break down performance by professional targeting dimensions

Retrieve spend, clicks, conversions, and CPL segmented by:
- Job seniority (e.g., Director, VP, C-Suite, Manager, Senior IC)
- Job function (e.g., Marketing, IT, Finance, Operations, Sales)
- Industry
- Company size

2. Identify high-efficiency segments

Flag any segment combination where CPL is more than 20% below the account average. These segments are delivering above-average lead quality at lower cost.

3. Identify wasteful segments

Flag segments where:
- Spend share exceeds 8% of account total, AND
- CPL is more than 40% above account average, AND
- Conversion rate is below account average

These are the clearest candidates for exclusion or bid reduction.

4. Lead quality proxy

If multiple conversion types are tracked (e.g., form submission vs. demo request vs. trial sign-up), compare conversion type mix across segments. Segments driving higher-intent actions (demo, trial) are more valuable even at a higher CPL.

5. Output format

Return:
- A targeting efficiency table organized by dimension (seniority, function, industry, company size), showing spend, conversions, CPL, and conversion rate for each
- A flagged list of top 3 high-efficiency and top 3 wasteful segments
- A 3–4 sentence summary identifying the professional profile that best describes our highest-converting audience

Prompt 3: Lead Gen Form vs. landing page conversion comparison

Compare the performance of LinkedIn Lead Gen Forms against campaigns driving traffic to our website landing pages to understand which approach generates leads more efficiently.

Account: [Your LinkedIn Ads Account Name]
Date range: Last 60 days

1. Identify Lead Gen Form campaigns

Pull all campaigns where Lead Gen Forms are the conversion mechanism.
For each, retrieve:
- Spend
- Form impressions (people who saw the ad)
- Form opens (people who clicked to open the form)
- Form submissions (completed leads)
- Form open rate (opens ÷ impressions)
- Form completion rate (submissions ÷ opens)
- CPL

2. Landing page campaigns

Pull all campaigns driving to website landing pages.
For each, retrieve:
- Spend
- Clicks
- Conversions (as reported by LinkedIn conversion tracking)
- CTR
- Conversion rate (conversions ÷ clicks)
- CPL

3. Side-by-side comparison

Compare the two groups on:
- Average CPL
- Average conversion rate
- Cost per click (for landing page campaigns) vs. cost per form open (for LGF campaigns)
- Total lead volume

4. Segment the comparison by audience

Where possible, compare LGF vs. landing page CPL within the same audience segment (seniority, industry, etc.) to control for audience quality differences.

5. Output format

Return:
- A side-by-side comparison table for LGF campaigns vs. landing page campaigns
- A breakdown by audience segment where data allows
- A 3–5 sentence summary with a plain-language conclusion on which approach is more cost-efficient for our account and whether this varies by audience type
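The side-by-side comparison in step 3 boils down to two funnels with different denominators. A sketch of the core CPL arithmetic, using made-up campaign totals:

```python
# LGF vs. landing-page comparison per step 3 of the prompt. Sample totals only.
lgf = {"spend": 2000.0, "opens": 500, "submissions": 100}   # Lead Gen Form campaigns
lp = {"spend": 2000.0, "clicks": 800, "conversions": 64}    # landing-page campaigns

lgf_cpl = lgf["spend"] / lgf["submissions"]
lgf_completion_rate = lgf["submissions"] / lgf["opens"]     # submissions ÷ opens

lp_cpl = lp["spend"] / lp["conversions"]
lp_conv_rate = lp["conversions"] / lp["clicks"]             # conversions ÷ clicks

winner = "Lead Gen Forms" if lgf_cpl < lp_cpl else "Landing pages"
```

At equal spend, the form campaigns here produce leads at $20 vs. $31.25, the kind of gap the prompt's summary should surface; remember the lead-quality caveat from step 4 before acting on CPL alone.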

Prompt 4: Creative and ad format fatigue audit

Review our LinkedIn Ads creatives for signs of audience fatigue and identify which formats are maintaining performance.

Account: [Your LinkedIn Ads Account Name]
Date range: Last 6 weeks (retrieve weekly data)

1. Track performance over time per ad

For all ads running for more than 2 weeks, pull weekly data for:
- CTR
- Engagement rate ((clicks + reactions + comments + shares) ÷ impressions)
- Frequency (impressions ÷ estimated reach)
- CPL

2. Flag fatigue signals

Identify ads where, in the most recent 2-week period compared to the first 2 weeks of running:
- CTR has dropped more than 20%
- CPL has increased more than 30%
- Engagement rate has declined more than 25%

3. Format comparison under fatigue conditions

Compare how quickly Single Image, Video, and Carousel formats show fatigue signals.
LinkedIn audiences are typically smaller and more professionally defined than those on other platforms, so fatigue can set in faster.

4. Frequency benchmarks

Calculate the average frequency at which each format begins to show fatigue in our account.
This becomes a practical refresh trigger threshold.

5. Output format

Return:
- A fatigue tracking table showing all ads running more than 2 weeks, with their week-over-week CTR and CPL trend and current frequency
- A flagged list of ads requiring creative replacement
- A 2–3 sentence summary of the typical frequency threshold at which LinkedIn ad fatigue sets in for our campaigns, broken down by format

Prompt 5: Cost efficiency audit and budget reallocation plan

Audit how efficiently our LinkedIn Ads budget is being used across campaigns and identify the most data-supported budget moves.

Account: [Your LinkedIn Ads Account Name]
Date range: Last 30 days

1. Campaign spend and performance matrix

For all campaigns, pull:
- Total spend and % share of account total
- Conversions (leads) and % share of account total
- CPL
- CTR and CPC
- Conversion rate

2. Classify each campaign

Assign each campaign to one of four categories based on its spend share and CPL relative to the account average:

- High spend + low CPL: Top performers — protect and consider scaling
- High spend + high CPL: Budget risk — requires optimization or spend reduction
- Low spend + low CPL: Hidden gems — may be worth increasing investment
- Low spend + high CPL: Lowest priority — deprioritize or pause

3. Budget concentration risk

Calculate what percentage of total spend flows to the top 2 campaigns. If more than 60% of spend is concentrated in 2 campaigns, flag this as concentration risk.

4. Pacing check

Flag any campaigns that have been budget-limited for more than 3 days in the past 2 weeks (hitting daily budget caps) — these may have room to scale at current efficiency levels.

5. Output format

Return:
- A campaign matrix table with classification label for each campaign
- A budget allocation summary showing current vs. recommended spend direction for each category
- A 3–4 sentence plain-language conclusion on the biggest budget efficiency gains available, based purely on performance data
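Step 3's concentration check is a one-liner worth verifying: sum the two largest campaign budgets and compare against the 60% threshold. A sketch with illustrative spend figures:

```python
# Budget concentration risk per step 3 of the prompt. Sample campaign spend.
spend = {"ABM - C-Suite": 4200.0, "Webinar promo": 2600.0,
         "Retargeting": 1800.0, "Brand awareness": 1400.0}

total = sum(spend.values())
top2 = sum(sorted(spend.values(), reverse=True)[:2])
top2_share = top2 / total
concentration_risk = top2_share > 0.60   # prompt's flag threshold
```

Here the top two campaigns absorb 68% of spend, so the account would be flagged for concentration risk.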

Prompt 6: Campaign performance diagnosis and optimization plan

Analyze all active campaigns in our LinkedIn Ads account, clearly identify the top and worst performers, and produce a prioritized, data-backed optimization plan.

Account: [Your LinkedIn Ads Account Name]
Date range: Last 30 days

PART 1: CAMPAIGN PERFORMANCE RANKING

1. Pull campaign-level performance data

For all active campaigns, retrieve:
- Total spend and % share of account budget
- Impressions and reach
- Clicks and CTR
- Conversions (leads or other tracked actions) and conversion rate
- CPL (cost per lead) or CPA
- Engagement rate (for Sponsored Content campaigns)
- Lead Gen Form completion rate (for LGF campaigns, where applicable)

2. Rank campaigns

Create two ranked lists:

TOP PERFORMERS — campaigns where:
- CPL is more than 15% below the account average, AND
- Conversion volume is meaningful (at least 5 conversions in the period)

WORST PERFORMERS — campaigns where:
- CPL is more than 40% above the account average, OR
- Spend exceeds 8% of account total with fewer than 3 conversions, OR
- CTR is more than 50% below the account average (low ad relevance signal)

For any campaign that doesn't fit either extreme, mark it as "Watch" — stable but not yet proven.

PART 2: ROOT CAUSE ANALYSIS

For each campaign in the WORST PERFORMERS list, diagnose the likely cause of underperformance by examining the available data signals:

- High CPL + Low CTR → Ad creative or messaging is not resonating with the target audience; the problem is likely at the attention stage
- High CPL + Decent CTR + Low Conversion Rate → Traffic quality issue or landing page/form friction; the problem is post-click
- High CPL + High Frequency → Audience saturation; the campaign may have exhausted the available pool
- High CPL + Narrow Audience + Low Impression Volume → Audience too restrictive; delivery is constrained by targeting
- High CPL + Broad Audience + High Volume but Low CVR → Audience too broad; reaching low-intent professionals

Assign each underperforming campaign one primary diagnosis from the list above.

For each campaign in the TOP PERFORMERS list, identify the key factor driving its success:
- Audience composition (which targeting dimensions make this segment high-intent?)
- Ad format (is a specific format consistently outperforming others?)
- Offer or CTA (is a specific conversion action converting at a lower cost?)

PART 3: PRIORITIZED OPTIMIZATION PLAN

Based on the analysis above, generate a specific action plan with one recommendation per underperforming or "Watch" campaign.

For each recommendation, specify:
- Campaign name
- Primary issue (from the diagnosis above)
- Recommended action (be concrete — e.g., "Reduce audience to Director+ seniority only and test a new headline emphasizing business outcome rather than product feature", not "optimize targeting")
- Expected impact (e.g., "Should reduce CPL by reducing low-intent clicks from IC-level audience")
- Priority level: High (fix within 3 days) / Medium (fix this week) / Low (test next cycle)

For TOP PERFORMERS, include a scaling recommendation:
- Whether to increase daily budget and by how much (based on current pacing and frequency levels)
- Whether to duplicate the campaign with a new audience segment to extend reach

Output format:
Return:
- A campaign ranking table (one row per campaign: name, spend, conversions, CPL, diagnosis label, priority)
- An optimization action plan table (one row per campaign: name, issue, recommended action, expected impact, priority)
- A 4–5 sentence executive summary explaining the overall account health, the most urgent actions, and what the top performers reveal about our most effective audience and messaging approach

Shopify prompts

Prompt 1: Weekly store performance overview

Using [Store Name]'s Shopify data connected via Windsor, generate a weekly performance summary for [SPECIFY DATE RANGE], compared to the prior matching period.

Cover the following:

Overall store health:
- Total orders and total gross revenue
- Total discount value applied and net revenue after discounts
- Average order value (overall and split by new vs. returning customers)
- Number of new customers vs. returning customers and their share of total revenue
- Period-over-period change for each metric (%)

Top products:
- Top 5 products by net revenue this week
- Top 5 products by units sold
- Any product that appeared in the top 5 last week but has dropped out this week (flag as a potential issue)

Fulfillment health:
- Total orders with unfulfilled line items at the end of the period
- Any orders that have been unfulfilled for more than 3 days

Order channel breakdown:
- Net revenue split by sales channel (e.g., online store, social, marketplace)
- Week-over-week channel shift: which channel grew or shrank most?

Format the output as a clean weekly scorecard with a brief plain-language summary (3–4 sentences) flagging the most important change to investigate this week.

Prompt 2: Understand your sales channel mix

Using [Store Name]'s Shopify order and customer data in Windsor, analyze the performance of each sales channel for the last 60 days.

For each sales channel recorded in the order data, calculate:
- Total orders
- Total gross revenue and net revenue (after discounts)
- Average order value
- Total discount value as a % of gross revenue (discount dependency)
- Share of orders from new customers vs. returning customers
- Refund rate (refunded orders ÷ total orders, %)

Then rank channels by:
1. Net revenue (highest to lowest)
2. Average order value (highest to lowest)
3. New customer share (highest to lowest — channels with a high new customer share are acquisition drivers)

Flag any channel where:
- Discount dependency exceeds 25% of gross revenue
- Refund rate is more than 5 percentage points above the store average
- More than 80% of orders are from returning customers (functioning as retention, not acquisition)

Format the output as a channel comparison table followed by a 3-sentence summary of which channels are driving genuine growth vs. which are propped up by discounts or repeat buyers only.

Prompt 3: Identify your most and least profitable products

Using [Store Name]'s Shopify order, line-item, and refund data in Windsor, analyze product-level profitability for the last 90 days.

For each product, calculate:
- Total gross revenue
- Total discount value applied to orders containing this product
- Total refund value for returned units
- Net revenue (gross minus discounts and refunds)
- Units sold and refund rate (%)
- Number of customers who bought this product and then placed a second order within 90 days (repeat purchase signal)

Rank all products by net revenue from highest to lowest.

Flag products in the following categories:
- Top 20 by gross revenue with a refund rate above 15% (volume leaders eating margin)
- Top 20 by gross revenue where discount dependency exceeds 25% of gross (only selling on promotion)
- Any product outside the top 30 by gross revenue where the 90-day repeat purchase rate exceeds 40% (hidden loyalty drivers worth promoting more)

Format the output as a full product profitability table followed by three separate flagged lists with a one-line explanation for why each product is flagged.
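The net-revenue definition this prompt relies on (gross minus discounts and refunds) is worth pinning down precisely, since it drives the ranking. A minimal sketch with illustrative line-item totals:

```python
# Product-level net revenue and refund rate, as defined in the prompt.
def product_profit(gross, discounts, refunds, units, refunded_units):
    """Net revenue = gross - discounts - refunds; refund rate = refunded ÷ sold."""
    net = gross - discounts - refunds
    refund_rate = refunded_units / units
    return {"net_revenue": net, "refund_rate": round(refund_rate, 3)}

p = product_profit(gross=5000.0, discounts=600.0, refunds=400.0,
                   units=200, refunded_units=16)
```

This sample product nets $4,000 on $5,000 gross with an 8% refund rate, comfortably under the prompt's 15% flag threshold.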

Prompt 4: Spot inventory risk before products sell out

Using [Store Name]'s Shopify order and inventory data in Windsor, identify products at risk of stocking out in the next 21 days.

For each active product SKU, calculate:
- Average daily units sold over the last 14 days (recent velocity)
- Average daily units sold over the last 30 days (baseline velocity)
- Whether velocity is accelerating or decelerating (compare 14-day vs. 30-day averages)
- Current fulfillable inventory quantity
- Estimated days of stock remaining at the 14-day sales velocity

Flag all SKUs where estimated days of stock remaining is fewer than 21 days.

For accelerating SKUs (14-day velocity more than 20% above 30-day average), recalculate days remaining using the higher recent rate.

For each flagged SKU, return:
- Product name
- Current inventory
- Daily sales velocity (14-day average)
- Estimated stockout date
- Velocity trend (accelerating / stable / decelerating)
- Urgency: Critical (under 7 days), High (7–14 days), Medium (14–21 days)

Format the output as a stockout risk report ordered by urgency, then by estimated days remaining.
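The velocity arithmetic in this prompt (14-day vs. 30-day daily rates, days of stock remaining, and the urgency tiers) can be sketched directly. Inventory figures below are illustrative:

```python
# Stockout-risk math per the prompt: daily velocity, acceleration check,
# days of stock remaining, and the Critical/High/Medium urgency tiers.
def stockout_report(sku, inventory, units_14d, units_30d):
    v14 = units_14d / 14              # recent daily velocity
    v30 = units_30d / 30              # baseline daily velocity
    accelerating = v14 > 1.2 * v30    # prompt's 20%-above-baseline rule
    days_left = inventory / v14 if v14 > 0 else float("inf")
    if days_left < 7:
        urgency = "Critical"
    elif days_left < 14:
        urgency = "High"
    elif days_left < 21:
        urgency = "Medium"
    else:
        urgency = None                # not flagged
    return {"sku": sku, "days_left": round(days_left, 1),
            "accelerating": accelerating, "urgency": urgency}

report = stockout_report("HAT-01", inventory=60, units_14d=70, units_30d=90)
```

This sample SKU sells 5 units/day recently vs. a 3/day baseline, so it is accelerating and sits at 12 days of stock, landing in the High urgency tier.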

Prompt 5: Measure discount and promotion effectiveness

Using [Store Name]'s Shopify order data in Windsor, evaluate the effectiveness of promotions and discount codes used in the last 90 days.

For each discount code or promotional tag, calculate:
- Total orders and gross revenue
- Total discount value given (absolute and as % of gross revenue)
- Net revenue after discounts
- Average order value: orders using this code vs. store average without a discount
- New customer rate (% of code users who were first-time buyers)
- Returning customer rate (% who had ordered before — indicates promotional dependency risk)
- 60-day repeat purchase rate for customers first acquired through this code

Flag promotions where:
- Discount value exceeds 30% of gross order revenue
- More than 70% of usage is by returning customers
- 60-day repeat purchase rate for new customers acquired through the promotion is more than 20% below the store average (bargain hunters who don't come back)

Rank all promotions by net revenue contribution (highest to lowest).
Format the output as a promotion effectiveness table followed by a flagged risk list with a one-line explanation per flag.

Prompt 6: Detect churn risk and build a re-engagement list

Using [Store Name]'s Shopify customer and order data in Windsor, identify customers at risk of churning and build a prioritized re-engagement list.

Step 1 — Establish repurchase windows
For all customers with 2 or more lifetime orders, calculate the average number of days between their consecutive purchases (personal repurchase interval).

Step 2 — Flag overdue customers
Identify customers where:
- Days since their last order exceeds 1.5× their personal average repurchase interval, AND
- They have not ordered in the last 30 days

Segment flagged customers into:
- High priority: 3+ lifetime orders AND lifetime spend in the top 30% of the customer base
- Medium priority: 2 lifetime orders OR lifetime spend between the 30th and 70th percentile
- Lower priority: 2 lifetime orders, below-average spend

Step 3 — Re-engagement context
For all high-priority at-risk customers, retrieve:
- Last product category purchased
- Last order value
- Total lifetime spend
- Days overdue vs. their personal repurchase interval

Step 4 — Revenue at stake
Calculate the total number of customers in each segment and their combined lifetime revenue — this is the revenue base at risk.

Format the output as a churn risk summary table (counts and revenue at stake per segment) followed by a high-priority customer detail table with last-purchase context for re-engagement messaging.
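Steps 1 and 2 define a per-customer rule (overdue if days since last order exceed 1.5× the personal repurchase interval, with a 30-day floor) that is easy to express directly. A sketch with illustrative order dates:

```python
from datetime import date

# Churn flag per steps 1-2 of the prompt: personal repurchase interval
# (mean gap between consecutive orders) and the 1.5x overdue rule.
def is_overdue(order_dates, today):
    """True if past 1.5x the personal average repurchase interval
    and no order in the last 30 days. Requires 2+ orders."""
    dates = sorted(order_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    avg_interval = sum(gaps) / len(gaps)
    days_since_last = (today - dates[-1]).days
    return days_since_last > 1.5 * avg_interval and days_since_last > 30

orders = [date(2024, 1, 1), date(2024, 2, 1), date(2024, 3, 2)]
overdue = is_overdue(orders, today=date(2024, 6, 1))
```

This customer's personal interval is ~30 days; at 91 days since the last order they are well past the 1.5× threshold and would be flagged.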

Prompt 7: Enhance product descriptions

Using [Store Name]'s Shopify data in Windsor, identify what top-selling products have in common and use those patterns to enhance descriptions of underperforming products. Date range: last 90 days.

Part 1 — Define top sellers:
- For all products with at least 20 orders, calculate net revenue (after discounts and refunds), refund rate, 60-day repeat purchase rate, and discount dependency (discount value ÷ gross revenue).
- Define top sellers as products with above-average net revenue, below-average refund rate, and below-average discount dependency. List the top 10.

Part 2 — Extract what makes them work:
- Review the current titles, descriptions, and tags of the top 10.
- Summarize the shared patterns in plain language: Are descriptions benefit-led or feature-led? Short or long? Do they include use cases? Specific sensory or outcome language?
- Produce a one-paragraph "description brief" that captures what effective copy looks like on this store.

Part 3 — Prioritize products for enhancement:
Identify the 10 products most worth rewriting based on:
- High traffic or visibility but below-average order conversion
- Above-average price point but below-average order volume
- Above-average return rate (description may be creating wrong expectations)

For each, note the specific gap vs. the top-seller description pattern.

Part 4 — Generate enhanced descriptions:
For each of the 10 prioritized products, write an enhanced description using the structure from Part 2:
- Opening line: one benefit-led sentence (under 20 words)
- 3–5 short feature-to-benefit bullet points (under 15 words each)
- One use-case sentence
- A revised product title if the current one is generic

Brand tone: [SPECIFY — e.g., "warm and direct, like a knowledgeable friend recommending something they use"]
Also provide 3 suggested tags per product based on the new copy.
Close with a one-paragraph copy guidelines summary that can be applied across the full catalog going forward.

Note: Flag any product where the return rate suggests a product quality issue rather than a description problem — better copy won't fix that.

Amazon Ads prompts

Prompt 1: Weekly Amazon Ads performance overview

Using [Account Name]'s Amazon Ads data connected via Windsor, generate a weekly performance summary for [SPECIFY DATE RANGE] vs. the prior matching period.

Account-level summary:
- Total spend, total attributed sales, ACOS (advertising cost of sale), and ROAS
- Total clicks, impressions, and CTR
- Period-over-period % change for each metric

Ad type breakdown:
- Split all metrics above by ad type: Sponsored Products, Sponsored Brands, and Sponsored Display.
- Identify which ad type delivered the lowest ACOS and the highest attributed sales this week.

Campaign rankings:
- For all active campaigns, rank by ROAS from highest to lowest.
- Flag campaigns where ACOS exceeded 40% (or your target threshold: [SPECIFY]) and spend exceeded $50 in the period.

Week-over-week shifts:
Identify any campaigns where:
- ACOS increased more than 20% vs. the prior week
- Attributed sales dropped more than 20% while spend held steady or increased
- CTR dropped more than 25% week over week

Format the output as a weekly scorecard with an executive summary (3–4 sentences) and three supporting tables: account summary, ad type comparison, and campaign rankings.

Prompt 2: ACOS audit — find campaigns spending without returning

Using [Account Name]'s Amazon Ads data in Windsor, audit campaign efficiency for the last 30 days.

For all campaigns with more than $30 in spend in the period, calculate:
- Total spend
- Total attributed sales (14-day attribution window)
- ACOS and ROAS
- Clicks and CTR
- Conversion rate (orders ÷ clicks)

Flag campaigns in the following categories:
- High spend, high ACOS: spend above $50 AND ACOS more than 1.5× the account average
- Zero-converting: spend above $30 with zero attributed sales in the 14-day window
- Click sink: CTR above average but conversion rate significantly below average (traffic that isn't buying — possible product page or pricing issue)

For each flagged campaign, note:
- The ad type (Sponsored Products, Sponsored Brands, or Sponsored Display)
- Total spend in the period
- ACOS vs. account average
- Recommended action: Pause, Reduce bids, Review product page, or Restructure targeting

Estimate total spend in the period attributable to campaigns in the High ACOS and Zero-Converting categories combined.
Format the output as an efficiency audit table ordered by spend descending, followed by a total wasted spend estimate and a prioritized action list.

Prompt 3: Search term analysis — find winners and eliminate waste

Using [Account Name]'s Amazon Ads search term data in Windsor, analyze which customer search queries are driving results and which are wasting spend. Date range: last 30 days.

For all search terms with at least $5 in spend or at least 50 impressions, retrieve:
- The search term text
- Total spend
- Total clicks and CTR
- Total orders and conversion rate
- ACOS

Identify three groups:

Group 1 — Scale candidates
Search terms where ACOS is below the account target AND orders are 2 or more.
These are high-intent queries worth adding as exact match keywords if not already.

Group 2 — Waste candidates
Search terms where spend exceeds $10 AND orders = 0.
These are candidates for negative keywords. Flag any that are clearly irrelevant to the product based on the query language.

Group 3 — Borderline terms
Search terms with 1 order but ACOS more than 2× the target. These need more data before a decision.

For Group 1, provide the search term, spend, orders, ACOS, and a recommendation (Add as exact match / Already running as exact match).
For Group 2, provide the search term, spend, clicks, and a negative keyword recommendation (Exact / Phrase).

Format the output as three separate tables, followed by a summary of total spend recoverable by adding the Group 2 negatives.
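The three-way split above is a small decision rule you can check the LLM's tables against. A sketch, where the target ACOS and the sample terms are illustrative:

```python
# Search-term classification per Groups 1-3 of the prompt.
TARGET_ACOS = 0.30   # assumed account target; set your own

def classify_term(spend, orders, sales):
    acos = spend / sales if sales > 0 else float("inf")
    if orders >= 2 and acos < TARGET_ACOS:
        return "scale"        # Group 1: add as exact match
    if orders == 0 and spend > 10:
        return "negative"     # Group 2: negative-keyword candidate
    if orders == 1 and acos > 2 * TARGET_ACOS:
        return "borderline"   # Group 3: needs more data
    return "monitor"

group = {
    "running shoes men": classify_term(spend=12.0, orders=3, sales=60.0),
    "free shoes":        classify_term(spend=15.0, orders=0, sales=0.0),
    "trail shoes":       classify_term(spend=20.0, orders=1, sales=25.0),
}
```

The first term (ACOS 0.20 on 3 orders) is a scale candidate, the second is pure waste, and the third (one order at ACOS 0.80) stays in the borderline bucket.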

Prompt 4: New-to-brand performance — are your ads growing your customer base?

Using [Account Name]'s Amazon Ads data in Windsor, analyze new-to-brand performance for Sponsored Brands and Sponsored Display campaigns over the last 30 days.

For each eligible campaign, retrieve:
- Total spend
- Total attributed orders (14-day window)
- New-to-brand orders (first-time orders for products within the brand)
- New-to-brand order rate (new-to-brand orders ÷ total orders, %)
- New-to-brand attributed sales value
- New-to-brand sales as a % of total attributed sales
- Cost per new-to-brand order (total spend ÷ new-to-brand orders)

Rank campaigns by new-to-brand order volume from highest to lowest.

Flag campaigns where:
- New-to-brand order rate is below 20% (majority of attributed orders are from existing customers — limited audience growth)
- Cost per new-to-brand order exceeds $[SPECIFY your acceptable threshold]

Compare the cost per new-to-brand order across Sponsored Brands vs. Sponsored Display to identify which format acquires new customers more efficiently.

Format the output as a new-to-brand performance table followed by a 3-sentence summary on whether the account's advertising is growing the brand's customer base efficiently.

Prompt 5: Ad type efficiency comparison — SP vs. SB vs. SD

Using [Account Name]'s Amazon Ads data in Windsor, compare the efficiency of Sponsored Products, Sponsored Brands, and Sponsored Display for the last 60 days.

For each ad type, calculate:
- Total spend and share of account total spend (%)
- Total attributed sales and share of account total sales (%)
- ACOS and ROAS
- Clicks and CTR
- Average conversion rate (orders ÷ clicks)
- Cost per click (CPC)
- New-to-brand order rate (where available for SB and SD)

Identify for each ad type:
- Whether it is punching above or below its budget share in attributed sales
- Whether its ACOS is above or below the account average
- The single highest-performing campaign within the ad type (by ROAS)
- The single worst-performing campaign within the ad type (by ACOS)

Then produce a budget efficiency matrix: for each ad type, is the current spend allocation justified by performance, or does the data suggest reallocating budget toward or away from this format?

Format the output as a three-column comparison table (one column per ad type), followed by the budget efficiency matrix and a 3-sentence plain-language conclusion.

Prompt 6: Product (ASIN) level performance audit

Using [Account Name]'s Amazon Ads data in Windsor, audit advertising performance at the individual product (ASIN) level for the last 30 days.

For each advertised product or ASIN with more than $20 in spend, calculate:
- Total ad spend
- Total attributed sales (14-day window)
- ACOS
- ROAS
- Clicks and conversion rate
- Spend share of account total (%)

Classify each product into one of four categories:
- Scale: ACOS below target AND conversion rate above account average → increase investment
- Maintain: ACOS at or near target, stable conversion rate → hold current spend
- Optimize: ACOS above target but conversion rate is reasonable → review bids and targeting before cutting
- Pause: ACOS more than 2× account target OR zero conversions with spend above $30 → reduce or pause

Flag any product in the Pause category where spend exceeds $50 — these represent the highest-priority cost recovery opportunities.

Rank all products by spend descending. Format the output as an ASIN performance table with classification label, followed by a summary of total spend by category and the estimated spend reduction available from pausing Pause-category products.
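The four-way classification above is a simple rule cascade. A sketch of how it could be expressed — the target ACOS, account-average conversion rate, and the "at or near target = within 10%" reading are assumptions you would tune per account (the prompt's "reasonable conversion rate" condition for Optimize is simplified away here):

```python
# Four-way ASIN classification rule; Pause checks run first since they override the rest.
TARGET_ACOS = 0.25       # assumed account target
ACCOUNT_AVG_CVR = 0.10   # assumed account-average conversion rate

def classify_asin(spend, sales, orders, clicks):
    acos = spend / sales if sales else float("inf")  # no sales -> infinite ACOS
    cvr = orders / clicks if clicks else 0.0
    if acos > 2 * TARGET_ACOS or (orders == 0 and spend > 30):
        return "Pause"       # ACOS more than 2x target, or spend with zero conversions
    if acos < TARGET_ACOS and cvr > ACCOUNT_AVG_CVR:
        return "Scale"       # efficient AND converting above average
    if acos <= TARGET_ACOS * 1.10:
        return "Maintain"    # "at or near target" read as within 10% (assumption)
    return "Optimize"        # above target: review bids/targeting before cutting
```

For example, `classify_asin(100, 1000, 15, 100)` lands in Scale (10% ACOS, 15% conversion rate), while a product with $60 of spend and zero sales lands in Pause.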

TikTok Organic prompts

Prompt 1: Weekly organic content performance overview

Using [Account Name]'s TikTok Organic data connected via Windsor, generate a weekly content performance summary for [SPECIFY DATE RANGE] vs. the prior matching period.

Channel-level overview:
- Total video views, likes, comments, and shares for the period
- Period-over-period % change for each metric
- Total followers gained in the period (if available)

Top-performing videos:
- Top 5 videos by view count this week
- Top 5 videos by engagement rate ((likes + comments + shares) ÷ views)
- Flag any video that reached more than 2× the account's average view count — these are outlier performers worth analyzing

Engagement rate benchmark:
- Calculate the average engagement rate across all videos published in the period
- Flag any video where engagement rate is more than 50% below the account average for the week (underperformers)

Audience demographics snapshot:
- Current audience breakdown by gender and age group
- Flag any significant shift from the prior period

Format the output as a weekly scorecard followed by top/bottom performer tables and a 2-sentence demographic note.
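The engagement-rate and outlier math this prompt relies on is straightforward to verify yourself. A minimal sketch with made-up video stats (the 2× multiplier mirrors the prompt's outlier rule):

```python
# Engagement rate and 2x-average-views outlier flag; sample data is illustrative.
videos = [
    {"title": "A", "views": 1000, "likes": 80, "comments": 10, "shares": 10},
    {"title": "B", "views": 5000, "likes": 200, "comments": 30, "shares": 70},
    {"title": "C", "views": 900, "likes": 20, "comments": 2, "shares": 3},
]

avg_views = sum(v["views"] for v in videos) / len(videos)

for v in videos:
    # Engagement rate = (likes + comments + shares) / views
    v["engagement_rate"] = (v["likes"] + v["comments"] + v["shares"]) / v["views"]
    # Outlier performer: reached more than 2x the account's average view count
    v["outlier"] = v["views"] > 2 * avg_views
```

Here only video B (5,000 views against a 2,300-view average) is flagged as an outlier worth analyzing.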

Prompt 2: Identify your best content formats and themes

Using [Account Name]'s TikTok Organic data in Windsor, analyze which content formats and patterns perform best across the last 90 days.

Pull all videos published in the period with their view count, likes, comments, shares, and engagement rate.

Video length analysis:
- Group videos into length buckets: under 15 seconds, 15–30 seconds, 30–60 seconds, and over 60 seconds.
- For each bucket, calculate average view count, average engagement rate, and number of videos published.
- Identify which length range delivers the highest average engagement rate.

Post time analysis:
- Group videos by day of week and time of day published.
- For each combination, calculate average views and engagement rate (flag cells with fewer than 3 videos as low-confidence).
- Identify the top 3 day + time combinations by average engagement rate.

Engagement depth:
- Identify the top 10 videos by engagement rate. List their view counts, like counts, comment counts, and share counts.
- Look for common patterns: do they share a similar length, topic or caption language, or publish timing?

Consistency check:
- Identify any video format or length bucket where performance has declined meaningfully over the last 30 days vs. the 60 days before that.

Format the output as three analysis tables (length, timing, top performers) followed by a 3–4 sentence summary of the dominant content patterns driving the strongest results.
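The length-bucket analysis in this prompt is a group-by with a small-sample caveat. A sketch of the logic with illustrative durations and engagement rates (the low-confidence cutoff follows the prompt's "fewer than 3 videos" rule):

```python
# Group videos into the prompt's length buckets and average each bucket; sample data only.
videos = [
    {"dur": 12, "views": 2000, "er": 0.09},
    {"dur": 25, "views": 1500, "er": 0.06},
    {"dur": 28, "views": 2500, "er": 0.08},
    {"dur": 45, "views": 800, "er": 0.04},
]

def bucket(dur_seconds):
    if dur_seconds < 15:
        return "<15s"
    if dur_seconds <= 30:
        return "15-30s"
    if dur_seconds <= 60:
        return "30-60s"
    return ">60s"

groups = {}
for v in videos:
    groups.setdefault(bucket(v["dur"]), []).append(v)

summary = {}
for name, group in groups.items():
    summary[name] = {
        "n": len(group),
        "avg_views": sum(v["views"] for v in group) / len(group),
        "avg_er": sum(v["er"] for v in group) / len(group),
        "low_confidence": len(group) < 3,  # prompt flags cells with fewer than 3 videos
    }

best_bucket = max(summary, key=lambda k: summary[k]["avg_er"])
```

Note that with this sample every bucket is low-confidence, which is exactly why the prompt asks the LLM to flag thin cells rather than draw conclusions from them.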

Prompt 3: Audience profile and growth analysis

Using [Account Name]'s TikTok Organic audience data in Windsor, analyze the current audience profile and how it has shifted over the last 90 days.

Current audience composition:
- Breakdown by gender (% of total audience)
- Breakdown by age group
- Compare the current composition to the composition 90 days ago — flag any demographic that has grown or shrunk by more than 5 percentage points

Audience vs. target alignment:
- Based on the demographic breakdown, describe in plain language whether the current audience profile aligns with the brand's target customer (note: use the target customer profile provided here — [DESCRIBE YOUR TARGET CUSTOMER briefly, e.g., "women aged 25–40 interested in wellness"] — to evaluate fit).
- Flag any demographic that represents more than 20% of the audience but is outside the target profile.

Content-audience correlation:
- Cross-reference the top 10 videos by engagement rate with the periods of strongest audience growth.
- Do the best-performing videos appear to correlate with growth in the target demographic, or are they attracting an audience outside the brand's intended profile?

Format the output as an audience composition table (current vs. 90 days ago), a target alignment assessment, and a 3-sentence correlation summary.

Prompt 4: Find the optimal posting cadence and timing

Using [Account Name]'s TikTok Organic data in Windsor, analyze the relationship between posting frequency, timing, and performance over the last 90 days.

Posting frequency analysis:
- How many videos were published per week on average?
- Calculate average weekly views and engagement rate for weeks where 1–2 videos were posted, 3–4 videos, and 5+ videos
- Did higher posting frequency correlate with higher total weekly views, or did individual video performance decline?

Day and time performance:
- For all videos published, group by day of week and time block (morning 6–10, midday 10–14, afternoon 14–18, evening 18–22)
- For each combination, calculate average views and average engagement rate (flag any cell with fewer than 2 videos as insufficient data)
- Identify the top 3 posting windows by average views and the top 3 by average engagement rate — note if these differ

Consistency signal:
- Were there any weeks with zero posts? If so, did the following week show a drop in views or engagement rate compared to weeks with consistent posting?

Format the output as a frequency analysis table, a timing heatmap table, and a 2–3 sentence recommendation on optimal posting cadence and timing based purely on the account's own data.

Prompt 5: Track share and save rates as quality signals

Using [Account Name]'s TikTok Organic data in Windsor, analyze share and engagement depth signals across videos published in the last 90 days.

For all videos published in the period, calculate:
- View count
- Like count and like rate (likes ÷ views, %)
- Comment count and comment rate (comments ÷ views, %)
- Share count and share rate (shares ÷ views, %)
- Overall engagement rate ((likes + comments + shares) ÷ views, %)

Rank videos by share rate from highest to lowest.

Flag the top 10 videos by share rate and examine what they have in common:
- Are they shorter or longer than the account average?
- Do they share any observable topic or format pattern (based on caption language or publish timing)?
- Do they have higher or lower like rates than average — a high share rate with a lower like rate can indicate broadly useful but not emotionally engaging content

Flag any videos with high view counts (top 25% of account) but a share rate more than 50% below the account average — these are reaching people but not creating advocates.

Format the output as a ranked engagement depth table followed by a top 10 share-rate analysis with pattern notes and a 3-sentence summary.
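The "reaching people but not creating advocates" flag combines a views percentile with a share-rate threshold. A sketch with sample numbers — the nearest-rank percentile cutoff is one simple way to define "top 25% of account", not the only one:

```python
# Flag high-reach, low-share videos; data and percentile method are assumptions.
videos = [
    {"title": "A", "views": 10000, "shares": 10},
    {"title": "B", "views": 8000, "shares": 160},
    {"title": "C", "views": 500, "shares": 25},
    {"title": "D", "views": 400, "shares": 2},
]

for v in videos:
    v["share_rate"] = v["shares"] / v["views"]  # shares ÷ views

avg_share_rate = sum(v["share_rate"] for v in videos) / len(videos)

# Crude nearest-rank cutoff for "top 25% of account by views".
views_sorted = sorted(v["views"] for v in videos)
top25_views_cutoff = views_sorted[int(0.75 * len(views_sorted))]

flagged = [
    v["title"] for v in videos
    if v["views"] >= top25_views_cutoff            # high reach
    and v["share_rate"] < 0.5 * avg_share_rate     # share rate >50% below average
]
```

With these numbers only video A gets flagged: big reach, but a share rate far below the account average.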

Prompt 6: Build a monthly content plan

Using [Account Name]'s TikTok Organic data in Windsor, analyze the last 90 days and generate a monthly content plan for [TARGET MONTH].

Part 1 — What's working:
- Top 10 videos by engagement rate ((likes + comments + shares) ÷ views), share rate, and total views
- Note any video appearing across multiple lists — strongest signals
- Most common video length, publish day/time, and observable topic or style pattern among top performers
- Classify top performers by primary engagement type: Viral reach (high views + shares), Community builder (high comments), or Save-worthy (informative/reference content people return to)
- Flag any format or topic theme that appeared 3+ times but consistently underperformed — exclude from plan
- Note current audience gender/age breakdown; flag if top-performing content attracts an audience outside the target profile: [DESCRIBE your target customer]

Part 2 — Monthly content calendar:
(Publishing target: [SPECIFY — e.g., 5 videos per week])

For each planned video, specify:
- Publish date and time (aligned with best-performing windows)
- Hook (the opening 2–3 seconds — describe this first; it determines whether viewers stay)
- Concept and format type (based on what performed best in Part 1)
- Engagement target: Reach / Community / Save-worthy
- Caption direction and CTA (one line)
- Estimated length

Present the calendar as a table:
Date | Hook | Format | Concept | Engagement Target | Caption/CTA | Length

Close with a 3-sentence rationale citing specific signals from Part 1, plus a one-sentence "formats to avoid" note.

YouTube prompts

Prompt 1: Channel performance overview and growth trend

Using [Channel Name]'s YouTube data connected via Windsor, generate a channel performance overview for [SPECIFY DATE RANGE] vs. the prior matching period.

Channel-level summary:
- Total views, total watch time (hours), and average view duration
- Subscribers gained and lost, and net subscriber change
- Period-over-period % change for each metric
- Overall click-through rate on thumbnails (impressions CTR)

Top videos:
- Top 5 videos by view count in the period
- Top 5 videos by average view percentage (audience retention)
- Top 5 videos by subscriber gain attributable to the video
- Flag any video appearing in the top 5 for views but not for retention — high views with low retention often signals a misleading title or thumbnail

Traffic source breakdown:
- Split views by traffic source type (search, suggested, browse, external, direct)
- Which source drove the most views? Which drove the most watch time?
- Any meaningful shift in source mix vs. the prior period?

Device breakdown:
- Split views and average view duration by device type (mobile, desktop, tablet, TV)
- Flag any device where average view duration is more than 20% below the channel average

Format the output as a channel scorecard followed by three supporting tables (top videos, traffic sources, device breakdown) and a 3-sentence growth summary.

Prompt 2: Audience retention analysis — find where viewers drop off

Using [Channel Name]'s YouTube data in Windsor, analyze audience retention performance across all videos published in the last 90 days.

For all videos with at least 100 views, retrieve:
- Video title
- Video length (duration)
- Total views
- Average view duration (minutes and seconds)
- Average view percentage (% of video watched on average)
- Thumbnail CTR (impressions click-through rate)

Retention ranking:
- Rank all qualifying videos by average view percentage from highest to lowest.
- Identify the top 10 (strongest retention) and the bottom 10 (weakest retention).

Length vs. retention relationship:
- Group videos into length buckets: under 5 minutes, 5–10 minutes, 10–20 minutes, over 20 minutes.
- For each bucket, calculate the average view percentage across all videos in that group.
- Identify whether shorter or longer videos retain a higher percentage of viewers for this channel.

CTR vs. retention gap:
- Identify videos where thumbnail CTR is in the top 25% of the channel but average view percentage is in the bottom 25%.
- These are videos that promised something in the thumbnail/title that the content didn't deliver — the most common cause of viewer drop-off and reduced algorithmic distribution.

Format the output as a full retention table, a length-bucket summary, and a flagged list of CTR-retention gap videos with a one-line explanation.
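The CTR-vs-retention gap check is a quartile comparison: top-quartile thumbnail CTR paired with bottom-quartile average view percentage. A sketch with sample videos — the nearest-rank quantile helper is an assumption, and real channels would compute it across far more videos:

```python
# Flag videos whose thumbnails over-promise: top-25% CTR but bottom-25% retention.
videos = [
    {"title": "A", "ctr": 0.09, "avp": 0.22},
    {"title": "B", "ctr": 0.03, "avp": 0.55},
    {"title": "C", "ctr": 0.05, "avp": 0.48},
    {"title": "D", "ctr": 0.08, "avp": 0.50},
]

def quantile(values, q):
    # Crude nearest-rank cutoff (illustrative; swap in statistics.quantiles for real use).
    s = sorted(values)
    return s[min(int(q * len(s)), len(s) - 1)]

ctr_top25 = quantile([v["ctr"] for v in videos], 0.75)
avp_bottom25 = quantile([v["avp"] for v in videos], 0.25)

gap_videos = [v["title"] for v in videos
              if v["ctr"] >= ctr_top25 and v["avp"] <= avp_bottom25]
```

Here video A is the classic gap case: the channel's best thumbnail CTR paired with its worst retention, i.e. a click the content didn't pay off.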

Prompt 3: Find your best traffic sources for subscriber growth

Using [Channel Name]'s YouTube data in Windsor, analyze traffic source performance with a focus on which sources produce the most engaged and subscribed viewers. Date range: last 90 days.

For each traffic source type (Search, Suggested Videos, Browse features, External, Direct, Playlists, Notifications), calculate:
- Total views
- Total watch time (hours)
- Average view duration
- Subscribers gained (attributed to this source)
- Subscriber conversion rate (subscribers gained ÷ views from this source, %)

Rank sources by subscriber conversion rate (highest to lowest).

Identify:
- The source driving the most views overall
- The source driving the most watch time per view (highest average view duration)
- The source with the highest subscriber conversion rate (viewers most likely to subscribe)
- Any source with high view volume but a subscriber conversion rate below 0.5% (traffic that watches but doesn't commit)

For the top 3 videos by subscriber gain, retrieve which traffic source drove most of their views — do the channel's best subscriber-generating videos share a common traffic source pattern?

Format the output as a source comparison table, a ranking table by subscriber conversion rate, and a 3-sentence summary on which traffic sources to prioritize for subscriber growth.
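Subscriber conversion rate is the one derived metric in this prompt, and the 0.5% "watches but doesn't commit" flag is easy to express directly. A sketch with illustrative traffic figures (the 10,000-view floor for "high view volume" is an assumption):

```python
# Rank traffic sources by subscriber conversion rate; figures are sample data.
sources = [
    {"source": "Search", "views": 20000, "subs": 180},
    {"source": "Suggested", "views": 50000, "subs": 150},
    {"source": "External", "views": 4000, "subs": 60},
]

for s in sources:
    s["sub_conv_rate"] = s["subs"] / s["views"]  # subscribers gained ÷ views
    # High volume but conversion below 0.5%: traffic that watches but doesn't commit.
    s["low_commitment"] = s["views"] > 10000 and s["sub_conv_rate"] < 0.005

ranked = sorted(sources, key=lambda s: s["sub_conv_rate"], reverse=True)
```

In this sample, External traffic converts best per view while Suggested drives the most views but trips the low-commitment flag — the exact tension the prompt asks the LLM to surface.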

Prompt 4: Identify content topics and formats driving subscriber growth

Using [Channel Name]'s YouTube data in Windsor, identify which content topics and formats are most effective at growing the subscriber base. Date range: last 180 days.

For all videos published in the period with at least 200 views, retrieve:
- Video title
- Total views
- Average view percentage
- Subscribers gained
- Subscriber gain rate (subscribers ÷ views, %)
- Thumbnail CTR

Subscriber growth ranking:
- Rank all qualifying videos by subscribers gained from highest to lowest.
- Identify the top 15 subscriber-generating videos.

Topic pattern analysis:
For the top 15, identify observable topic or format patterns from the video titles:
- Do they cluster around specific topics or content themes?
- Are they predominantly a specific format (tutorial, review, reaction, list, documentary-style)?
- Are they shorter or longer than the channel average?

Shorts vs. long-form comparison:
- If the channel publishes both Shorts (under 60 seconds) and standard videos, calculate the average subscriber gain rate separately for each format.
- Flag whether Shorts are generating subscribers at a higher or lower rate than long-form content.

Format the output as a subscriber growth ranking table, a topic pattern summary (written in plain language), and a 2-sentence format comparison note.

Prompt 5: Weekly content health check

Using [Channel Name]'s YouTube data in Windsor, run a performance health check on all videos published in the last 14 days.

For each video published in the last 14 days, retrieve:
- Total views so far
- Thumbnail CTR (impressions click-through rate)
- Average view percentage
- Subscribers gained
- Primary traffic source (what's driving most views so far?)

Compare each video's CTR and average view percentage against the channel's 90-day average for each metric.

Flag videos where:
- CTR is more than 20% below the channel average — the thumbnail or title may not be compelling enough to drive clicks
- Average view percentage is more than 20% below the channel average — viewers are dropping off faster than usual; content structure or topic pacing may need review
- Both CTR and view percentage are below average — the video has both an acquisition problem and a retention problem; consider a thumbnail or title update

For flagged videos, note:
- Current view count vs. expected view count at this point (estimated from the channel's average views-per-day for the first 14 days across recent videos)
- Recommended action: Test new thumbnail, Review title, Evaluate content structure, or Monitor

Format the output as a health check table for all videos published in the period, with flags and recommended actions clearly labeled.
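The flag logic above is a pair of "20% below the 90-day average" thresholds combined into a recommended action. A sketch — both channel baselines are made-up placeholders, and the combined case checks first because it overrides the single-metric actions:

```python
# Health-check flags vs. assumed 90-day channel averages.
CHANNEL_AVG_CTR = 0.05  # assumed 90-day average thumbnail CTR
CHANNEL_AVG_AVP = 0.40  # assumed 90-day average view percentage

def health_flags(ctr, avp):
    low_ctr = ctr < 0.8 * CHANNEL_AVG_CTR  # more than 20% below average
    low_avp = avp < 0.8 * CHANNEL_AVG_AVP
    if low_ctr and low_avp:
        return "Acquisition + retention problem: test thumbnail/title and review structure"
    if low_ctr:
        return "Test new thumbnail / review title"
    if low_avp:
        return "Evaluate content structure"
    return "Monitor"
```

A video at 3% CTR with healthy retention gets a thumbnail/title action; one below both baselines gets the combined flag.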

Prompt 6: Generate a monthly content plan

Using [Channel Name]'s YouTube data in Windsor, analyze the last 90 days of performance and generate a monthly content plan for [TARGET MONTH].

Part 1 — What's working:
- Top 10 videos by subscriber gain, average view percentage, and total views
- Note any video appearing across multiple lists — these are the strongest signals
- Most common video length, publish day/time, and topic format among top performers
- Flag videos with high CTR but low retention (title over-promised) and any format that consistently underperformed — exclude both from the plan
- For top performers, identify the dominant traffic source (Search vs. Suggested/Browse) — this determines whether to prioritize SEO-focused topics or broadly appealing hooks

Part 2 — Monthly content calendar:
For each planned video, specify:
- Publish date and time (based on best-performing windows)
- Working title and video concept (grounded in the topic patterns from Part 1)
- Format type (tutorial, comparison, story, list, etc.)
- Recommended length
- Traffic strategy: Search (specific query in title) or Suggested/Browse (strong thumbnail hook, broad appeal)
- Thumbnail direction (one line describing the visual hook)
- Target outcome: Subscriber growth / Watch time / New audience reach

For Shorts (if applicable): publish date, opening hook, and how it connects to a standard video that week.

Present the calendar as a table:
Date | Type | Working Title | Length | Traffic Strategy | Thumbnail Direction | Target Outcome

Close with a 3-sentence rationale explaining which content patterns the plan prioritizes and why, based on the data.

Google Business Profile (Google My Business) prompts

Prompt 1: Weekly local visibility and customer action overview

Using [Business Name]'s Google My Business data connected via Windsor, generate a weekly local visibility summary for [SPECIFY DATE RANGE] vs. the prior matching period.

Visibility metrics:
- Total impressions from Google Search and Google Maps
- Period-over-period % change for each impression source
- Impressions by device type (mobile vs. desktop) — flag any shift greater than 10%

Customer actions:
- Total website clicks, call clicks, and direction requests
- Period-over-period % change for each action type
- Total customer actions combined — which action type accounts for the largest share?

Review activity:
- Total reviews received in the period
- Average star rating for new reviews in the period vs. the all-time average
- Flag if any new reviews bring the average rating below [SPECIFY threshold, e.g., 4.0]

Search discovery vs. direct visits:
- Split impressions between customers who found the listing through a search vs. those who searched for the business directly (branded)
- Is discovery search (non-branded) growing, stable, or declining?

Format the output as a weekly scorecard with period-over-period deltas and a 2-sentence flag summary on the most important change to investigate.

Prompt 2: Analyze which search queries are driving profile visibility

Using [Business Name]'s Google My Business data in Windsor, analyze which search queries are driving profile visibility and which are underperforming on click-through. Date range: last 60 days.

Top queries by impressions:
- List the top 20 search terms that triggered the most profile impressions.
- For each term, show: impressions, clicks (where available), and CTR.

Query intent classification:
Classify each of the top 20 queries into:
- Branded (includes the business name or a known product/service name)
- Category (generic local service or product type, e.g., "coffee shop near me")
- Problem/need (customer describing a need, e.g., "best pizza for delivery")
- Navigational (customer looking for directions or hours)

Opportunity identification:
Flag queries where:
- Impressions are in the top 10 but CTR is significantly below the average for the period — these queries are reaching people who don't click through; the listing may need stronger photos, clearer descriptions, or updated information
- Category or problem-type queries appear frequently — these reveal how customers describe the business's product or service in their own words, which can inform profile descriptions and local content

Format the output as a query table, a classification breakdown, and a 3-sentence opportunity summary highlighting which query themes the listing should be better optimized for.

Prompt 3: Customer action analysis — calls, directions, and website clicks

Using [Business Name]'s Google My Business data in Windsor, analyze customer action patterns for the last 60 days.

Action type breakdown:
- Total calls, direction requests, website clicks, and any other available action types (bookings, messages, food orders) for the period
- Each action type as a % of total actions
- Period-over-period trend: which action type has grown most and which has declined most vs. the prior 60 days?

Day and time patterns:
For call clicks and direction requests specifically:
- Which days of the week generate the most calls? The most direction requests?
- Which time blocks (morning, midday, afternoon, evening) generate peak call volume?
- Flag any day or time block where action volume is significantly higher than average — this is a signal for staffing, opening hours, or promotional timing

Device type comparison:
- Split website clicks and call clicks by device type (mobile vs. desktop)
- Flag if more than 70% of call clicks come from mobile — this is typical, but it confirms the importance of the mobile call experience

Conversion rate proxy:
- Calculate the ratio of total customer actions to total impressions (action rate)
- Compare to the prior period — is the listing converting a higher or lower share of viewers into engaged customers?

Format the output as an action breakdown table, a day/time pattern table, and a 3-sentence operational insight summary (e.g., "Peak call volume is on [day] between [time] — consider ensuring [action]").

Prompt 4: Multi-location performance comparison

Using [Business Name]'s Google My Business data for all connected locations in Windsor, produce a multi-location performance comparison for the last 60 days.

Location-level comparison table:
For each connected location, retrieve:
- Total impressions (search + maps)
- Total customer actions (calls + directions + website clicks combined)
- Action rate (total actions ÷ total impressions, %)
- Average star rating (current)
- Total reviews received in the period
- Most common customer action type (calls / directions / website clicks)

Rankings:
Rank locations by:
1. Total impressions (visibility)
2. Action rate (conversion efficiency — impressions turning into engaged customers)
3. Average star rating

Outlier flags:
Flag locations where:
- Action rate is more than 30% below the multi-location average (high visibility but low conversion — possible profile quality issue)
- Average rating is below 3.8 stars (reputation risk)
- Review count in the period is zero (no recent reviews — a local ranking signal weakness)

Best practice signals:
- Identify the top 2 locations by action rate and note any observable differences in their profile data that might explain their stronger performance (e.g., more complete information, more recent posts, stronger photo activity).

Format the output as a location comparison table with ranking columns, a flagged issues list, and a 2-sentence best practice observation.
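The three outlier flags in this prompt are simple threshold checks against the multi-location average. A sketch with two illustrative locations (the "30% below average" rule becomes a 0.7× multiplier):

```python
# Multi-location action rate and outlier flags; location figures are sample data.
locations = [
    {"name": "Downtown", "impressions": 40000, "actions": 2000, "rating": 4.5, "reviews": 12},
    {"name": "Airport", "impressions": 60000, "actions": 900, "rating": 3.6, "reviews": 0},
]

for loc in locations:
    loc["action_rate"] = loc["actions"] / loc["impressions"]  # actions ÷ impressions

avg_action_rate = sum(l["action_rate"] for l in locations) / len(locations)

for loc in locations:
    loc["flags"] = []
    if loc["action_rate"] < 0.7 * avg_action_rate:  # >30% below multi-location average
        loc["flags"].append("low conversion")
    if loc["rating"] < 3.8:
        loc["flags"].append("reputation risk")
    if loc["reviews"] == 0:
        loc["flags"].append("no recent reviews")
```

With these numbers the Airport location trips all three flags despite having the higher visibility — the "high impressions, low conversion" pattern the prompt is designed to catch.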

Prompt 5: Measure the impact of GBP posts and photo updates

Using [Business Name]'s Google My Business data in Windsor, analyze the relationship between content activity (posts and photo updates) and profile performance over the last 6 months.

Posting activity timeline:
Pull available post data for the 6-month period: dates of posts published and any performance metrics available (views, click-throughs per post).
Map posting activity onto a monthly timeline — identify months with high posting frequency vs. months with low or no activity.

Performance correlation:
For the same 6-month period, pull monthly data for:
- Total impressions
- Total customer actions (calls + directions + website clicks)
- Action rate (actions ÷ impressions)

Compare months with active posting (2+ posts) against months with low posting (0–1 posts):
- Was average impression volume higher during active months?
- Was action rate higher during active months?
- Flag any month where a significant increase in activity coincided with a notable uptick in impressions or actions

Photo performance:
If photo view data is available, calculate:
- Total photo views in the period
- Whether photo view trends correlate with periods of higher profile engagement overall

Format the output as a monthly activity and performance table (posts per month, impressions, actions, action rate), a correlation summary written in plain language, and a 2-sentence recommendation on whether the data justifies investing in a more consistent GBP posting cadence.

LinkedIn Company Page (Organic) prompts

Prompt 1: Weekly page performance overview

Using [Company Name]'s LinkedIn Pages data in Windsor, generate a weekly organic performance summary for [SPECIFY DATE RANGE] vs. the prior matching period.

Cover:
- Total impressions, unique impressions, clicks, and overall engagement rate
- Net follower change (gained minus lost) and period-over-period % change for each metric
- Top 3 posts by engagement rate: show impressions, clicks, reactions, comments, and shares for each
- CTA button clicks and page section views (main page, careers, life) — flag any meaningful shift vs. prior week
- 2-sentence plain-language summary of the most important change to act on

Prompt 2: Identify content formats and topics that drive the most engagement

Using [Company Name]'s LinkedIn Pages data in Windsor, analyze content performance across the last 90 days.

For all posts published in the period:
- Group by content type (article, image, video, document/carousel, text-only) and calculate average impressions, engagement rate, click-through rate, and share count per post for each type
- Rank the top 10 posts by engagement rate — note any observable topic or theme patterns (e.g., thought leadership, product news, company culture, data/research)
- Identify any format with 3+ posts but consistently below-average engagement rate
- Flag the top 3 posts by click-through rate separately — these reveal what drives off-platform traffic, not just on-platform engagement

Summarize in 3 sentences: which format and topic combination the data suggests prioritizing, and which to reduce. 
Based on these insights, suggest a content plan for the next 10 posts.

Prompt 3: Analyze whether you’re reaching the right audience

Using [Company Name]'s LinkedIn Pages data in Windsor, analyze audience demographics and evaluate ICP alignment.

Our target audience profile: [DESCRIBE — e.g., "Director level and above, in IT or Finance functions, at companies with 500+ employees, primarily in North America and Western Europe"]

Retrieve the current follower breakdown by:
- Seniority level (% of total followers per tier)
- Job function (top 5 by share)
- Industry (top 5 by share)
- Company size
- Region (top 5)

For each dimension, flag whether the current audience over- or under-indexes vs. the target profile.

Then: for the last 30 days, compare which seniority levels and functions are represented in post engagement vs. in the overall follower base — are senior decision-makers engaging at a higher or lower rate than their follower share suggests?

Output as a demographic alignment table followed by a 3-sentence ICP gap summary.

Prompt 4: Find the best posting times and cadence

Using [Company Name]'s LinkedIn Pages data in Windsor, identify the optimal posting cadence and timing from the last 90 days of post history.

For all posts published in the period:
- Group by day of week and time block (morning 7–10, midday 10–13, afternoon 13–17, evening 17–20)
- For each combination, calculate average impressions and average engagement rate (flag any cell with fewer than 2 posts as low-confidence)
- Identify the top 3 day/time combinations by average impressions and the top 3 by average engagement rate — note where they overlap or differ

Cadence check:
- How many posts per week was the page publishing on average?
- Were there any 2+ week gaps with no posts? Did the weeks following a gap show lower impressions than average?

Close with a 2-sentence recommendation on optimal posting frequency and timing based purely on this page's data.

Prompt 5: Organic reach vs. viral reach breakdown

Using [Company Name]'s LinkedIn Pages data in Windsor, analyze organic vs. viral reach for all posts published in the last 60 days.

For each post, retrieve:
- Total impressions, organic reach, and viral reach
- Viral reach as a % of total reach
- Engagement rate and share count

Rank posts by viral reach percentage from highest to lowest.

For the top 10 posts by viral reach share:
- What is the average share count vs. the account average? (Shares are the primary driver of viral reach on LinkedIn)
- Do they cluster around specific content types or topic themes?
- Is there a pattern in their format (video, document, image, text)?

Flag any posts where viral reach exceeded organic reach — these are the clearest examples of content that spread beyond the existing audience.

Summarize in 3 sentences: what drives viral distribution on this page, based on the data.

Prompt 6: Follower growth analysis and source attribution

Using [Company Name]'s LinkedIn Pages data in Windsor, analyze follower growth over the last 6 months.

Pull monthly and weekly follower gains, losses, and net change for the full period.

Identify:
- The 3 weeks with the highest net follower gain — what content was published in those windows?
- Any week where follower losses exceeded gains — flag these as worth investigating
- Whether overall growth is accelerating, plateauing, or declining over the 6-month period

Cross-reference with post activity:
- Were high-growth weeks associated with higher-than-average posting frequency, specific content types, or notably high-performing posts?
- Are there weeks with zero posts where growth also stalled?

If LinkedIn Ads data is also connected in Windsor, flag whether growth spikes correlate with paid campaign activity — this helps separate organic audience growth from paid follower acquisition.

Output as a monthly trend table followed by a 3-sentence growth diagnosis.

HubSpot prompts

Prompt 1: Sales pipeline health check

Using [Company Name]'s HubSpot data in Windsor, run a pipeline health check for all open deals.

Calculate:
- Total open pipeline value (sum of all deal amounts)
- Weighted pipeline (deal amount × close probability for each deal, summed)
- Pipeline by stage: deal count, total value, and average deal value per stage
- Deals closing in the next 30, 60, and 90 days by value

Flag deals that need attention:
- Open deals with no activity recorded in the last 14 days (stalled)
- Deals where the expected close date has passed but the deal is still open
- Deals in an early stage but with a close date less than 14 days away (unrealistic timeline)

Break down by deal owner:
- Total pipeline value per rep
- Weighted pipeline per rep
- Number of stalled deals per rep

Output as a pipeline summary table, a stalled deals list, and a 3-sentence assessment of the pipeline's overall health and the most urgent actions.
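The core pipeline math the prompt asks the LLM to perform can be sketched in a few lines. This is a minimal illustration with hypothetical deal records; the field names are placeholders, not HubSpot's actual schema:

```python
from datetime import date

# Hypothetical deal records; field names are assumptions for illustration.
deals = [
    {"name": "Acme", "amount": 50_000, "probability": 0.6,
     "last_activity": date(2024, 5, 1), "close_date": date(2024, 6, 15)},
    {"name": "Globex", "amount": 20_000, "probability": 0.3,
     "last_activity": date(2024, 5, 20), "close_date": date(2024, 5, 10)},
]

today = date(2024, 5, 25)

total_pipeline = sum(d["amount"] for d in deals)
# Weighted pipeline: deal amount x close probability, summed.
weighted_pipeline = sum(d["amount"] * d["probability"] for d in deals)

# Flags from the prompt: stalled (>14 days without activity) and past close date.
stalled = [d["name"] for d in deals if (today - d["last_activity"]).days > 14]
overdue = [d["name"] for d in deals if d["close_date"] < today]
```

In practice the LLM runs this arithmetic over the live Windsor data, so the sketch only shows what "weighted pipeline" and "stalled" mean operationally.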

Prompt 2: Lead source and marketing funnel performance

Using [Company Name]'s HubSpot data in Windsor, analyze lead source performance and funnel conversion for the last 90 days.

For each lead source recorded in the contact data:
- Total new contacts created
- Contacts that progressed to Marketing Qualified Lead (MQL)
- Contacts that progressed to Sales Qualified Lead (SQL)
- Contacts that converted to Customer
- Conversion rate at each stage (contact → MQL, MQL → SQL, SQL → Customer)
- Average time from contact creation to MQL, and MQL to SQL (velocity)

Rank sources by customer conversion rate (highest to lowest).

Flag any source where:
- Lead volume is high but SQL conversion rate is more than 30% below the average (quantity without quality)
- Customer conversion rate is above average but volume is low (high-efficiency source worth scaling)

Output as a source funnel table followed by a 3-sentence summary identifying the highest-quality acquisition channels and the biggest conversion gaps.

Prompt 3: Email campaign performance analysis

Using [Company Name]'s HubSpot email data in Windsor, analyze campaign performance for the last 60 days.

For all email campaigns sent in the period, calculate:
- Delivery rate, open rate, click-to-open rate (CTOR), click rate, unsubscribe rate, and bounce rate
- Total recipients and total replies (where tracked)

Rank campaigns by CTOR from highest to lowest (CTOR removes delivery variance and isolates content quality).

Flag campaigns where:
- Unsubscribe rate is more than 2× the account average (content or audience mismatch)
- Open rate is above average but CTOR is below average (subject line working, content not delivering)
- Bounce rate exceeds 3% (list quality issue)

Identify the top 3 campaigns by CTOR — note any common characteristics in their send timing, audience segment, or email type (newsletter, nurture, promotional, transactional).

Output as a campaign performance table followed by a 3-sentence summary of what the strongest-performing emails have in common.
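To make the ranking metric concrete: CTOR divides clicks by opens rather than by recipients, so deliverability and subject-line effects drop out. A minimal sketch with hypothetical campaign numbers:

```python
# Hypothetical campaign stats; CTOR = clicks / opens, isolating content
# quality from deliverability and subject-line effects.
campaigns = [
    {"name": "May newsletter", "sent": 10_000, "delivered": 9_650,
     "opens": 2_910, "clicks": 320, "bounces": 350},
]

for c in campaigns:
    c["open_rate"] = c["opens"] / c["delivered"]
    c["ctor"] = c["clicks"] / c["opens"]            # click-to-open rate
    c["click_rate"] = c["clicks"] / c["delivered"]
    c["bounce_rate"] = c["bounces"] / c["sent"]
    c["list_quality_flag"] = c["bounce_rate"] > 0.03  # >3% threshold from the prompt
```

A campaign can have a strong open rate and still fail this ranking, which is exactly the "subject line working, content not delivering" flag above.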

Prompt 4: Deal velocity and win rate by rep and source

Using [Company Name]'s HubSpot deal data in Windsor, analyze sales performance for deals created and closed in the last 90 days.

For each deal owner (sales rep), calculate:
- Total deals created
- Total deals won and win rate (%)
- Total closed revenue
- Average deal size (won deals only)
- Average sales cycle length (days from deal created to closed-won)

For each lead source, calculate the same metrics for deals that originated from that source.

Identify:
- The rep with the highest win rate (regardless of volume)
- The rep with the highest average deal size
- The source with the highest win rate and fastest average sales cycle
- Any rep where win rate is more than 15 percentage points below the team average (flag for coaching review)

Output as two tables (by rep and by source) followed by a 3-sentence summary of the standout performance patterns and the biggest gaps.

Prompt 5: Contact engagement and re-engagement audit

Using [Company Name]'s HubSpot contact data in Windsor, audit contact engagement and identify re-engagement opportunities.

Segment all contacts into:
- Active: last activity or email engagement within the last 30 days
- Warm: last activity 31–90 days ago
- Cold: last activity 91–180 days ago
- Dormant: no activity in more than 180 days

For each segment, calculate:
- Total contact count and % of total database
- Breakdown by lifecycle stage (lead, MQL, SQL, customer, other)
- Average number of lifetime activities per contact

Re-engagement priority list:
Flag contacts in the Cold segment who meet all of these criteria:
- Lifecycle stage is MQL or SQL (had commercial intent at some point)
- At least 3 recorded activities lifetime (showed real engagement before going quiet)
- Have not unsubscribed from email

These are the highest-priority win-back targets — they were engaged and qualified but have since drifted.

Output as a segment summary table followed by a count and brief profile of the re-engagement priority list.

Prompt 6: Revenue forecast based on pipeline and close probability

Using [Company Name]'s HubSpot deal data in Windsor, build a revenue forecast for the next 90 days.

For all open deals with a close date within the next 90 days:
- Calculate weighted pipeline value per deal (amount × close probability)
- Group by month (this month, next month, month after)
- For each month, show: total deal count, total unweighted pipeline, total weighted pipeline

Confidence tiers:
Group deals into three tiers based on stage probability:
- High confidence: close probability above 70%
- Medium confidence: 40–70%
- Low confidence: below 40%

For each tier and month, show deal count and weighted pipeline value.

Flag deals in the High Confidence tier where:
- No activity has been recorded in the last 7 days (risk of slipping despite high probability)
- Close date is within 14 days but the deal is in an early stage (likely to push)

Output as a monthly weighted forecast table by confidence tier, followed by a list of at-risk high-confidence deals and a 2-sentence forecast summary.
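The tiered grouping above reduces to a small bucketing exercise. A sketch with hypothetical deals (month labels and amounts are made up):

```python
# Hypothetical open deals closing within the 90-day window.
deals = [
    {"amount": 40_000, "probability": 0.8, "month": "2024-06"},
    {"amount": 15_000, "probability": 0.5, "month": "2024-06"},
    {"amount": 30_000, "probability": 0.2, "month": "2024-07"},
]

def tier(p):
    # Confidence tiers from the prompt: >70% high, 40-70% medium, <40% low.
    if p > 0.70:
        return "high"
    if p >= 0.40:
        return "medium"
    return "low"

forecast = {}
for d in deals:
    key = (d["month"], tier(d["probability"]))
    bucket = forecast.setdefault(key, {"count": 0, "weighted": 0.0})
    bucket["count"] += 1
    bucket["weighted"] += d["amount"] * d["probability"]
```

Summing the weighted values within each (month, tier) cell gives the forecast table the prompt requests.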

Instagram Public prompts

Prompt 1: Competitor content benchmarking

Using Windsor's Instagram Public connector, analyze organic content performance for these competitor accounts over the last 60 days:
[LIST 3–5 COMPETITOR HANDLES]

For each account and each post, retrieve: media type (image, video, carousel, reel), like count, comment count, video view count (where applicable), and post date.

For each account, calculate:
- Average likes, comments, and video views per post
- Average engagement rate per post ((likes + comments) ÷ estimated follower count, where available)
- Breakdown of posts by media type and average performance per type
- Top 3 posts by engagement for each account

Cross-account comparison:
- Which account has the highest average engagement rate?
- Which media type performs best across all accounts combined?
- Which account publishes most frequently, and does higher frequency correlate with stronger or weaker per-post engagement?

Output as a cross-account comparison table followed by a 3-sentence summary of the dominant content patterns in this competitive set.

Prompt 2: Identify top-performing content formats in your niche

Using Windsor's Instagram Public connector, analyze content format performance across these accounts in [DESCRIBE NICHE — e.g., "sustainable fashion brands"]:
[LIST 5–8 ACCOUNTS]

For all posts from the last 90 days, group by media type (image, video, carousel, reel) and calculate:
- Average like count per post type
- Average comment count per post type
- Average video view count for video/reel posts
- Number of posts published in each format across all accounts combined

Identify:
- The format with the highest average like count across the full account set
- The format with the highest average comment rate (comments ÷ estimated reach proxy) — high comment rate indicates content that sparks conversation
- Any format that is used infrequently by competitors but shows strong engagement when it does appear (potential whitespace opportunity)

Output as a format performance table followed by a 2-sentence niche insight summary.

Prompt 3: Track competitor posting frequency and consistency

Using Windsor's Instagram Public connector, analyze the posting frequency and consistency of these accounts over the last 90 days:
[LIST 3–5 ACCOUNTS]

For each account, calculate:
- Total posts published in the period
- Average posts per week
- Most common posting days (top 2 days by frequency)
- Any week in the period with zero posts (consistency gaps)
- Week-over-week trend: is posting frequency increasing, stable, or decreasing?

Performance vs. frequency:
- For each account, compare weeks with above-average posting frequency to weeks with below-average — does higher frequency correlate with higher average engagement per post, or does it dilute performance?

Output as a frequency comparison table for all accounts followed by a 2-sentence observation on the relationship between posting cadence and engagement in this competitive set.

Prompt 4: Find high-engagement content themes in your niche

Using Windsor's Instagram Public connector, analyze the captions and engagement of top-performing posts across these accounts in [NICHE]:
[LIST 5–8 ACCOUNTS]
Date range: last 90 days

Retrieve all posts from the period with their caption text, like count, comment count, and media type.

From the top 20 posts by engagement (likes + comments) across all accounts:
- Identify recurring themes or topics in the captions (e.g., educational, storytelling, product feature, social proof, behind-the-scenes, trend commentary)
- Identify any recurring structural patterns in the captions of high-engagement posts (e.g., question opening, statistic opening, short punchy hook, list format)
- Note whether the top 20 posts tend to use hashtags heavily, sparingly, or not at all

From the bottom 20 posts by engagement (minimum 10 posts per account to qualify):
- Identify whether there are topic or structural patterns that consistently underperform

Output as a theme frequency analysis for top performers, a contrasting note on underperformers, and a 3-sentence content brief summarizing the angles most likely to resonate in this niche.

Prompt 5: Influencer and collaborator vetting

Using Windsor's Instagram Public connector, audit the following accounts as potential collaboration partners:
[LIST 2–5 INFLUENCER/CREATOR HANDLES]
Date range: last 60 days

For each account, calculate:
- Average like count per post
- Average comment count per post
- Engagement rate per post ((likes + comments) ÷ follower count, %)
- Engagement consistency: what is the standard deviation in engagement rate across posts? (Low variability = reliable audience; very high variability may indicate boosted posts)
- Like-to-comment ratio: accounts with very high likes but very few comments may have inflated like counts

For each account's top 5 posts by engagement:
- What media type are they?
- Do they show a pattern in caption style, topic, or visual format?

Flag any account where:
- Average engagement rate is below 1% (low audience activity relative to follower size)
- Engagement variance is extremely high (some posts wildly outperform while most underperform — suggests boosted or viral outliers, not a reliable engaged audience)

Output as a side-by-side vetting table followed by a brief Go / Investigate / Pass recommendation for each account.
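The consistency check in this prompt is a standard-deviation calculation over per-post engagement rates. A minimal sketch with invented numbers for one candidate account:

```python
from statistics import mean, stdev

# Hypothetical per-post engagement for a candidate creator account.
followers = 50_000
posts = [
    {"likes": 900, "comments": 40},
    {"likes": 750, "comments": 35},
    {"likes": 4_800, "comments": 60},   # one viral outlier post
    {"likes": 820, "comments": 30},
]

rates = [(p["likes"] + p["comments"]) / followers * 100 for p in posts]

avg_rate = mean(rates)          # average engagement rate, %
consistency = stdev(rates)      # high stdev = volatile, outlier-driven reach
low_activity = avg_rate < 1.0   # flag from the prompt: sub-1% engagement
```

Here one viral post drags the average up while the standard deviation exposes the volatility, which is the "Investigate" signal the prompt describes.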

Prompt 6: Track a competitor’s content strategy shift over time

Using Windsor's Instagram Public connector, analyze how [COMPETITOR HANDLE]'s content strategy has shifted over the last 6 months.

Split the 6-month period into two equal halves (months 1–3 vs. months 4–6, with month 6 being the most recent).

For each half, calculate:
- Total posts published and average posts per week
- Breakdown of posts by media type (image, video, carousel, reel) and share of total per type
- Average engagement rate per post
- Average like count and comment count per post

Compare the two periods:
- Which media type grew or shrank as a share of total content?
- Did posting frequency increase or decrease?
- Did average engagement rate improve or decline overall?

Identify any single format where the shift was most pronounced (e.g., "Reels went from 10% to 45% of all posts") and whether that shift correlated with improved or worsened engagement.

Output as a period-comparison table followed by a 3-sentence strategy shift summary.

Google Ad Manager prompts

Prompt 1: Weekly revenue and inventory performance overview

Using [Publisher Name]'s Google Ad Manager data in Windsor, generate a weekly revenue and inventory performance summary for [SPECIFY DATE RANGE] vs. the prior matching period.

Calculate for the full account:
- Total ad impressions, total ad requests, fill rate (filled impressions ÷ ad requests), and unfilled impressions
- Total revenue, average CPM, and total clicks
- Period-over-period % change for each metric

Diagnose the revenue change:
- If revenue changed, identify whether the primary driver was a change in impressions, fill rate, or CPM (calculate each factor's contribution to the total change)

By ad unit (top 10 by revenue):
- Revenue, impressions, fill rate, and CPM for each
- Flag any ad unit where fill rate dropped more than 5 percentage points vs. the prior period
- Flag any ad unit where CPM dropped more than 15% vs. the prior period

Output as a revenue summary, a change diagnosis, and a flagged ad unit table.

Prompt 2: Fill rate audit — find and recover unfilled inventory

Using [Publisher Name]'s Google Ad Manager data in Windsor, audit fill rate performance across all ad units for the last 30 days.

For each ad unit with more than 1,000 ad requests in the period:
- Total ad requests, filled impressions, unfilled impressions, and fill rate (%)
- Revenue generated and estimated revenue lost to unfilled impressions (unfilled impressions ÷ 1,000 × account average CPM)
- Compare fill rate to the account-wide average — flag units more than 10 percentage points below average

Rank ad units by total estimated revenue lost to unfilled impressions (highest loss first).

For the top 5 highest-loss ad units:
- Is the fill rate problem consistent across the period, or is it isolated to specific days or time blocks? (Pull daily fill rate if available)
- Note the ad unit size/type for each — some sizes inherently attract less demand than others

Output as a fill rate audit table ranked by estimated revenue loss, followed by a 2-sentence priority summary.
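Since CPM is revenue per 1,000 impressions, the revenue-loss estimate divides unfilled impressions by 1,000 before multiplying. A sketch with hypothetical ad units (names and the CPM figure are placeholders):

```python
# Assumed account-wide average CPM, in $ per 1,000 filled impressions.
account_avg_cpm = 2.50

units = [
    {"name": "sidebar_300x250", "requests": 80_000, "filled": 52_000},
    {"name": "leader_728x90", "requests": 40_000, "filled": 38_000},
]

for u in units:
    u["fill_rate"] = u["filled"] / u["requests"]
    u["unfilled"] = u["requests"] - u["filled"]
    # Estimated revenue lost: unfilled impressions priced at the account CPM.
    u["est_loss"] = u["unfilled"] / 1_000 * account_avg_cpm

units.sort(key=lambda u: u["est_loss"], reverse=True)  # highest loss first
```

Ranking by estimated loss rather than by fill rate alone keeps the focus on the units where recovery is worth the most.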

Prompt 3: CPM performance analysis by ad unit and placement

Using [Publisher Name]'s Google Ad Manager data in Windsor, analyze CPM performance by ad unit for the last 60 days.

For each ad unit with at least 2,000 impressions in the period:
- Total impressions, total revenue, and effective CPM (eCPM = revenue ÷ impressions × 1,000)
- Impression share of account total (%)
- Revenue share of account total (%)

Identify yield efficiency:
- For each ad unit, calculate the revenue share ÷ impression share ratio — a ratio above 1.0 means the unit generates proportionally more revenue than its impression share; below 1.0 means it's underperforming its share of traffic
- Flag units where eCPM is more than 30% above the account average (high-value inventory)
- Flag units where eCPM is more than 30% below the account average AND impression share exceeds 10% (large inventory pool with low yield — highest optimization priority)

Output as an eCPM analysis table ordered by impression share, with yield efficiency ratios, followed by a 3-sentence summary of where yield is strongest and where the biggest improvement opportunity lies.
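The yield efficiency ratio above is just two shares divided. A sketch with two hypothetical ad units:

```python
# Hypothetical ad units: eCPM and the yield efficiency ratio from the prompt.
units = [
    {"name": "in_content", "impressions": 400_000, "revenue": 1_400.0},
    {"name": "footer", "impressions": 600_000, "revenue": 600.0},
]

total_impr = sum(u["impressions"] for u in units)
total_rev = sum(u["revenue"] for u in units)

for u in units:
    u["ecpm"] = u["revenue"] / u["impressions"] * 1_000
    impr_share = u["impressions"] / total_impr
    rev_share = u["revenue"] / total_rev
    # >1.0: earns more than its traffic share; <1.0: under-yielding inventory.
    u["yield_ratio"] = rev_share / impr_share
```

In this toy set the in-content unit earns 70% of revenue from 40% of impressions (ratio 1.75), while the footer unit under-yields at 0.5.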

Prompt 4: Click-through rate and ad engagement analysis

Using [Publisher Name]'s Google Ad Manager data in Windsor, analyze click-through rate (CTR) across ad units for the last 30 days.

For each ad unit with at least 2,000 impressions in the period:
- Total impressions, total clicks, and CTR (clicks ÷ impressions × 100)
- Revenue and eCPM
- Compare CTR to the account average — flag units more than 50% above or below average

Identify:
- Top 5 ad units by CTR — do they share a placement pattern (e.g., in-content vs. sidebar, above-fold vs. below-fold)?
- Bottom 5 ad units by CTR — are these concentrated in specific placements or formats?
- Any unit where CTR is declining week over week for 3 or more consecutive weeks (sustained engagement deterioration)

Note: Very high CTR can indicate accidental clicks (e.g., mobile units placed near interactive elements) and should be investigated alongside bounce rates from ad traffic where possible.

Output as a CTR analysis table followed by a 2-sentence flag summary.

Prompt 5: Revenue trend analysis and anomaly detection

Using [Publisher Name]'s Google Ad Manager data in Windsor, analyze daily revenue trends and detect anomalies for the last 60 days.

Pull daily data for: total revenue, total impressions, overall fill rate, and average eCPM.

Calculate a 14-day rolling average for each metric to establish a baseline.

Flag any day where:
- Total revenue deviated more than 20% from the 14-day rolling average
- Fill rate dropped more than 8 percentage points from the rolling average
- eCPM dropped more than 15% from the rolling average

For each flagged day, diagnose the primary driver:
- Revenue drop driven mainly by impressions declining? (Traffic issue, not ad ops)
- Revenue drop driven mainly by fill rate declining? (Demand or technical issue)
- Revenue drop driven mainly by eCPM declining? (Pricing or auction dynamics issue)

Output as a daily trend table with anomaly flags, a diagnosis column for each flagged day, and a 2-sentence summary of the most significant anomalies in the period.
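The anomaly rule in this prompt is a rolling-baseline comparison. A minimal sketch on an invented daily revenue series, using the 14-day window and 20% threshold from the prompt:

```python
# Hypothetical daily revenue series; flag days deviating more than 20%
# from the rolling average of the preceding 14 days.
daily_revenue = [100.0] * 14 + [102.0, 70.0, 98.0]

WINDOW, THRESHOLD = 14, 0.20

anomalies = []
for i in range(WINDOW, len(daily_revenue)):
    baseline = sum(daily_revenue[i - WINDOW:i]) / WINDOW
    deviation = (daily_revenue[i] - baseline) / baseline
    if abs(deviation) > THRESHOLD:
        anomalies.append((i, round(deviation, 3)))
```

The same loop applies unchanged to fill rate (with a percentage-point delta) and eCPM (with the 15% threshold); only the series and threshold change.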

Prompt 6: Ad unit performance comparison for layout optimization

Using [Publisher Name]'s Google Ad Manager data in Windsor, produce an ad unit performance ranking for the last 90 days.

For each ad unit with at least 1,000 impressions in the period, retrieve:
- Total impressions, fill rate, eCPM, and total revenue
- Revenue share of account total (%)
- Impressions share of account total (%)
- Yield efficiency ratio (revenue share ÷ impression share — above 1.0 is above-average yield)

Rank all qualifying ad units by total revenue.

Flag:
- Top 5 by revenue: these are the most critical placements to protect in any layout change
- Bottom 10 by revenue with more than 0.5% of total impressions: these are large inventory pools generating minimal revenue — candidates for format change, removal, or floor price adjustment
- Any unit where revenue has declined more than 20% in the most recent 30 days vs. the 30 days before that (recent deterioration)

Output as a full unit ranking table, a top-5 priority list, and a bottom-10 flag list with a 2-sentence recommendation on layout optimization priorities.

GoHighLevel prompts

Prompt 1: Pipeline and opportunity health overview

Using [Account Name]'s GoHighLevel data in Windsor, generate a pipeline health overview for all open opportunities.

For each pipeline stage, calculate:
- Total number of open opportunities
- Total monetary value of opportunities in that stage
- Average opportunity value per stage

Identify stalled opportunities:
- Flag any opportunity with no recorded activity in the last 14 days (across all stages)
- Flag any opportunity that has been in the same stage for more than 21 days

Break down by assigned user:
- Total open opportunities and total pipeline value per team member
- Number of stalled opportunities per team member

Output as a stage-by-stage pipeline table, a stalled opportunities list ordered by value, and a 2-sentence summary of where the most value is at risk.

Prompt 2: Contact and conversation engagement audit

Using [Account Name]'s GoHighLevel data in Windsor, audit contact engagement and conversation status.

Segment all contacts by last message activity:
- Active: last message within 7 days
- Recent: last message 8–30 days ago
- Inactive: last message 31–90 days ago
- Dormant: no message activity in more than 90 days

For each segment, show contact count and breakdown by assigned user.

Flag high-priority follow-up contacts:
- Contacts with unread inbound messages (the prospect reached out but hasn't been responded to)
- Contacts tagged as [SPECIFY YOUR HIGH-INTENT TAGS — e.g., "hot lead", "requested callback"] with no outbound message in the last 7 days
- Contacts in the Recent segment who are assigned to an opportunity in an active pipeline stage (they're in a deal but communication has lapsed)

Output as a segmentation table by assigned user and a prioritized follow-up list, ordered by last inbound message date for unread contacts.

Prompt 3: Revenue and order performance analysis

Using [Account Name]'s GoHighLevel order and payment data in Windsor, analyze revenue performance for the last 60 days.

Calculate:
- Total gross revenue (sum of all completed payments)
- Total refunds and net revenue (gross minus refunds)
- Total discounts applied and discount value as a % of gross revenue
- Number of orders, average order value, and refund rate (refunded orders ÷ total orders)

Break down by product or offer name (where available):
- Revenue, order count, average order value, and refund rate per product/offer
- Flag any product where refund rate exceeds 10%

Trend:
- Pull weekly revenue for the period — is revenue growing, stable, or declining week over week?
- Flag any week where refunds exceeded 15% of that week's gross revenue

Output as a product/offer performance table, a weekly revenue trend, and a 2-sentence summary flagging the most important revenue or refund risk.

Prompt 4: Task and team productivity audit

Using [Account Name]'s GoHighLevel task data in Windsor, audit team task performance for the last 30 days.

For all tasks created or due in the period, calculate:
- Total tasks created, total completed, total overdue (past due date, not completed)
- Completion rate (completed ÷ total, %) and average overdue rate
- Break down each metric by assigned user

Flag:
- Users where overdue rate exceeds 20% — workload or prioritization issue
- Any tasks overdue by more than 7 days — these are the most at-risk items
- Days or weeks where task creation significantly exceeded completions — a growing backlog

Output as a user productivity table (tasks created, completed, overdue, completion rate per user), an overdue tasks list ordered by days overdue, and a 2-sentence workload summary.

Prompt 5: Location and sub-account performance comparison

Using [Agency/Business Name]'s GoHighLevel data in Windsor, compare performance across all connected locations or sub-accounts.

For each location/sub-account, calculate:
- Total open pipeline value and number of open opportunities
- Total contacts and number of contacts with activity in the last 30 days (active contact rate)
- Unread inbound messages count (follow-up gap indicator)
- Task completion rate for the last 30 days
- Revenue or orders completed in the last 30 days (where available)

Rank locations by active contact rate (highest to lowest) — this is the clearest indicator of operational engagement, not just data volume.

Flag any location where:
- Unread inbound messages exceed 10 (follow-up backlog)
- Task completion rate is below 70%
- No pipeline activity has been recorded in the last 14 days

Output as a side-by-side location comparison table followed by a flagged issue list and a 2-sentence summary of which locations need immediate attention.

Prompt 6: Identify stalled leads and build a re-engagement priority list

Using [Account Name]'s GoHighLevel data in Windsor, identify the highest-priority stalled leads for re-engagement.

Define stalled leads as: open opportunities with no recorded activity in the last 30 days, in any pipeline stage except the final won/lost stage.

For all stalled leads, retrieve:
- Contact name, assigned user, pipeline stage, opportunity monetary value, and days since last activity
- Number of prior conversations or messages recorded (indicates prior engagement level)
- Any tags assigned to the contact (use to identify intent signals)

Score each stalled lead by re-engagement priority:
- High priority: opportunity value above [SPECIFY threshold — e.g., $1,000] AND at least 3 prior recorded interactions AND in a mid-to-late pipeline stage
- Medium priority: opportunity value above threshold OR multiple prior interactions, but not both
- Low priority: minimal prior interaction and low opportunity value

Rank all high-priority leads by opportunity value (highest first).

Output as a priority-tiered re-engagement list with opportunity value, stage, days stalled, and prior interaction count — formatted for direct use as a call/message queue.
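The three-tier scoring rule reduces to a small function. A sketch with hypothetical leads; the value threshold and stage names are placeholders you would replace with your own pipeline's values:

```python
THRESHOLD = 1_000                        # opportunity value threshold (placeholder)
MID_LATE = {"proposal", "negotiation"}   # assumed mid-to-late stage names

def priority(lead):
    # Tiers from the prompt: all three criteria -> high; one of the
    # first two -> medium; otherwise low.
    high_value = lead["value"] > THRESHOLD
    engaged = lead["interactions"] >= 3
    late_stage = lead["stage"] in MID_LATE
    if high_value and engaged and late_stage:
        return "high"
    if high_value or engaged:
        return "medium"
    return "low"

leads = [
    {"name": "A", "value": 5_000, "interactions": 4, "stage": "proposal"},
    {"name": "B", "value": 2_000, "interactions": 1, "stage": "qualified"},
    {"name": "C", "value": 300, "interactions": 0, "stage": "qualified"},
]

for lead in leads:
    lead["priority"] = priority(lead)
```

Sorting the high tier by opportunity value then yields the call queue the prompt asks for.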

Conclusion

Set up Windsor MCP once, bookmark your favorite prompts, and you’ll have a reusable AI “assistant” for ongoing analysis, optimization, and reporting across all your connected data sources.

Looking for specific prompt ideas we haven’t listed yet? Get guidance on the most effective use of Windsor MCP from our data expert: Book a Demo Now.

🚀 Get started with Windsor MCP today with a 30-day free trial: https://onboard.windsor.ai/ and experience the power of AI-driven analytics.

Tired of juggling fragmented data? Get started with Windsor.ai today to create a single source of truth.

Let us help you automate data integration and AI-driven insights, so you can focus on what matters—growth strategy.