Review Responses
Turn a 2-hour weekly review-response chore into a 15-minute workflow — on-brand, personalized drafts for every review sentiment, with your data staying local.
- Plugin ID: pf-review-responses
- Category: marketing
- Version: v1.0
Installation
- Download the `pf-review-responses.pluginfile`
- Open Claude Desktop and navigate to Settings > Plugins
- Click Install Plugin and select the downloaded `.pluginfile`
- The plugin will be installed and available immediately
Note: All data stays local on your machine. No external API calls or cloud storage required.
Why This Exists
Small ecommerce businesses and local shops receive customer reviews on Google, Amazon, Yelp, Etsy, and other platforms — but writing thoughtful, brand-consistent replies takes significant time. Existing tools like Podium ($4,788-7,188/yr) and Birdeye ($3,588+/yr) bundle expensive enterprise features most SMBs never use. This plugin handles the part that matters most: drafting responses using professional reputation management frameworks.
Quick Start
- Run `/review:review-setup` — enter your business name, industry, and preferred tone (takes 2 minutes)
- Run `/review:review-load` — paste reviews or upload a CSV export from any platform
- Run `/review:review-respond` — get a DOCX with personalized draft responses using SERVICE/LAST frameworks
- Copy responses into each platform — review and personalize as needed
- Run `/review:review-report` at month-end for a benchmark-compared trend analysis
- Run `/review:review-analyze` anytime for a competitive health check with NPS
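The plugin accepts a CSV export from any platform, but the exact column names it expects are not documented on this page. As a rough illustration, a minimal export that `/review:review-load` could ingest might look like this (headers here are assumptions, not the plugin's documented schema):

```python
import csv
import io

# Hypothetical review export. The plugin accepts CSV from any platform;
# these column names are illustrative only.
sample = """platform,rating,date,reviewer,text
Google,5,2024-11-02,Dana M.,"Great service, fast turnaround."
Amazon,2,2024-11-05,J. Ortiz,"Product arrived late and the box was damaged."
"""

# Parse the export into dicts, one per review
rows = list(csv.DictReader(io.StringIO(sample)))
for r in rows:
    print(r["platform"], r["rating"], r["text"][:40])
```

Exports from Google, Yelp, Etsy, and marketplace dashboards use differing headers, which is why the load step normalizes whatever you paste or upload.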
Commands
| Command | Description |
|---|---|
| `/review:review-setup` | One-time setup: brand profile (Keller's Prism), industry benchmarks, workspace |
| `/review:review-load` | Load reviews with ABSA sentiment, Plutchik emotion, HEARD triage |
| `/review:review-respond` | Generate responses using SERVICE/LAST models with FTC compliance |
| `/review:review-report` | Monthly report with NPS, benchmarks, weighted sentiment, risk alerts |
| `/review:review-analyze` | Competitive benchmark analysis with NPS and revenue impact estimates |
| `/review:review-build` | Full pipeline: load + respond in one command |
| `/review:review-status` | Show workspace status and output files |
How It Works
- Setup — Captures brand identity using Keller's Brand Identity Prism; loads industry-specific benchmarks from BrightLocal 2024 and ReviewTrackers 2024
- Load — Reviews analyzed with Aspect-Based Sentiment Analysis (per-topic sentiment), Plutchik emotion classification (8 emotions × 3 intensities), and HEARD escalation triage (4 levels)
- Respond — Claude drafts responses using SERVICE recovery model (negatives), LAST technique (moderate complaints), and Plutchik emotion-matched tone — with platform-specific formatting and FTC 16 CFR Part 255 compliance checks
- Report — Aggregates monthly data with NPS estimation (Reichheld methodology), ABSA weighted sentiment scoring, industry benchmark comparison, and revenue impact estimates
- Analyze — Competitive positioning against industry norms with SWOT analysis and prioritized recommendations
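The NPS estimation step converts star ratings into a Net Promoter Score. The plugin's exact star-to-category thresholds are not documented here, so this sketch assumes a common mapping (5 stars = promoter, 4 = passive, 1–3 = detractor):

```python
def nps_from_stars(ratings):
    """Estimate NPS from star ratings.

    Assumed mapping (the plugin's actual thresholds may differ):
    5 stars -> promoter, 4 -> passive, 1-3 -> detractor.
    NPS = %promoters - %detractors, on a -100..+100 scale.
    """
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r == 5)
    detractors = sum(1 for r in ratings if r <= 3)
    return 100.0 * (promoters - detractors) / len(ratings)

# 2 promoters (40%) and 2 detractors (40%) cancel out
print(nps_from_stars([5, 5, 4, 2, 1]))  # 0.0
```

As the Known Limitations section notes, a rating-derived NPS is an approximation of the survey-based original, and the plugin flags it as such in its outputs.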
AI-Powered Features
This plugin leverages Claude's AI capabilities with professional domain frameworks:
- Aspect-Based Sentiment Analysis (ABSA) — Analyzes sentiment per topic (product quality positive, shipping negative) with 10-category taxonomy and weighted scoring
- Plutchik Emotion Classification — 8 primary emotions × 3 intensity levels drive response tone matching (anger→de-escalate, sadness→empathize)
- SERVICE Recovery Model — Structured Sorry→Expedite→Recover→Value→Inform→Compensate→Evaluate framework for all negative reviews
- LAST Technique — Listen→Apologize→Solve→Thank for moderate complaints
- HEARD Escalation Triage — 4-level protocol (Standard→Priority→Escalation→Crisis) from Disney Institute service recovery model
- NPS Estimation — Net Promoter Score from star ratings using Reichheld's methodology with Bain & Company interpretation
- Industry Benchmarking — Compare against BrightLocal 2024 and ReviewTrackers 2024 industry averages
- Platform-Specific Responses — Google (SEO-aware), Amazon (1000-char strict), Yelp (detailed), Etsy (artisan voice)
- FTC 16 CFR Part 255 Compliance — 7-point check on every response ensuring no incentive-for-review-change language
- Brand Voice Injection — Keller's Brand Identity Prism ensures consistent personality across all responses
- Revenue Impact Estimates — Per HBR research, quantifies potential impact of review improvements
- Review Quality Signals — Word count, specificity scoring, verified purchase detection, actionable feedback identification
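The emotion-to-tone matching above can be pictured as a simple routing table. Only the anger and sadness mappings are stated in this page; the other six entries below are illustrative assumptions:

```python
# Sketch of Plutchik emotion-to-tone routing. Only "anger" and "sadness"
# mappings come from the plugin description; the rest are assumptions.
RESPONSE_TONE = {
    "anger":        "de-escalate",  # stated above
    "sadness":      "empathize",    # stated above
    "fear":         "reassure",     # assumption
    "disgust":      "acknowledge",  # assumption
    "joy":          "celebrate",    # assumption
    "trust":        "reinforce",    # assumption
    "surprise":     "clarify",      # assumption
    "anticipation": "inform",       # assumption
}

def tone_for(emotion: str, intensity: str = "medium") -> str:
    # One of Plutchik's three intensity levels tags the tone so the
    # drafting step can adjust how strongly it applies it.
    return f"{RESPONSE_TONE.get(emotion, 'neutral')} ({intensity})"

print(tone_for("anger", "high"))  # de-escalate (high)
```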
Feature Comparison
| Feature | pf-review-responses (Free) | Podium ($399-599/mo) | Birdeye ($299+/mo) |
|---|---|---|---|
| Response drafting | ✅ SERVICE/LAST models | ✅ AI suggestions | ✅ Smart Reply |
| Sentiment analysis | ✅ ABSA per-topic | Basic positive/negative | Basic positive/negative |
| Emotion classification | ✅ Plutchik 8-emotion | ❌ | ❌ |
| Escalation triage | ✅ HEARD 4-level | Basic flagging | Basic flagging |
| NPS estimation | ✅ Reichheld methodology | ✅ Survey-based | ✅ Survey-based |
| Industry benchmarks | ✅ BrightLocal/ReviewTrackers | ❌ | ✅ Limited |
| FTC compliance checks | ✅ 7-point gate | ❌ | ❌ |
| Platform-specific rules | ✅ 6 platforms | ✅ Multi-platform | ✅ 200+ sites |
| Live monitoring | ❌ Manual export | ✅ Real-time | ✅ Real-time |
| Review solicitation | ❌ | ✅ SMS/email | |
| Data privacy | ✅ 100% local | Cloud-based | Cloud-based |
| Price | $0 | $4,788-7,188/yr | $3,588+/yr |
Estimated Cost per Use
Disclaimer: Token estimates are approximate and based on typical usage patterns measured from skill prompt sizes. Actual costs vary with input data size, conversation length, and complexity. Estimates use Claude Sonnet 4.6 pricing ($3/1M input, $15/1M output). Cowork and Claude Desktop subscription users (Pro/Max/Team) are not charged per-token — these estimates apply only to direct Anthropic API usage. Running stages individually in fresh sessions uses fewer input tokens than running the full pipeline sequentially, because pipeline mode accumulates conversation history across stages.
Per skill (run individually in a fresh session):
| Stage | Skill Prompt | User Input | Total Input | Output | Est. Cost |
|---|---|---|---|---|---|
| review-analyze | ~1.8K | ~800 | ~5.5K | ~6.0K | ~$0.11 |
| review-respond | ~3.0K | ~800 | ~6.6K | ~6.0K | ~$0.11 |
| review-report | ~3.6K | ~800 | ~7.2K | ~6.0K | ~$0.11 |
| review-load | ~4.1K | ~2.0K | ~8.8K | ~2.0K | ~$0.06 |
| Standalone total | | | ~28.1K | ~20.0K | ~$0.38 |
Full pipeline (all stages in one session — context accumulates):
| Stage | Base Input | + History | Total Input | Output | Est. Cost |
|---|---|---|---|---|---|
| review-analyze | ~5.6K | 0 | ~5.6K | ~6.0K | ~$0.11 |
| review-respond | ~6.8K | ~6.8K | ~13.6K | ~6.0K | ~$0.13 |
| review-report | ~7.4K | ~13.6K | ~21.0K | ~6.0K | ~$0.15 |
| review-load | ~9.1K | ~20.4K | ~29.5K | ~2.0K | ~$0.12 |
| Pipeline total | | | ~69.7K | ~20.0K | ~$0.51 |
Running the full pipeline once typically costs $0.36–$0.66 in API tokens (Claude Sonnet 4.6).
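The per-stage figures follow directly from the stated pricing. A quick way to reproduce them:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price: float = 3.00, out_price: float = 15.00) -> float:
    """USD cost at Claude Sonnet 4.6 API pricing (prices are $/1M tokens)."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# review-analyze, standalone: ~5.5K input, ~6.0K output
print(round(estimate_cost(5_500, 6_000), 2))   # 0.11
# full pipeline totals: ~69.7K input, ~20.0K output
print(round(estimate_cost(69_700, 20_000), 2)) # 0.51
```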
Known Limitations
- No live platform monitoring — Reviews must be manually exported or pasted. The plugin does not connect to Google, Amazon, Yelp, or other platforms in real-time.
- No review solicitation — Cannot send review request emails or texts to customers (use your platform's native tools for that).
- No multi-user collaboration — One person runs the workflow at a time. Share the output DOCX via email or cloud storage for team review.
- Response drafts require review — AI drafts are starting points, not final copies. Always review and personalize before posting — especially for negative reviews.
- Character limits are informational — Platform character limits (Google: 4,096 chars, Amazon: 1,000 chars) are shown per response but not auto-enforced.
- No auto-posting — Responses must be copy-pasted into each platform manually. This is intentional — human review before posting is a best practice.
- Escalation detection is conservative — The HEARD protocol errs on the side of caution; Level 3+ flags may include reviews that don't require legal review.
- NPS is estimated — Derived from star ratings, not traditional customer surveys. Noted in all outputs per Reichheld methodology limitations.
- Benchmarks are industry averages — Individual business performance varies; top performers significantly exceed these norms.
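Since character limits are shown per response but not auto-enforced, a quick length check before posting can catch over-limit drafts. Only the Google and Amazon limits are stated on this page; other platforms are treated as unknown here:

```python
# Limits stated in this page's docs; others are not listed here.
PLATFORM_LIMITS = {"Google": 4096, "Amazon": 1000}

def over_limit(platform: str, draft: str) -> int:
    """Return how many characters a draft exceeds its platform limit by,
    or 0 if it fits or the limit is unknown."""
    limit = PLATFORM_LIMITS.get(platform)
    if limit is None:
        return 0
    return max(0, len(draft) - limit)

print(over_limit("Amazon", "x" * 1050))  # 50
```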
File Structure
```
$WORKSPACE/
├── inbound/      # Drop CSV/XLSX review exports here
├── processing/   # Review batches and response JSON (auto-generated)
├── outbound/     # Final DOCX outputs and analysis reports
├── archive/      # Old batch files after archiving
└── .review/
    ├── config.json   # Brand profile, industry benchmarks, framework settings
    └── project.json  # Pipeline state
```
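The setup command writes the brand profile and benchmark settings into `config.json`. Its actual schema is not documented on this page; a purely illustrative sketch, based on what the setup step says it captures, might look like:

```json
{
  "business": {
    "name": "Example Shop",
    "industry": "retail"
  },
  "tone": "warm-professional",
  "benchmarks": {
    "sources": ["BrightLocal 2024", "ReviewTrackers 2024"]
  }
}
```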
Important Disclaimers
- AI-Generated Content: This plugin uses AI (LLM) technology which can produce inaccurate or incomplete outputs. All content should be treated as a starting point and reviewed for accuracy before use.
- Not Professional Advice: Outputs do not constitute legal, financial, tax, medical, or other professional advice. Consult qualified professionals before making decisions based on generated content.
- No Compliance Guarantee: References to FTC 16 CFR Part 255, industry standards, and platform policies are for informational purposes only. This plugin does not guarantee compliance with any law or regulation. Users are responsible for verifying all outputs meet their specific regulatory requirements.
- No Endorsement or Affiliation: Mention of BrightLocal, ReviewTrackers, Spiegel Research Center, Harvard Business Review, Podium, Birdeye, or other third-party products and research does not imply endorsement, partnership, or certification by those entities.
Ready to use Review Responses?
Download this free plugin and start using it in Claude today.
Need something different?
We build custom plugins tailored to your exact workflow.