Job Posting & Hiring Suite
pf-job-posting — Job Posting & Hiring Suite — Claude plugin
Plugin ID
pf-job-posting
Category
hr
Version
v1.1
Downloads
pf-job-posting v1.1 — Job Posting & Hiring Suite
Installation
- Download the `pf-job-posting.pluginfile`
- Open Claude Desktop and navigate to Settings > Plugins
- Click Install Plugin and select the downloaded `.pluginfile`
- The plugin will be installed and available immediately
Note: All data stays local on your machine. No external API calls or cloud storage required.
Why This Exists
SMBs currently spend $5,000–$25,000+ annually on proprietary hiring platforms like Textio, Ongig, or Datapeople. These tools audit job descriptions for bias, generate inclusive language suggestions, and help build compliant hiring scorecards. pf-job-posting does 80% of the job locally, included with your subscription.
Quick Start
- Install: Add pf-job-posting to your Claude Desktop plugins directory
- Setup: Open Claude and run `/job-posting init` to create a config file
- Generate Description: Run `/job-posting generate-description` with your raw job details
- Create Scorecard: Use `/job-posting create-scorecard` to build interview questions
- Run Report: Execute `/job-posting generate-report` for a hiring summary & diversity metrics
Commands
| Command | Purpose |
|---|---|
| `generate-description` | Write job posting with bias analysis & inclusive language suggestions |
| `create-scorecard` | Build STAR-based interview scorecard with evaluation rubric |
| `analyze-bias` | Audit existing job posting for gender, age, ability, and racial bias |
| `generate-emails` | Create candidate email templates (offer, rejection, invite, status) |
| `generate-report` | Produce hiring summary with EEOC guidance checklist |
| `export-docx` | Convert outputs to formatted Word documents |
| `init` | Set up config, bias dictionaries, and local data files |
| `health-check` | Verify plugin setup and token usage estimates |
How It Works
DAG Architecture: Each skill (generation, analysis, scoring, reporting) works independently but shares a common configuration layer. They communicate through JSON artifacts and DOCX exports. No single pipeline is mandatory—use the skills in any order, skip steps if needed, or loop back to refine outputs.
Data Flow:
- Input: Job title, department, seniority, raw job description, org values
- Processing: Tokenize → detect bias patterns → apply inclusive language rules → generate STAR questions → compile metrics
- Output: Markdown, DOCX, JSON artifacts
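The JSON artifacts that skills exchange can be sketched as follows. This is a minimal illustration of the tokenize-and-detect step; the field names and the word list are assumptions for the example, not the plugin's actual schema or dictionaries.

```python
import json

# Hypothetical shape of the JSON artifact the skills pass between stages.
job_input = {
    "title": "Senior Backend Engineer",
    "department": "Engineering",
    "seniority": "senior",
    "raw_description": "We need a rockstar ninja to dominate our backlog.",
    "org_values": ["collaboration", "craftsmanship"],
}

def analyze(artifact: dict) -> dict:
    """Toy bias pass: flag known gender-coded terms in the raw text."""
    masculine_coded = {"rockstar", "ninja", "dominate", "aggressive"}
    tokens = artifact["raw_description"].lower().replace(".", "").split()
    flags = sorted(t for t in tokens if t in masculine_coded)
    return {**artifact, "bias_flags": flags}

result = analyze(job_input)
print(json.dumps({"bias_flags": result["bias_flags"]}, indent=2))
```

Because each stage reads and writes plain JSON like this, skills can run in any order or be skipped entirely, as the DAG architecture above describes.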
What It's an Alternative To
| Platform | Annual Cost | Job Description | Bias Audit | Scorecards | Email Templates | Reporting | Offline |
|---|---|---|---|---|---|---|---|
| Textio | $15,000+ | ✓ | ✓ | ✗ | ✗ | Limited | ✗ |
| Ongig | $4,900+ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ |
| Datapeople | $12,000+ | Limited | ✓ | ✗ | ✗ | ✓ | ✗ |
| pf-job-posting | included with your subscription | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Key Differentiators
- Included with your subscription: No SaaS subscriptions, no per-candidate fees
- Data stays local: All job descriptions and scorecards processed within the Cowork environment
- Customizable bias dictionaries: Define your own gender/age/ability/racial language lists
- No vendor lock-in: Export to DOCX, JSON, markdown—use anywhere
- Offline operation: Works without internet after initial setup
- Research-backed: Built on Gaucher et al. 2011 inclusive language framework and EEOC employment law
Feature Comparison
| Feature | pf-job-posting | Textio | Ongig | Datapeople |
|---|---|---|---|---|
| Job description writing | ✓ | ✓ | ✓ | Limited |
| Bias/inclusion analysis | ✓ | ✓ | ✓ | ✓ |
| Interview scorecards | ✓ | ✗ | ✓ | ✗ |
| Candidate emails | ✓ | ✗ | ✓ | ✗ |
| Hiring report & metrics | ✓ | Limited | ✓ | ✓ |
| ATS integration | ✗ | ✓ | ✓ | ✓ |
| Real-time collaboration | ✗ | ✓ | ✓ | ✓ |
| Custom bias dictionaries | ✓ | Limited | Limited | ✗ |
| Offline operation | ✓ | ✗ | ✗ | ✗ |
| EEOC/ADA guidance | ✓ | ✓ | ✓ | ✓ |
| Source-level legal citations | ✓ | ✗ | ✗ | ✗ |
| Pay transparency compliance | ✓ (11 states) | Limited | ✗ | ✗ |
| Structured interview research | ✓ (Schmidt & Hunter) | ✗ | ✗ | ✗ |
| Adverse impact prevention | ✓ (EEOC 4/5 rule) | ✗ | ✗ | Limited |
| Price | included with your subscription | $15K+/yr | $4,900+/yr | $12K+/yr |
Estimated Cost per Use
Disclaimer: Token estimates are approximate and based on typical usage patterns measured from skill prompt sizes. Actual costs vary with input data size, conversation length, and complexity. Estimates use Claude Sonnet 4.6 pricing ($3/1M input, $15/1M output). Cowork and Claude Desktop subscription users (Pro/Max/Team) are not charged per-token — these estimates apply only to direct Anthropic API usage. Running stages individually in fresh sessions uses fewer input tokens than running the full pipeline sequentially, because pipeline mode accumulates conversation history across stages.
Per skill (run individually in a fresh session):
| Stage | Skill Prompt | User Input | Total Input | Output | Est. Cost |
|---|---|---|---|---|---|
| jp-analyze-bias | ~4.5K | ~800 | ~8.1K | ~4.5K | ~$0.09 |
| posting-scorecard | ~9.4K | ~800 | ~13.2K | ~6.0K | ~$0.13 |
| posting-hiring-report | ~7.0K | ~800 | ~11.0K | ~6.0K | ~$0.12 |
| posting-candidate-email | ~5.7K | ~800 | ~9.7K | ~5.7K | ~$0.11 |
| jp-candidate-email | ~3.6K | ~800 | ~7.2K | ~3.6K | ~$0.08 |
| jp-scorecard | ~3.7K | ~800 | ~7.3K | ~3.7K | ~$0.08 |
| posting-description | ~7.7K | ~800 | ~11.6K | ~6.0K | ~$0.12 |
| jp-hiring-kb | ~6.7K | ~800 | ~10.3K | ~6.0K | ~$0.12 |
| jp-write-jd | ~4.2K | ~800 | ~7.8K | ~6.0K | ~$0.11 |
| Standalone total | | | ~86.3K | ~47.5K | ~$0.97 |
Full pipeline (all stages in one session — context accumulates):
| Stage | Base Input | + History | Total Input | Output | Est. Cost |
|---|---|---|---|---|---|
| jp-analyze-bias | ~8.9K | 0 | ~8.9K | ~4.5K | ~$0.09 |
| posting-scorecard | ~13.8K | ~5.2K | ~19.1K | ~6.0K | ~$0.15 |
| posting-hiring-report | ~11.5K | ~12.1K | ~23.6K | ~6.0K | ~$0.16 |
| posting-candidate-email | ~10.2K | ~18.9K | ~29.1K | ~5.7K | ~$0.17 |
| jp-candidate-email | ~8.1K | ~25.4K | ~33.5K | ~3.6K | ~$0.16 |
| jp-scorecard | ~8.2K | ~29.8K | ~38.0K | ~3.7K | ~$0.17 |
| posting-description | ~12.2K | ~34.3K | ~46.6K | ~6.0K | ~$0.23 |
| jp-hiring-kb | ~11.2K | ~41.1K | ~52.3K | ~6.0K | ~$0.25 |
| jp-write-jd | ~8.7K | ~47.9K | ~56.7K | ~6.0K | ~$0.26 |
| Pipeline total | | | ~307.7K | ~47.5K | ~$1.64 |
Running the full pipeline once typically costs $1.15–$2.13 in API tokens (Claude Sonnet 4.6).
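The per-stage figures above follow directly from the stated pricing. A quick check, using the jp-analyze-bias standalone row as the worked example:

```python
# Reproduce the per-stage estimates from the stated Sonnet pricing:
# $3 per 1M input tokens, $15 per 1M output tokens.
def stage_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * 3 + output_tokens / 1e6 * 15

# jp-analyze-bias, run standalone: ~8.1K input, ~4.5K output
print(round(stage_cost(8_100, 4_500), 2))   # → 0.09

# Full pipeline totals: ~307.7K input, ~47.5K output
print(round(stage_cost(307_700, 47_500), 2))  # → 1.64
```

The same function applied to any row of either table reproduces its estimated cost to the cent.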
AI-Powered Features
- Centralized Employment Law KB: Source-level citations to Title VII §703, ADA §12112, ADEA §623, GINA §2000ff, FLSA §213, EEOC Uniform Guidelines 29 CFR §1607, and OFCCP 41 CFR Part 60 — referenced across all skills
- Research-Backed Bias Detection: Scans using Gaucher, Friesen & Kay (2011, JPSP) published masculine/feminine word lists with gender-coded language ratio scoring (>60% masculine = high risk, d=0.48)
- Inclusive Alternative Suggestions: Provides 1:1 rewording suggestions with legal authority citations (e.g., "native English speaker" → "fluent in English" per EEOC National Origin Guidance 2016)
- Inclusivity Scoring: Computes overall inclusivity score (0-100) using research-grounded weighting across 6 bias categories
- Adverse Impact Prevention: Validates job requirements for job-relatedness per Griggs v. Duke Power (1971) and EEOC four-fifths rule (29 CFR §1607.4(D))
- Pay Transparency Compliance: Checks salary range inclusion against 11-state law matrix (CO, CA, WA, NYC, IL, MN, HI, CT, NV, RI, MD) with specific statute citations
- Structured Interview Science: Generates scorecards per Schmidt & Hunter (1998) validity research (structured r=0.51) and Campion et al. (1997) 15-component interview methodology
- Interview Question Legality Screening: Screens every question against protected category checklist citing ADEA, ADA, Title VII, GINA, PDA, USERRA statute sections
- Protective Rejection Language: Candidate emails follow SHRM best practices, with consistency enforced per the Title VII disparate treatment framework
- BARS Rating Methodology: Interview scorecards use Behaviorally Anchored Rating Scales per Smith & Kendall (1963) with "score immediately" protocol
- Document Retention Guidance: Notes EEOC 29 CFR §1602.14 retention requirements (1-year minimum, 2-year for federal contractors per 41 CFR §60-1.12(a))
- Federal Contractor Support: Detects OFCCP requirements and configures VEVRAA (41 CFR §60-300) and Section 503 (41 CFR §60-741) compliance language
- Multi-Source Analysis: Analyzes both plugin-generated and externally-written job postings
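The gender-coded ratio scoring described above can be sketched like this. The word lists here are a small illustrative subset in the spirit of Gaucher, Friesen & Kay (2011); the plugin's actual dictionaries are configurable and larger.

```python
# Illustrative subset of masculine-/feminine-coded terms; the plugin's
# real bias dictionaries are user-configurable and far more complete.
MASCULINE = {"competitive", "dominant", "leader", "ambitious", "decisive"}
FEMININE = {"supportive", "collaborative", "interpersonal", "nurturing", "loyal"}

def masculine_ratio(text: str) -> float:
    """Share of gender-coded words that are masculine-coded (0.0-1.0)."""
    tokens = text.lower().replace(",", " ").split()
    m = sum(t in MASCULINE for t in tokens)
    f = sum(t in FEMININE for t in tokens)
    return m / (m + f) if (m + f) else 0.0

jd = "Seeking a competitive, decisive leader with supportive mentoring skills"
ratio = masculine_ratio(jd)
print(f"{ratio:.0%} masculine-coded", "-> high risk" if ratio > 0.60 else "-> ok")
```

Here three of the four gender-coded words are masculine-coded (75%), which crosses the >60% high-risk threshold the scoring uses.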
Known Limitations & Workarounds
| Limitation | Workaround |
|---|---|
| Bias detection is rule-based, not ML | Manually review suggestions; refine custom bias dictionaries over time |
| No ATS integration | Export scorecards & emails as DOCX, paste into your ATS manually |
| No candidate database | Keep spreadsheet of applicants separately; reference by name in scorecard export |
| Single-user interface | Share exported DOCX files via email or shared folder; collaborate outside Claude |
| Legal disclaimer is boilerplate | Consult employment lawyer before using in regulated hiring—plugin is advisory only |
Context & Performance Guide
Session Management:
- Each Claude conversation is independent; reference prior outputs by pasting them back or exporting to DOCX
- For multi-round hiring (same job, multiple batches), keep one conversation open or paste prior config
Data Volume & Degradation:
- Small jobs (1–3 postings/batch): ~5K tokens each, minimal latency
- Medium jobs (10+ postings/batch): ~50K tokens, 2–5 min per batch
- Large jobs (100+ candidates in one report): Break into 20-candidate batches to stay under ~80K context
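The batching advice above is straightforward to apply. A minimal sketch, assuming you track applicants in a simple list:

```python
# Split a large applicant list into 20-candidate batches so each
# report run stays well under the ~80K-token context ceiling.
def batches(candidates: list, size: int = 20) -> list:
    return [candidates[i:i + size] for i in range(0, len(candidates), size)]

names = [f"candidate-{n}" for n in range(1, 108)]  # 107 applicants
groups = batches(names)
print(len(groups), [len(g) for g in groups])  # 6 batches: five of 20, one of 7
```

Run `generate-report` once per batch, then combine the exported summaries.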
Tips:
- Copy/paste your org's values and job leveling guide once per conversation
- Re-run `analyze-bias` on competitor job postings for benchmarking
- Export DOCX to preserve formatting; markdown is faster for iteration
Requirements
- Claude Desktop (October 2024+) with plugin support
- Cowork mode enabled (Cmd+K → "Settings" → "Enable Cowork Mode")
- Python 3.8+ installed locally
- python-docx library (`pip install python-docx`)
Important Disclaimers
- AI-Generated Content: This plugin uses AI (LLM) technology which can produce inaccurate or incomplete outputs. All content should be treated as a starting point and reviewed for accuracy before use.
- Not Professional Advice: Outputs do not constitute legal, financial, tax, medical, or other professional advice. Consult qualified professionals before making decisions based on generated content.
- No Compliance Guarantee: References to industry standards, regulations, or guidelines are for informational purposes only. This plugin does not guarantee compliance with any law or regulation. Users are responsible for verifying all outputs meet their specific regulatory requirements.
- No Endorsement or Affiliation: Mention of third-party products, standards, or organizations does not imply endorsement, partnership, or certification by those entities.
Ready to use Job Posting & Hiring Suite?
Download this free plugin and start using it in Claude today.
Need something different?
We build custom plugins tailored to your exact workflow.