GEO + Crew hook bundles, auto-audit content edits, auto-activate persona on session start
Stop hook with file-pattern filter triggers GEO audit when you edit markdown. SessionStart hook activates the right Crew persona based on cwd. Two bundles, two MCPs, both production-ready.
Two more SaaS-MCP bundles, same pattern as before. GEO does AI-visibility scoring across LLMs (geo.studiomeyer.io, 24 tools); Crew does persona switching (crew.studiomeyer.io, 10 tools, 8 personas). Each has one obvious hook.
Step 1: GEO, Stop hook with file-filter triggers audit
When you edit content (markdown files in /blog, /recipes, /lessons, etc.), you should know if the edit improved or hurt your AI-visibility score. The Stop hook with an if-filter on *.md and *.mdx does exactly that:
{
  "Stop": [
    {
      "hooks": [
        {
          "type": "mcp_tool",
          "server": "studiomeyer-geo",
          "tool": "geo_check",
          "input": {
            "url": "https://studiomeyer.academy",
            "brand": "StudioMeyer Academy"
          },
          "timeout": 60,
          "statusMessage": "GEO: auto-audit after content edit..."
        }
      ],
      "if": "Edit(*.md|*.mdx)|Write(*.md|*.mdx)"
    }
  ]
}
The if-filter ensures the hook only fires on Stop AFTER an Edit/Write to a markdown file in the session. Pure code edits don't trigger a GEO check.
geo_check queries 8 LLM platforms (Claude, GPT, Gemini, Perplexity, Grok, Copilot, Bing, You.com) for brand mentions on the given URL and returns a 0-100 score. Default mode: training (uses LLM training-data recall, free). For native web-search, swap to mode: "search" (costs $0.30-0.50/run depending on provider).
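To flip the hook to search mode, add the extra field to the input block above. A minimal sketch, assuming mode is passed as a regular tool argument inside input and that your hook config lives in a JSON file (the ~/.claude/hooks.json path below is a placeholder, not a documented location):

# Add "mode": "search" to the geo_check input in place (path is a placeholder)
jq '.Stop[0].hooks[0].input.mode = "search"' ~/.claude/hooks.json > /tmp/hooks.json \
  && mv /tmp/hooks.json ~/.claude/hooks.json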
Step 2: GEO, UserPromptSubmit on visibility questions
Less aggressive than Memory's auto-recall: only surface geo_simulate when the user explicitly asks about visibility:
{
  "UserPromptSubmit": [
    {
      "hooks": [
        {
          "type": "command",
          "command": "/home/simple/.claude/hooks/geo-trigger-check.sh",
          "timeout": 3
        }
      ]
    }
  ]
}
Where geo-trigger-check.sh is a small bash hook:
#!/usr/bin/env bash
# UserPromptSubmit hook: read the hook payload from stdin, extract the prompt,
# and inject GEO context only when the user asks about visibility.
input=$(cat)
prompt=$(echo "$input" | jq -r '.prompt // .user_prompt // ""')
if echo "$prompt" | grep -qiE 'how visible|visibility|geo score|cited by|llm cite'; then
  echo '{"hookSpecificOutput":{"additionalContext":"GEO context: if checking visibility, call geo_simulate(url) for a free zero-key estimate or geo_check(url, brand) for full LLM coverage."}}'
fi
Why bash and not mcp_tool? Because geo_simulate needs a URL parameter that we cannot reliably parse from the user prompt. Better to inject context that nudges the assistant to call the tool with the right arguments.
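To sanity-check the trigger outside Claude, pipe a fake payload into the script. The top-level prompt field is what the jq filter reads; the exact payload schema your hook runner sends is an assumption:

# Trigger phrase present: prints the additionalContext JSON
echo '{"prompt":"how visible is studiomeyer.academy to LLMs?"}' \
  | /home/simple/.claude/hooks/geo-trigger-check.sh

# No trigger phrase: prints nothing, hook stays silent
echo '{"prompt":"refactor the auth module"}' \
  | /home/simple/.claude/hooks/geo-trigger-check.sh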
Step 3: Crew, SessionStart activates default persona per project
Different projects, different personas. The Crew bundle picks the right one automatically based on cwd:
{
  "SessionStart": [
    {
      "hooks": [
        {
          "type": "command",
          "command": "/home/simple/.claude/hooks/crew-auto-persona.sh",
          "timeout": 5
        }
      ]
    }
  ]
}
Where crew-auto-persona.sh maps directory patterns to personas:
#!/usr/bin/env bash
# SessionStart hook: map the working directory to a default Crew persona
# and suggest activating it via context injection.
cwd="${PWD:-$(pwd)}"
case "$cwd" in
  *studiomeyer*|*academy*|*nex-hq*) persona="cto" ;;
  *aklow*|*kunden*)                 persona="pm" ;;
  *blog*|*content*)                 persona="cmo" ;;
  *legal*|*agb*|*datenschutz*)      persona="legal-advisor" ;;
  *)                                persona="cto" ;;  # default fallback
esac
cat <<EOF
{
  "hookSpecificOutput": {
    "additionalContext": "Crew suggestion: cwd matches persona '$persona'. Call crew_activate({persona: '$persona'}) to load the focused-expert system prompt for this domain."
  }
}
EOF
Why a bash hook + context injection instead of mcp_tool calling crew_activate directly? Because activating a persona swaps the entire system prompt mid-session, which is a bigger commitment than a transparent tool call. Better to nudge and let the assistant decide whether to actually activate based on what the user does next.
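The mapping is just as easy to smoke-test locally, since the script only reads $PWD:

cd /home/simple/aklow && /home/simple/.claude/hooks/crew-auto-persona.sh
# suggests persona 'pm' (matches *aklow*)

cd /home/simple/academy && /home/simple/.claude/hooks/crew-auto-persona.sh
# suggests persona 'cto' (matches *academy*)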
Step 4: Crew, Stop hook captures feedback
When the session ends, log a short feedback note so the next session starts with continuity:
{
  "Stop": [
    {
      "hooks": [
        {
          "type": "mcp_tool",
          "server": "studiomeyer-crew",
          "tool": "crew_feedback",
          "input": {
            "rating": 4,
            "note": "Session ended at ${cwd}",
            "tags": ["auto-stop-hook"]
          },
          "timeout": 10,
          "statusMessage": "Crew: logging feedback..."
        }
      ]
    }
  ]
}
This logs a default 4-star feedback. The user can override mid-session by calling crew_feedback explicitly with a different rating + note. Phase-1 Crew is in test mode, so this is just data collection, no Stripe gate.
Step 5: GDPR + idempotency notes
GEO:
- Idempotent. geo_check and geo_simulate are deterministic for the same URL+brand+mode within a 1-hour cache window. Re-running 5 minutes apart returns identical scores.
- Fast. geo_simulate under 2s. geo_check 5-15s in training mode, 30-50s in search mode (8 LLM API calls in parallel).
- GDPR. GEO sends URL + brand to LLM provider APIs (OpenAI, Anthropic, Google, etc.). Document this in the recipe README. The URL must be public, no internal pages.
Crew:
- Idempotent. crew_activate swaps the system prompt deterministically. crew_feedback writes one row per call, NOT idempotent: if Stop fires twice (e.g. the user resumes), you get two feedback rows. Acceptable for now; see the dedup sketch after this list.
- Fast. Both under 500ms.
- GDPR. Crew tenant data is isolated per user. Persona definitions are public (free tier) or user-owned (Pro tier).
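If the duplicate rows ever matter, one way to dedupe is to swap the mcp_tool Stop hook for a command-type hook that guards on a per-session marker file before logging. A minimal sketch; the session_id payload field and the marker path are assumptions, and the actual crew_feedback call is left as a placeholder:

#!/usr/bin/env bash
# Hypothetical dedup guard: log Crew feedback at most once per session.
input=$(cat)
session_id=$(echo "$input" | jq -r '.session_id // "unknown"')  # field name is an assumption
marker="/tmp/crew-feedback-${session_id}"
if [ -e "$marker" ]; then
  exit 0  # this session already logged feedback, skip the duplicate
fi
touch "$marker"
# ...invoke crew_feedback here via your MCP client of choice...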
Step 6: Verify the bundles
# Edit a markdown file in /home/simple/academy/content
claude
# In claude: "edit /home/simple/academy/content/recipes/phase-1-foundation/1.1-install.mdx, add one sentence at the top"
# Ctrl-D
Check https://geo.studiomeyer.io/dashboard: a new geo_check run should be logged with the academy URL.
cd /home/simple/aklow
claude
# Should see: "Crew suggestion: cwd matches persona 'pm'..."
Done
Two more SaaS-MCPs are wired into your Claude Code lifecycle. Content edits trigger visibility checks. Project switches surface the right persona.