Automating PostgreSQL and REST API Calls with Claude Code MCP Server Integration and Hooks
At first, I thought, "Isn't Claude Code just for helping write code?" That thinking completely changed the moment I combined MCP servers with Hooks. Watching Claude Code directly explore a PostgreSQL schema, generate SQL, send results to an audit log API, and fire off a Slack notification — all without a single human touch — was what did it.
After reading this article, you'll be able to build a pipeline that chains MCP (Model Context Protocol) and Hooks together to handle the entire loop — "write code → validate in DB → trigger deployment → notify the team" — as a single agent loop. Basic Python syntax and experience writing JSON config files are all you need. If you already have a development DB, the goal of this article is to get your first agent connected to your team's DB and running within 30 minutes.
According to Jellyfish's 2025 report, AI coding tool adoption jumped from 49.2% in January to 69% in October. In this wave, the gap between those who simply "use" these tools and those who "turn them into pipelines" is only going to widen.
Core Concepts
MCP: A USB-C Connector for AI
MCP (Model Context Protocol) is a protocol that Anthropic open-sourced in November 2024. In a single sentence: it's a "standard interface between AI clients and external tools." Just as USB-C connects chargers, monitors, and storage devices through a single port, MCP lets Claude Code connect to PostgreSQL, REST APIs, GitHub, Slack, and any other data source in a uniform way.
MCP (Model Context Protocol): A communication standard between AI clients and external tools and data sources. It operates on top of JSON-RPC 2.0 (a remote procedure call convention that structures requests and responses between client and server), using
stdiotransport in local environments andHTTP SSE(Server-Sent Events, a mechanism where the server pushes a real-time stream to the client) for remote cloud services.
As of early 2026, Google, Microsoft, and OpenAI have all adopted MCP, making it a de facto industry standard, with the number of official and community MCP servers surpassing several thousand. This means the ecosystem has already reached a sufficient level of maturity.
// .claude/settings.json — MCP server registration example
{
"mcpServers": {
"postgres": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@crystaldba/postgres-mcp", "--db-url", "postgresql://user:pass@localhost:5432/mydb"]
},
"github": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"]
},
"slack": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-slack"]
}
}
}With just this one configuration, Claude Code gains the ability to read PostgreSQL schemas, open GitHub PRs, and send Slack messages.
Hooks: Deterministic Automation That Guarantees Execution
There's something worth being honest about here. It might seem like having MCP alone means Claude will handle everything automatically, but LLMs are fundamentally probabilistic. Even if you write "log an audit trail after the query" in a prompt, there's always a chance Claude will skip it.
Hooks solve this problem. These are user-defined handlers registered in settings.json that let you attach automation to Claude's lifecycle events with 100% guaranteed execution.
| Event | When It Fires | Key Uses |
|---|---|---|
PreToolUse |
Immediately before a tool runs | Security checks, blocking dangerous commands |
PostToolUse |
Immediately after a tool runs | Audit logging, CI/CD triggers, notifications |
Stop |
When the agent finishes responding | Sending final reports, cleanup tasks |
StopFailure |
When the agent exits due to failure | Error notifications, rollback triggers |
Key distinction: Hooks are not something the LLM "chooses" — they are execution guaranteed by the system. They are a perfect fit for logic that "must always run," such as production audit trails or security guardrails.
Hook handlers support four types:
| Handler Type | Description |
|---|---|
command |
Run a shell script |
http |
Call an HTTP endpoint |
prompt |
Execute an LLM prompt |
agent |
Run a sub-agent |
A Bird's-Eye View of the Full Architecture
Now that we have the concepts down, seeing how these three layers actually fit together makes everything much clearer at a glance.
[Claude Code Agent]
↕ (MCP Transport: stdio / HTTP SSE)
[MCP Server] ← PostgreSQL, REST API, GitHub, Slack, etc.
↕ (Hooks: PreToolUse / PostToolUse / Stop)
[Automation Scripts / External Webhooks]When these three layers interlock, you get a complete pipeline — "write code → validate in DB → trigger deployment → notify the team" — running as a single loop without human intervention.
Practical Application
How to read this: Examples 1–3 can each be applied independently. Example 4 flows much more naturally if you understand the structure from examples 1–3 first.
Example 1: Automated PostgreSQL Audit Log Pipeline
This is a situation that comes up often in real-world work: you need to track who ran a DB query, when, and with what intent. The first mistake I made when setting this up was concatenating $CLAUDE_TOOL_INPUT directly as a shell string — which immediately exposes you to command injection. I switched to parsing it through Python instead.
This configuration lets Claude access PostgreSQL directly via MCP, while automatically forwarding all query results to an audit log API.
// .claude/settings.json
{
"mcpServers": {
"postgres": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@crystaldba/postgres-mcp", "--db-url", "postgresql://readonly_user:pass@localhost/mydb", "--mode", "restricted"]
}
},
"hooks": {
"PostToolUse": [
{
"matcher": "mcp__postgres__query",
"hooks": [
{
"type": "command",
"command": "python3 scripts/log_query.py '$CLAUDE_TOOL_RESPONSE'"
}
]
}
]
}
}# scripts/log_query.py
import sys
import json
import os
import urllib.request
from datetime import datetime, timezone
def main():
raw = sys.argv[1] if len(sys.argv) > 1 else ""
try:
response_data = json.loads(raw)
except json.JSONDecodeError:
response_data = {"raw": raw}
payload = json.dumps({
"tool": "postgres_query",
"result": response_data,
"timestamp": datetime.now(timezone.utc).isoformat()
}).encode("utf-8")
req = urllib.request.Request(
os.environ.get("AUDIT_LOG_URL", "https://audit.example.com/log"),
data=payload,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {os.environ['AUDIT_API_TOKEN']}"
},
method="POST"
)
try:
urllib.request.urlopen(req)
except Exception as e:
# If audit log failures are silently swallowed, the security purpose is undermined
print(f"[AUDIT ERROR] Failed to send audit log: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()| Component | Role |
|---|---|
--mode restricted |
Limits the production DB to read-only access |
matcher: mcp__postgres__query |
Applies the hook only to query tool calls on the postgres MCP |
| Python parsing | Safe input handling to prevent shell injection |
AUDIT_API_TOKEN environment variable |
Avoids hardcoding the token in the config file |
try/except + stderr |
Explicit error output so audit failures are never silently swallowed |
Example 2: Detect Code Changes → Automatically Trigger CI/CD
If you've found it tedious to manually call a deployment webhook every time you modify a file, this pattern is exactly what you need. The moment Claude modifies a file under src/, it automatically wakes up the CI/CD pipeline.
#!/bin/bash
# scripts/trigger_deploy.sh — registered as a PostToolUse hook
# Parse $CLAUDE_TOOL_INPUT with jq to extract the changed file path
CHANGED_PATH=$(echo "$CLAUDE_TOOL_INPUT" | jq -r '.path // empty')
if [ -z "$CHANGED_PATH" ]; then
exit 0
fi
if echo "$CHANGED_PATH" | grep -q "^src/"; then
curl -s -X POST "https://ci.example.com/webhooks/deploy" \
-H "Authorization: Bearer ${CI_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"file\": $(echo "$CHANGED_PATH" | jq -R .), \"triggered_by\": \"claude-code\"}"
echo "CI triggered for: $CHANGED_PATH"
fi// settings.json — attach the hook to the file write tool
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "bash scripts/trigger_deploy.sh"
}
]
}
]
}
}Example 3: Proactively Block Dangerous Commands with PreToolUse
This confused me at first too — but if you set exitCodeControl: true in a PreToolUse hook and return exit 2, the tool execution itself is blocked before Claude can run it. This is how you completely prevent commands like DROP TABLE from ever reaching a production DB.
# scripts/security_check.py
import sys
import json
import re
import os
DANGEROUS_PATTERNS = [
r"\bDROP\s+TABLE\b",
r"\bDROP\s+DATABASE\b",
r"\bTRUNCATE\b",
r"\bDELETE\s+FROM\b(?!.*WHERE)", # DELETE without WHERE — simplified example
r"rm\s+-rf\s+/",
]
# Note: The DELETE pattern above is a simplified example.
# It may produce false positives with multi-line queries or subqueries.
# For production use, an AST-based SQL parser is recommended.
def main():
tool_input_raw = os.environ.get("CLAUDE_TOOL_INPUT", "")
try:
tool_input = json.loads(tool_input_raw)
except json.JSONDecodeError:
tool_input = {}
command = tool_input.get("command", "") or tool_input.get("query", "")
for pattern in DANGEROUS_PATTERNS:
if re.search(pattern, command, re.IGNORECASE):
print(f"[BLOCKED] Dangerous pattern detected: {pattern}", file=sys.stderr)
sys.exit(2) # exit 2 = block tool execution
sys.exit(0)
if __name__ == "__main__":
main(){
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "python3 scripts/security_check.py",
"exitCodeControl": true
}
]
},
{
"matcher": "mcp__postgres__query",
"hooks": [
{
"type": "command",
"command": "python3 scripts/security_check.py",
"exitCodeControl": true
}
]
}
]
}
}If you've read this far, you already have everything you need to run a basic pipeline. The config files and scripts in examples 1–3 are independent of each other, so you can pick whichever you need and apply it right away.
Example 4: A 3-Stage Multi-Agent Production Pipeline
At scale, a multi-agent architecture with separated roles is far more powerful than a single agent. The "acceptance criteria" and "platform constraints" mentioned here simply mean "the criteria for considering a feature complete" and "the limitations of the deployment environment."
Below is a pattern where separate agents handle planning, review, and implementation, with Hooks wiring them together.
[pm-spec agent]
→ Parse requirements, write acceptance criteria
→ Output: save spec.md
↓
PostToolUse hook (detect Write tool → confirm spec.md created)
↓
[architect-review agent]
→ Validate architecture, check platform constraints
→ Output: save architecture.md
↓
Stop hook → notify team channel via Slack MCP that review is complete
↓
[implementer-tester agent]
→ Write code + run tests
→ StopFailure hook → auto-create GitHub Issue on failureThe simplest approach for passing data between agents is to use spec.md as a shared file. In the Hooks config, a Write tool matcher detects when spec.md is created and triggers the next agent.
// settings.json — trigger the next agent when spec.md is created
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "bash scripts/check_and_trigger_architect.sh"
}
]
}
]
}
}#!/bin/bash
# scripts/check_and_trigger_architect.sh
WRITTEN_PATH=$(echo "$CLAUDE_TOOL_INPUT" | jq -r '.path // empty')
# Only trigger the next stage if spec.md was created
if [ "$WRITTEN_PATH" = "spec.md" ]; then
echo "[PIPELINE] spec.md creation detected → starting architect-review agent"
# The actual agent trigger method depends on your team's CI/CD environment
curl -s -X POST "https://ci.example.com/trigger/architect-review" \
-H "Authorization: Bearer ${CI_TOKEN}"
fiA deeper dive into multi-agent implementation details — inter-agent context sharing and failure recovery strategies — is planned for a future article.
Pros and Cons
Advantages
| Item | Details |
|---|---|
| Deterministic automation | 100% guaranteed execution via Hooks, without relying on LLM judgment |
| Standardized integration | Connect DBs, APIs, and SaaS tools through a single interface using the MCP standard |
| Audit trails | All tool inputs and outputs can be captured via hooks for automatic logging |
| Access control | Proactively block dangerous operations with PreToolUse hooks |
| Mature ecosystem | Thousands of official and community MCP servers ready to use immediately |
Drawbacks and Caveats
| Item | Details | Mitigation |
|---|---|---|
| Expanded attack surface | The trust boundary grows with each MCP server connection | Register only the minimum necessary MCP servers; verify sources |
| Tool Poisoning | Malicious MCP servers can embed hidden prompts in tool descriptions | Directly review tool descriptions or source code from third-party servers |
| Indirect prompt injection | Risk that Claude executes malicious instructions embedded in DB or API response data | Design so that external data is never directly included in the prompt context |
| CVE vulnerabilities | CVE-2025-54794 (path bypass), CVE-2025-54795 (command injection), CVE-2025-59536 (API token exfiltration) | Keep Claude Code and MCP servers updated to the latest version |
| Increased complexity | The more hooks and MCP servers, the harder debugging and maintenance become | Keep hook scripts simple; establish a centralized log collection system |
| Operational dependencies | An external MCP server outage can bring the entire pipeline down | Set timeouts; have fallback logic ready for failures |
Of these, the one I've encountered most often in practice is indirect prompt injection — cases where DB response data contains something like "now execute this command." It happens more often than you'd expect and is tricky to debug. The key is a design that strictly separates external data from the prompt context.
Tool Poisoning: An attack where a malicious MCP server embeds hidden prompts inside the tool's description field, causing Claude to perform unintended actions. When using third-party MCP servers, it's a good idea to directly review the source code or tool descriptions.
Cautions: The Most Common Mistakes in Practice
-
Concatenating
$CLAUDE_TOOL_INPUTdirectly as a shell string in hook scripts — Always parse it through Python or jq. Directly concatenating the string in the shell exposes you to command injection. -
Connecting the Postgres MCP to a production DB in read-write mode — In production environments, always set
--mode restricted(read-only) to be safe. During testing, Claude can unexpectedly execute INSERT or UPDATE statements. -
Hardcoding MCP server authentication tokens in
settings.json—settings.jsonlives in the.claude/directory and is often shared in team repositories. DB URLs and API tokens must always be separated out and managed as environment variables.
Closing Thoughts
Having actually run this pipeline in production, the thing that struck me most was this: Hooks turn "things Claude might forget" into "things the system will never omit." Instead of worrying about the probabilistic nature of LLMs, laying down a deterministic safety net lets Claude make judgments far more freely on top of it.
There's no need to build a grand multi-agent pipeline from the start. I'd encourage you to start small and experience it firsthand.
Three steps you can start with right now:
-
Try connecting the PostgreSQL MCP first — Register
npx -y @crystaldba/postgres-mcpin.claude/settings.json, then type "show me the schema for the users table" in Claude Code. If you already have a development DB, five minutes is all it takes. -
Add simple logging with a PostToolUse hook — Register a
matcher: "mcp__postgres__query"hook and modify example 1'slog_query.pyto save results to a local file. Watching query results accumulate in the file makes how hooks work intuitively clear. -
Attach a PreToolUse security guardrail — Take example 3's
security_check.pyas-is and connect it to the Bash tool. Seeing theexit 2block actually work in practice will give you a much stronger sense of confidence in this architecture.
References
- Claude Code MCP Official Docs — Connect Claude Code to tools via MCP
- Claude Code Hooks Official Reference — Hooks reference
- Automate workflows with hooks — Claude Code Docs
- Model Context Protocol Official Architecture Docs
- postgres-mcp GitHub — crystaldba/postgres-mcp
- Claude Code Security Official Docs
- Check Point Research — RCE and API Token Exfiltration Through Claude Code (CVE-2025-59536)
- The Vulnerable MCP Project — MCP Security Vulnerability Database
- Claude Code Hooks: A Practical Guide to Workflow Automation — DataCamp
- awesome-claude-code — GitHub