
Secure your AI agents.

Runtime security for agent tool execution

Intercept every tool call before it runs.
Block dangerous filesystem, shell, network, and database access.

Open source. MIT licensed.

from spidershield import SpiderGuard, Decision

guard = SpiderGuard(policy="balanced")

# Agent attempts a dangerous command
result = guard.check("execute_shell", {
    "command": "rm -rf /"
})
# result.decision == Decision.DENY
# result.reason == "Destructive command blocked"
Works with: LangChain · OpenAI Agents · CrewAI · AutoGen · MCP Servers

Runtime Guard

Enforce security policies on every agent tool call in real time.

  • Filesystem protection
  • Shell command restrictions
  • Network access control
  • Database operation limits
Learn more
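As an illustration of the kind of checks listed above, here is a minimal stand-in policy. This is not SpiderShield's actual engine; the sandbox root, tool names, and command patterns are hypothetical:

```python
import re
from pathlib import Path

ALLOWED_ROOT = Path("/workspace")  # hypothetical sandbox root
DESTRUCTIVE = re.compile(r"\brm\s+-rf\b|\bmkfs\b|\bdd\s+if=")

def check_tool_call(tool: str, args: dict) -> str:
    """Return 'ALLOW' or 'DENY' for a proposed tool call."""
    if tool == "write_file":
        target = Path(args["path"]).resolve()
        # Deny writes that escape the sandbox root
        if ALLOWED_ROOT not in target.parents and target != ALLOWED_ROOT:
            return "DENY"
    if tool == "execute_shell" and DESTRUCTIVE.search(args.get("command", "")):
        return "DENY"
    return "ALLOW"

print(check_tool_call("execute_shell", {"command": "rm -rf /"}))      # DENY
print(check_tool_call("write_file", {"path": "/etc/passwd"}))         # DENY
print(check_tool_call("write_file", {"path": "/workspace/out.txt"}))  # ALLOW
```

The real guard covers far more rules, but the shape is the same: every proposed call is evaluated against a policy before anything executes.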

Data Protection

Detect and redact secrets, PII, and prompt injection in tool outputs.

  • API keys & tokens
  • Credit card numbers
  • SSN & personal data
  • Prompt injection patterns
Learn more
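The redaction step can be pictured with a simplified sketch. These regex patterns are illustrative only, not SpiderShield's actual DLP rule set:

```python
import re

# Simplified DLP patterns (illustrative, not the shipped rule set)
PATTERNS = {
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched secrets/PII with a labeled placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

print(redact("key=sk-abc123def456ghi789jkl012 ssn=123-45-6789"))
# key=[REDACTED:api_key] ssn=[REDACTED:ssn]
```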

Trust Intelligence

Know which MCP servers are safe before you deploy them.

  • Security scores (A-F)
  • Checks for 46 issue codes
  • Trust graph analysis
  • Dependency scanning
SpiderRating
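The letter grades pair with a numeric score. As an illustration of how such a mapping could work (the cutoffs below are assumptions, not SpiderRating's published thresholds):

```python
def letter_grade(score: float) -> str:
    """Map a 0-10 security score to a letter grade.

    Cutoffs are illustrative; SpiderRating's real thresholds
    are not stated on this page.
    """
    for cutoff, grade in [(9.0, "A"), (7.0, "B"), (5.0, "C"), (3.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"

print(letter_grade(7.67))  # B
print(letter_grade(6.84))  # C
```

Under these assumed cutoffs, the scores shown in the Top Rated table below land on the same letters the table lists.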

Why generic guardrails aren't enough

LLM guardrails protect prompts. SpiderShield protects tool execution.

Attack | Without SpiderShield | With SpiderShield
Agent calls rm -rf / | Executed. System destroyed. | DENIED: destructive command blocked
Agent reads /etc/passwd | File contents leaked. | DENIED: system file access blocked
Agent sends data to a C2 server | Data exfiltrated silently. | DENIED: suspicious network target
Tool output contains API keys | Keys exposed in response. | REDACTED: secret pattern detected
Agent installs untrusted MCP server | Arbitrary code runs unchecked. | DENIED: unverified MCP server

Where SpiderShield fits

Policy enforcement before every tool execution.

AI Agent Framework (LangChain / OpenAI / CrewAI / AutoGen)
        ↓
SpiderShield Guard
  • Policy Engine: ALLOW / DENY / ESCALATE
  • DLP Scanner: PII, secrets, injection
  • Audit Logger: JSONL, queryable
        ↓
Tool Execution (MCP Servers / APIs / Shell / Filesystem)
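The pipeline above amounts to before/after middleware around every tool call. Here is a hedged sketch with stub policy, DLP, and audit steps (the function names and stub logic are illustrative, not SpiderShield's internals):

```python
import json
import time

def policy_check(tool: str, args: dict) -> str:
    # Stub policy: deny raw shell access, allow everything else
    return "DENY" if tool == "execute_shell" else "ALLOW"

def dlp_scan(output: str) -> str:
    # Stub DLP pass: redact a known-sensitive marker in tool output
    return output.replace("secret-token", "[REDACTED]")

def audit(event: dict) -> None:
    # One JSON object per line, mirroring a JSONL audit log
    print(json.dumps({"ts": time.time(), **event}))

def guarded_call(tool: str, args: dict, execute):
    """Run `execute` only if the policy allows it, then scan its output."""
    decision = policy_check(tool, args)
    audit({"tool": tool, "decision": decision})
    if decision == "DENY":
        raise PermissionError(f"{tool} blocked by policy")
    return dlp_scan(execute(**args))

clean = guarded_call("read_file", {"path": "notes.txt"},
                     lambda path: f"{path}: secret-token inside")
print(clean)  # notes.txt: [REDACTED] inside
```

The key design point is that the guard sits between the framework and the tool: the agent never gets a chance to run a denied call, and never sees unscanned output.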

Get started in 3 lines

pip install spidershield

from spidershield import SpiderGuard, Decision

guard = SpiderGuard(policy="balanced", dlp="redact")

# Before tool execution
result = guard.check("execute_sql",
    {"query": "DROP TABLE users"})
assert result.decision == Decision.DENY

# After tool execution — scan output for secrets
clean = guard.after_check("read_file", raw_output)
# API keys automatically redacted
Works with: LangChain · OpenAI Agents · CrewAI · AutoGen · MCP Servers
46 security rules · 647 tests passing · 3,500+ servers rated · MIT license

What People Say

View all →

"SpiderShield catches issues that generic guardrails completely miss." (@security_dev)

"Finally a tool that scans MCP source code, not just metadata." (@mcp_builder)

"We blocked 3 prompt injection attempts in the first week." (@ai_team_lead)

SpiderRating

Security Index for the MCP Ecosystem

3,500+ MCP servers scanned with SpiderShield. Security scores, issue codes, and trust data.

Top Rated

#1 github-mcp-server · B (7.67) · Verified
#2 filesystem-mcp-server · B (7.50) · Verified
#3 brave-search-mcp · B (7.32) · Verified
#4 postgres-mcp-server · B (7.15) · Verified
#5 sqlite-mcp-server · C (6.84)

Open Source

SpiderShield is fully open source. MIT license.

  • Runtime guard
  • Security scanner (46 rules)
  • Policy engine (3 presets)
  • DLP scanner
  • CLI tools
View on GitHub
Coming Soon

SpiderShield Cloud

Enterprise security for AI agents. Know who executed what tool, when, and why.

  • Security telemetry & dashboards
  • Central policy control
  • Audit logs & compliance
  • Incident investigation
  • Team RBAC & SSO
Request Early Access →