Core Product

Agent SDK

A lightweight sidecar that runs inside your agent framework. Captures every tool call, every sub-agent spawn, and every internal prompt — before anything leaves your perimeter.

Install & integrate in a few lines
# Install
pip install bladerun

# Import and initialize
import openai
from bladerun import BladeRun, secure_llm

client = BladeRun(api_key="br_...")

# Wrap your LLM calls — that's it
@secure_llm
def chat(prompt):
    return openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )

# Every call is now logged, scanned, and policy-enforced

SDK capabilities

Deep visibility into internal AI operations with minimal code changes

Fine-Tuned Model Monitoring

Track every interaction with custom and fine-tuned models. Detect unauthorized model usage, unexpected behavioral changes, and adversarial inputs targeting your proprietary models.
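
One way to picture this kind of monitoring is an allowlist check wrapped around every model call. The sketch below is illustrative only: `ALLOWED_MODELS`, `guard_model`, and `complete` are hypothetical names for this example, not part of the SDK's API.

```python
from functools import wraps

# Hypothetical allowlist of approved model identifiers (illustrative only)
ALLOWED_MODELS = {"gpt-4", "ft:gpt-4:acme:support:abc123"}

class UnauthorizedModelError(Exception):
    """Raised when a call targets a model outside the allowlist."""

def guard_model(fn):
    """Reject calls whose `model` keyword argument is not approved."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        model = kwargs.get("model")
        if model not in ALLOWED_MODELS:
            raise UnauthorizedModelError(f"model {model!r} is not approved")
        return fn(*args, **kwargs)
    return wrapper

@guard_model
def complete(*, model, prompt):
    # Stand-in for a real completion call
    return f"[{model}] {prompt}"
```

A real monitor would also log approved calls and flag behavioral drift; the allowlist is just the simplest form of the "unauthorized model usage" check.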

Agent-to-Agent Visibility

When an orchestrator spawns child agents and passes its tools and permissions down, the SDK tracks the full chain. You authorized one agent — it became ten. Now you can see all of them.
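
As a plain-Python illustration of chain tracking (not the SDK's actual internals), the sketch below records a parent-child edge at every spawn, so any agent's full lineage can be reconstructed back to the orchestrator. All names here are hypothetical.

```python
# Each spawn appends a (parent, child) edge, captured at creation time
SPAWN_LOG = []

class Agent:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        if parent is not None:
            SPAWN_LOG.append((parent.name, name))

    def spawn(self, name):
        """Create a child agent; the spawn is recorded automatically."""
        return Agent(name, parent=self)

    def lineage(self):
        """Walk parent pointers back to the root orchestrator."""
        chain, node = [self.name], self.parent
        while node is not None:
            chain.append(node.name)
            node = node.parent
        return list(reversed(chain))

orchestrator = Agent("orchestrator")
scraper = orchestrator.spawn("researcher").spawn("scraper")
# scraper.lineage() -> ["orchestrator", "researcher", "scraper"]
```

The key design point: the edge is captured at spawn time, inside the framework, so the chain survives even when a child inherits its parent's tools and credentials.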

Model Registry Integration

Connect to AWS Bedrock, Google Vertex, Azure ML, and Hugging Face. Unified visibility across every model your organization uses, regardless of where it runs.

Zero Performance Impact

Telemetry ships asynchronously, off the hot path. The SDK captures everything without adding latency to your agent operations or blocking tool calls.
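
The async-telemetry pattern described above can be sketched with a queue and a background thread (this is a generic illustration, not the SDK's actual implementation):

```python
import queue
import threading

# Events go onto an in-memory queue; a daemon thread drains it, so the
# calling agent never waits on network or disk I/O.
events = queue.Queue()
delivered = []  # stand-in for a remote collector

def _drain():
    while True:
        event = events.get()
        delivered.append(event)  # real code would ship this over the wire
        events.task_done()

threading.Thread(target=_drain, daemon=True).start()

def record(event):
    """Enqueue and return immediately: O(1), no blocking I/O."""
    events.put(event)

record({"tool": "search", "latency_ms": 42})
record({"tool": "fetch", "latency_ms": 17})
events.join()  # demo only: wait for the queue to flush before exiting
```

Because `record` only enqueues, the cost on the agent's critical path is a single in-memory operation regardless of how slow the telemetry backend is.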

Works with your stack

Native integrations for popular AI frameworks and orchestrators

LangChain LlamaIndex CrewAI AutoGen Haystack Semantic Kernel OpenAI SDK Anthropic SDK AWS Bedrock Google Vertex

Instrument your AI in minutes

One import. One decorator. Full visibility.

Contact us
Read documentation