A lightweight sidecar that runs inside your agent framework. Captures every tool call, every sub-agent spawn, and every internal prompt — before anything leaves your perimeter.
```bash
# Install
pip install bladerun
```

```python
# Import and initialize
import openai  # the wrapped call below uses the OpenAI client

from bladerun import BladeRun, secure_llm

client = BladeRun(api_key="br_...")

# Wrap your LLM calls — that's it
@secure_llm
def chat(prompt):
    return openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )

# Every call is now logged, scanned, and policy-enforced
```
Deep visibility into internal AI operations without code changes
Track every interaction with custom and fine-tuned models. Detect unauthorized model usage, unexpected behavioral changes, and adversarial inputs targeting your proprietary models.
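Unauthorized-model detection reduces to checking each call against an approved inventory. A minimal sketch of that idea, assuming an allowlist-based policy (the names `ALLOWED_MODELS` and `check_model_call` are illustrative, not SDK API):

```python
# Hypothetical policy check: flag calls to models outside an approved set.
ALLOWED_MODELS = {"gpt-4", "acme-finetune-v2"}

def check_model_call(model: str) -> bool:
    """Return True if the model is authorized; False should raise an alert."""
    return model in ALLOWED_MODELS

print(check_model_call("gpt-4"))       # authorized
print(check_model_call("shadow-llm"))  # unauthorized -> alert
```

In practice the allowlist would come from your model registry rather than a hard-coded set.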
When an orchestrator spawns child agents and passes its tools and permissions down, the SDK tracks the full chain. You authorized one agent — it became ten. Now you can see all of them.
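The core of chain tracking is recording a parent ID at every spawn, so any child is attributable back to the agent you originally authorized. A minimal sketch of that bookkeeping (the names `spawn_agent` and `SPAWN_LOG` are illustrative, not SDK API):

```python
import itertools

_ids = itertools.count(1)
SPAWN_LOG = []  # list of (agent_id, parent_id) pairs

def spawn_agent(parent_id=None):
    """Record each spawn so the full parent chain is reconstructable."""
    agent_id = next(_ids)
    SPAWN_LOG.append((agent_id, parent_id))
    return agent_id

# One authorized orchestrator fans out into three children:
root = spawn_agent()
children = [spawn_agent(parent_id=root) for _ in range(3)]
# Walking SPAWN_LOG recovers the entire tree rooted at `root`.
```

A real SDK would also attach the tools and permissions inherited at each hop, but the parent-pointer log is what makes the chain visible at all.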
Connect to AWS Bedrock, Google Vertex, Azure ML, and Hugging Face. Unified visibility across every model your organization uses, regardless of where it runs.
Async telemetry with minimal overhead. The SDK captures everything off the hot path, without blocking tool calls or adding measurable latency to your agent operations.
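The standard way to keep telemetry off the hot path is a non-blocking enqueue on the caller's side and a background worker that ships events. A generic sketch of that pattern (not the SDK's internals) using a queue and a daemon thread:

```python
import queue
import threading

events = queue.Queue()
shipped = []  # stand-in for the telemetry backend

def _drain():
    # Background worker: takes events off the queue and ships them,
    # so the agent's call path never waits on I/O.
    while True:
        event = events.get()
        if event is None:  # sentinel: stop the worker
            break
        shipped.append(event)

worker = threading.Thread(target=_drain, daemon=True)
worker.start()

def emit(event):
    """Non-blocking: the caller only pays the cost of a queue put."""
    events.put(event)

emit({"tool": "search", "latency_ms": 12})
emit({"tool": "code_exec", "latency_ms": 48})

events.put(None)  # flush and stop (for the demo)
worker.join()
```

A production version would batch events and tolerate backend outages, but the caller-side contract is the same: `emit` returns immediately.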
Native integrations for popular AI frameworks and orchestrators
One import. Full visibility. No code changes required.