Getting Started with Klira AI SDK
The Universal LLM Observability & Governance Platform
Klira AI SDK is the industry-standard solution for LLM observability, tracing, and governance. Trusted by leading AI teams, Klira AI provides comprehensive monitoring and policy enforcement across any LLM framework with zero configuration required.
Why Klira AI SDK?
Universal Framework Integration
Unlike point solutions that require framework-specific implementations, Klira AI SDK provides a single, unified API that works seamlessly across:
- OpenAI Agents - Native integration with OpenAI’s agent framework
- LangChain - Full ecosystem support including LangGraph and LangSmith
- CrewAI - Multi-agent workflow monitoring and governance
- LlamaIndex - RAG application observability and policy enforcement
- Custom Frameworks - Extensible adapter system for any LLM implementation
Enterprise-Grade Observability
Built on OpenTelemetry standards, Klira AI SDK delivers production-ready observability:
```python
import os

from klira.sdk import Klira
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Initialize once, monitor everything
klira = Klira.init(
    app_name="production-app",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True,
)

# Set user context
set_hierarchy_context(user_id="user_123")

@workflow(name="user_interaction", organization_id="acme", project_id="ai-assistant")
@guardrails()
def handle_user_request(request: str) -> str:
    # Your LLM logic - automatically monitored and governed
    return llm_response
```

Result: Complete visibility into your LLM operations with distributed tracing, performance metrics, and policy enforcement.
Multi-Layer Governance
Klira AI’s policy enforcement system provides comprehensive governance through three evaluation layers:
- Fast Rules Engine - Regex-based pattern matching for immediate policy decisions
- Policy Augmentation - Context-aware prompt enhancement with compliance guidelines
- LLM Fallback - Advanced LLM-based evaluation for complex policy scenarios
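To make the first layer concrete, here is a minimal, self-contained sketch of how a regex-based fast-rules check might behave. The `FAST_RULES` table, `PolicyDecision` type, and `fast_rules_check` function are illustrative assumptions for this sketch, not Klira AI's actual rule format or API:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyDecision:
    allowed: bool
    rule: Optional[str] = None  # name of the rule that fired, if any

# Illustrative rules: regex pattern -> rule name (not Klira's real schema)
FAST_RULES = {
    r"\b\d{3}-\d{2}-\d{4}\b": "block_ssn",                    # US Social Security numbers
    r"(?i)\bguaranteed returns\b": "block_financial_claims",  # case-insensitive match
}

def fast_rules_check(text: str) -> PolicyDecision:
    """Layer 1: cheap regex matching; deny on first hit, else pass through."""
    for pattern, rule in FAST_RULES.items():
        if re.search(pattern, text):
            return PolicyDecision(allowed=False, rule=rule)
    return PolicyDecision(allowed=True)
```

Because this layer is pure pattern matching, requests it blocks never reach the more expensive augmentation or LLM-fallback layers.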
Quick Start Paths
[Fast] 5-Minute Quick Start
Get Klira AI SDK running in your application immediately. Perfect for evaluation and proof-of-concept implementations.
Ideal for: Initial evaluation, demos, rapid prototyping
[Package] Complete Installation Guide
Comprehensive setup covering production environments, framework-specific configurations, and enterprise deployment patterns.
Ideal for: Production deployments, enterprise environments, complex integrations
[Launch] End-to-End Tutorial
Build a complete LLM application with monitoring and governance from scratch. Includes best practices, error handling, and production considerations.
Ideal for: Learning core concepts, understanding best practices, building production applications
[Config] Configuration Reference
Essential configuration patterns for different deployment scenarios, from development to enterprise production.
Ideal for: DevOps teams, production deployments, compliance requirements
Architecture Overview
Klira AI SDK implements a clean, layered architecture that integrates seamlessly with your existing infrastructure:

Key Capabilities
Automatic Framework Detection
Klira AI SDK automatically detects and configures appropriate adapters for your LLM framework:
```python
from klira.sdk.decorators import workflow
from klira.sdk.utils.context import set_hierarchy_context

# Set user context
set_hierarchy_context(user_id="user_123")

# Works with any framework - no manual configuration required
@workflow(name="langchain_workflow")
def process_with_langchain():
    # LangChain implementation
    return agent_executor.invoke({"input": query})

@workflow(name="crewai_workflow")
def process_with_crewai():
    # CrewAI implementation
    return crew.kickoff({"topic": topic})
```

Hierarchical Context Management
Organize your LLM operations with enterprise-grade context hierarchy:
```
Organization -> Project -> Agent -> Conversation -> User
```

This structure enables:
- Cost Attribution - Track spending by business unit
- Performance Analysis - Identify bottlenecks across teams
- Compliance Reporting - Generate audit trails by organization
- Access Control - Implement role-based policy enforcement
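As an illustration of cost attribution, usage records tagged with hierarchy fields can be rolled up at any level. The record schema and `cost_by_level` helper below are hypothetical, not Klira AI's actual export format:

```python
from collections import defaultdict

# Hypothetical usage records tagged with hierarchy fields
records = [
    {"organization": "acme", "project": "ai-assistant", "cost_usd": 0.12},
    {"organization": "acme", "project": "ai-assistant", "cost_usd": 0.08},
    {"organization": "acme", "project": "search", "cost_usd": 0.30},
]

def cost_by_level(records, level):
    """Sum spend per distinct value of one hierarchy level."""
    totals = defaultdict(float)
    for record in records:
        totals[record[level]] += record["cost_usd"]
    return dict(totals)

project_costs = cost_by_level(records, "project")       # spend per project
org_costs = cost_by_level(records, "organization")      # spend per business unit
```

The same grouping applies to latency or error counts, which is what makes a consistent hierarchy valuable for performance analysis and compliance reporting as well.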
Production-Ready Governance
Implement comprehensive AI governance with minimal code changes:
```python
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Set user context
set_hierarchy_context(user_id="user_123")

@workflow(name="financial_advisor")
@guardrails()  # Automatic compliance enforcement
def provide_financial_guidance(query: str) -> str:
    # Automatic policy enforcement for financial regulations
    return financial_llm_response(query)
```

Enterprise Use Cases
Financial Services
- Compliance Monitoring - Ensure responses meet regulatory requirements
- Risk Management - Detect and prevent inappropriate financial advice
- Audit Trails - Complete traceability for regulatory reporting
Healthcare
- HIPAA Compliance - Protect patient data in LLM interactions
- Medical Accuracy - Prevent medical advice from general-purpose models
- Access Control - Role-based policy enforcement
Legal Technology
- Confidentiality Protection - Prevent disclosure of sensitive information
- Professional Standards - Ensure responses meet legal professional standards
- Client Attribution - Track usage and costs by client matter
Integration Patterns
Microservices Architecture
```python
from typing import Dict

from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Set user context
set_hierarchy_context(user_id="user_123")

# Service A - Customer Support
@workflow(name="customer_support", organization_id="support", project_id="chatbot")
@guardrails()
def handle_support_request(request: str) -> str:
    return support_response(request)

# Service B - Sales Assistant
@workflow(name="sales_assistant", organization_id="sales", project_id="lead_qualification")
@guardrails()
def qualify_lead(conversation: str) -> Dict:
    return lead_analysis(conversation)
```

Event-Driven Systems
```python
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Set user context
set_hierarchy_context(user_id="user_123")

@workflow(name="process_user_event")
@guardrails()
async def handle_user_event(event: UserEvent) -> None:
    # Async processing with full observability
    response = await llm_processor.process(event.content)
    await event_bus.publish(ProcessedEvent(response))
```

Batch Processing
```python
from typing import List

from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Set user context
set_hierarchy_context(user_id="user_123")

@workflow(name="batch_document_processing")
@guardrails()
def process_document_batch(documents: List[Document]) -> List[ProcessedDocument]:
    # Batch processing with individual item tracing
    return [process_document(doc) for doc in documents]
```

Performance & Scale
Benchmarked Performance
- Latency Overhead: < 2ms per operation
- Memory Footprint: < 10MB baseline
- Throughput: Tested at 10,000+ requests/second
- Concurrency: Full async/await support
Scalability Features
- Distributed Caching - Redis integration for multi-instance deployments
- Load Balancing - Stateless design for horizontal scaling
- Resource Management - Configurable resource limits and throttling
Next Steps
Choose your integration path based on your needs:
| Use Case | Recommended Path | Time Investment |
|---|---|---|
| Evaluation | Quick Start | 5 minutes |
| Development | Complete Tutorial | 30 minutes |
| Production | Installation Guide + Configuration | 2 hours |
| Enterprise | All guides + Advanced Configuration | 1 day |
Advanced Topics
After completing the getting started guides, explore:
- Core Concepts - Deep dive into SDK architecture
- Framework Integration - Framework-specific patterns
- Policy Engineering - Advanced governance strategies
- Production Deployment - Enterprise deployment patterns
Support & Community
Enterprise Support
- Technical Support - Direct access to engineering team
- Implementation Services - Guided integration and optimization
- Custom Adapters - Framework-specific integration development
- Compliance Consulting - Regulatory requirement implementation
Community Resources
- Documentation - Comprehensive guides and API reference
- GitHub - Open source contributions and issue tracking
- Blog - Best practices and case studies
Ready to implement world-class LLM observability?
Klira AI SDK - Trusted by leading AI teams worldwide