API Reference
Complete API reference for the Klira AI SDK. This section provides detailed documentation for all classes, methods, types, and utilities available in the SDK.
Quick Navigation
| Component | Description | Key Features |
|---|---|---|
| Klira AI Class | Main SDK interface | Initialization, configuration, client management |
| Decorators | Function & class decorators | @workflow, @task, @agent, @tool, @guardrails |
| GuardrailsEngine | Policy enforcement engine | Message processing, output checking, policy evaluation |
| Configuration | Configuration management | Environment variables, validation, global config |
| Types | Type definitions & protocols | Data structures, protocols, enums, exceptions |
| Utilities | Helper functions & tools | Framework detection, caching, performance monitoring |
Core SDK Components
Klira AI Class
The main entry point for the SDK. Handles initialization, configuration, and provides access to core components.
```python
from klira.sdk import Klira

# Initialize SDK
klira = Klira.init(
    app_name="MyApp",
    api_key="your-api-key"
)

# Access components
guardrails = Klira.get_guardrails()
client = Klira.get()
```

-> Full Klira AI Class API Reference
Decorators
Universal decorators that automatically adapt to any LLM framework. Add observability, tracing, and governance to your functions.
```python
from klira.sdk.decorators import workflow, guardrails

@workflow(name="chat_workflow")
@guardrails(check_input=True, check_output=True)
def process_chat(user_input: str) -> str:
    # Your workflow logic
    return ai_response(user_input)
```

-> Full Decorators API Reference
GuardrailsEngine
Multi-layer policy enforcement system that processes messages, evaluates policies, and ensures compliance.
```python
from klira.sdk import Klira

guardrails = Klira.get_guardrails()

# Process user input
message = "Can you help me with my account?"
result = await guardrails.process_message(
    message,
    context={"user_id": "user_123", "domain": "customer_service"}
)

if result['allowed']:
    # Process the approved message
    response = handle_approved_message(message)
else:
    # Handle policy violation
    response = result.get('response', 'Request blocked by policy')
```

-> Full GuardrailsEngine API Reference
Configuration
Centralized configuration management with environment variable support and validation.
```python
from klira.sdk.config import KliraConfig, set_config

# Create configuration from environment
config = KliraConfig.from_env()

# Validate configuration
errors = config.validate()
if not errors:
    set_config(config)
```

-> Full Configuration API Reference
Framework Integration
Supported Frameworks
The Klira AI SDK automatically detects and integrates with multiple LLM frameworks:
| Framework | Package | Automatic Detection | Key Features |
|---|---|---|---|
| OpenAI Agents SDK | agents | Agent, Runner, function_tool | Native function tools, conversation management |
| LangChain | langchain | AgentExecutor, chains, tools | Agent executors, callback handlers, tool chains |
| CrewAI | crewai | Agent, Task, Crew | Multi-agent workflows, task management |
| LlamaIndex | llama_index | Query engines, chat engines | Document queries, chat interfaces |
Framework Detection
```python
from klira.sdk.utils.framework_detection import detect_framework

def create_agent():
    # Framework automatically detected based on return type
    return Agent(name="Assistant", instructions="Be helpful")

framework = detect_framework(create_agent)
print(framework)  # Output: "agents_sdk"
```

-> Framework Detection Utilities
Type System
Core Types
```python
from klira.sdk.types import (
    Decision,
    GuardrailProcessingResult,
    GuardrailOutputCheckResult,
    KliraContext,
    Policy,
    ViolationMode
)

# Simple decision
decision = Decision(allowed=True, confidence=0.95)

# Comprehensive result
result = GuardrailProcessingResult(
    allowed=False,
    confidence=0.98,
    decision_layer="fast_rules",
    violated_policies=["profanity_filter"],
    blocked_reason="Inappropriate content detected"
)
```

Protocols
```python
from klira.sdk.types import LLMServiceProtocol, FrameworkAdapterProtocol

# Custom LLM service
class MyLLMService:
    async def evaluate_policy(self, message: str, policy: dict) -> dict:
        # Implementation
        return {"allowed": True, "confidence": 0.9}

# Protocol ensures type safety
service: LLMServiceProtocol = MyLLMService()
```

Utilities
Framework Detection
Automatically identify LLM frameworks in use.
Caching
High-performance caching for framework detection, policy evaluation, and LLM results.
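The caching utilities themselves are documented in the full utilities reference. As an illustration of the memoization pattern such caches rely on, here is a minimal sketch using Python's standard `functools.lru_cache`; the function name and lookup table below are hypothetical stand-ins, not SDK APIs:

```python
from functools import lru_cache

# Hypothetical stand-in for a framework-detection cache: repeated
# lookups for the same module are served from the cache instead of
# re-running detection logic.
@lru_cache(maxsize=1024)
def detect_framework_cached(module_name: str) -> str:
    known = {"agents": "agents_sdk", "langchain": "langchain", "crewai": "crewai"}
    return known.get(module_name, "unknown")

detect_framework_cached("agents")            # first call: computed
detect_framework_cached("agents")            # second call: cache hit
print(detect_framework_cached.cache_info())  # hits=1, misses=1
```

The same idea applies to policy evaluation and LLM results: keying on the input lets expensive checks run once per distinct value.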
Performance Monitoring
Built-in performance monitoring and timing utilities.
Error Handling
Standardized error handling with graceful degradation.
Context Management
Hierarchical context management for distributed tracing.
```python
from klira.sdk.tracing import set_klira_context
from klira.sdk.utils.performance import Timer, performance_monitor

# Set distributed tracing context
set_klira_context(
    user_id="user_123",
    conversation_id="conv_123"
)

# Monitor performance
@performance_monitor(log_threshold_ms=100)
def expensive_operation():
    with Timer("data_processing"):
        return process_large_dataset()
```

-> Full Utilities API Reference
Common Usage Patterns
Basic SDK Setup
```python
import os

from klira.sdk import Klira
from klira.sdk.decorators import workflow, guardrails

# Initialize SDK
klira = Klira.init(
    app_name="MyApplication",
    api_key=os.getenv("KLIRA_API_KEY"),
)

@workflow(name="secure_chat", user_id="user_123")
@guardrails(check_input=True, check_output=True)
def secure_chat_handler(user_input: str) -> str:
    # Your secure chat logic
    return process_with_llm(user_input)
```

Advanced Configuration
```python
from klira.sdk.config import KliraConfig, set_config

# Production configuration
config = KliraConfig.from_env(
    app_name="ProductionApp",
    debug_mode=False,
    trace_content=False,  # Privacy in production
    framework_detection_cache_size=10000
)

# Validate and apply
errors = config.validate()
if not errors:
    set_config(config)
else:
    print(f"Configuration errors: {errors}")
```

Custom Guardrails Integration
```python
from klira.sdk import Klira
from klira.sdk.types import GuardrailProcessingResult

async def custom_message_handler(message: str, user_id: str) -> str:
    guardrails = Klira.get_guardrails()

    # Process with context
    result: GuardrailProcessingResult = await guardrails.process_message(
        message,
        context={
            "user_id": user_id,
            "domain": "general",
            "conversation_id": f"conv_{user_id}"
        }
    )

    if result['error']:
        # Handle system error
        return "Service temporarily unavailable"
    elif not result['allowed']:
        # Handle policy violation
        return result.get('response', 'Request blocked by policy')
    else:
        # Process approved message
        return await process_approved_message(message)
```

Framework-Specific Integration
OpenAI Agents SDK
```python
from agents import Agent, function_tool
from klira.sdk.decorators import agent, tool

@tool(name="calculator")
@function_tool()
def calculate(expression: str) -> str:
    # Demo only: eval() on untrusted input is unsafe in production
    return str(eval(expression))

@agent(name="math_assistant")
def create_math_agent():
    return Agent(
        name="MathBot",
        instructions="You are a helpful math assistant",
        tools=[calculate]
    )
```

LangChain
```python
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.tools import BaseTool
from klira.sdk.decorators import workflow, tool

@tool(name="search_tool")
class SearchTool(BaseTool):
    name = "web_search"
    description = "Search the web"

    def _run(self, query: str) -> str:
        return search_web(query)

@workflow(name="langchain_agent")
def create_agent_executor():
    agent = create_openai_tools_agent(llm, [SearchTool()], prompt)
    return AgentExecutor(agent=agent, tools=[SearchTool()])
```

CrewAI
```python
from crewai import Agent, Task, Crew, Process
from klira.sdk.decorators import workflow, task

@task(name="research_task")
def create_research_task():
    return Task(
        description="Research the given topic thoroughly",
        expected_output="Comprehensive research report"
    )

@workflow(name="research_crew")
def create_research_crew():
    researcher = Agent(role="Researcher", goal="Find information")
    writer = Agent(role="Writer", goal="Write reports")

    return Crew(
        agents=[researcher, writer],
        tasks=[create_research_task()],
        process=Process.sequential
    )
```

Error Handling
Exception Types
```python
from klira.sdk.types import (
    KliraError,
    KliraPolicyViolation,
    KliraConfigError
)

try:
    result = await guardrails.process_message(message)
except KliraPolicyViolation as e:
    print(f"Policy violation: {e.policy_id}")
except KliraConfigError as e:
    print(f"Configuration error: {e.config_field}")
except KliraError as e:
    print(f"SDK error: {e.error_code}")
```

Graceful Error Handling
```python
from klira.sdk.utils.error_handling import handle_errors

@handle_errors(fail_closed=False, default_return_on_error={"allowed": True})
async def safe_guardrails_check(message: str):
    # If guardrails fail, default to allowing the message
    return await guardrails.process_message(message)
```

Performance Best Practices
Async Processing
```python
import asyncio
from typing import List

from klira.sdk import Klira

async def batch_process_messages(messages: List[str], context: dict) -> List[dict]:
    guardrails = Klira.get_guardrails()

    # Process messages in parallel
    tasks = [
        guardrails.process_message(msg, context)
        for msg in messages
    ]

    results = await asyncio.gather(*tasks, return_exceptions=True)
    return [r if not isinstance(r, Exception) else {"error": str(r)} for r in results]
```

Performance Monitoring
```python
import logging

from klira.sdk.utils.performance import performance_monitor, Timer

logger = logging.getLogger(__name__)

@performance_monitor(log_threshold_ms=200)
async def monitored_workflow(data):
    with Timer("preprocessing") as prep:
        processed = preprocess(data)

    with Timer("main_processing") as main:
        result = await main_processing(processed)

    logger.info(f"Timing - prep: {prep.elapsed_ms}ms, main: {main.elapsed_ms}ms")
    return result
```

Environment Configuration
Development Environment
```bash
export KLIRA_API_KEY="klira_dev_your_key"
export KLIRA_DEBUG="true"
export KLIRA_VERBOSE="true"
export KLIRA_TRACE_CONTENT="true"
export KLIRA_POLICIES_PATH="./dev-policies"
```

Production Environment
```bash
export KLIRA_API_KEY="klira_prod_your_key"
export KLIRA_DEBUG="false"
export KLIRA_VERBOSE="false"
export KLIRA_TRACE_CONTENT="false"
export KLIRA_TELEMETRY="false"
export KLIRA_FRAMEWORK_CACHE_SIZE="10000"
export KLIRA_POLICIES_PATH="/app/policies"
```

-> Complete Environment Variables Reference
Related Documentation
- Getting Started Guide - SDK introduction and basic setup
- Framework Integration - Framework-specific integration guides
- Guardrails Documentation - Policy enforcement and configuration
- Advanced Configuration - Production deployment and scaling
- Observability - Monitoring and analytics
Need help? Check our GitHub repository or visit hub.getklira.com for support.