Quick Start Guide

Get up and running with Klira AI SDK in 5 minutes. This guide will have you monitoring and governing your first LLM application with minimal setup.

Prerequisites

  • Python 3.10 or higher
  • Basic familiarity with Python and LLM applications

Step 1: Install Klira AI SDK

pip install klira

Step 2: Get Your API Key

  1. Sign up at hub.getklira.com
  2. Navigate to your dashboard and copy your API key
  3. Set it as an environment variable:
# Windows PowerShell
$env:KLIRA_API_KEY="your-api-key-here"
# Linux/macOS
export KLIRA_API_KEY="your-api-key-here"
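
Optionally, confirm the key is visible to Python before continuing; a minimal check using only the standard library:

import os

# Prints True if the key is visible to this Python process
print(os.getenv("KLIRA_API_KEY") is not None)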

Step 3: Your First Monitored Function

Create a file called quickstart.py:

import os

from klira.sdk import Klira
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Initialize Klira AI SDK
klira = Klira.init(
    app_name="QuickStartApp",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

# Add monitoring and governance to any function
@workflow(
    name="hello_world",
    organization_id="quickstart_org",
    project_id="demo_project"
)
@guardrails()
def hello_llm(user_input: str) -> str:
    """A simple function that mimics LLM behavior."""
    # Your LLM logic would go here.
    # For this demo, we just return a canned response.
    if "weather" in user_input.lower():
        return "I'd be happy to help with weather information! However, I need your location to provide accurate weather data."
    elif "hello" in user_input.lower():
        return "Hello! How can I assist you today?"
    else:
        return f"You asked: '{user_input}'. I'm a demo function, but in a real app, this would go to an LLM."

# Test the function
if __name__ == "__main__":
    test_inputs = [
        "Hello there!",
        "What's the weather like?",
        "Tell me about quantum computing"
    ]

    print("Testing Klira AI SDK integration...")
    print("=" * 50)

    for i, test_input in enumerate(test_inputs, 1):
        print(f"\n{i}. Input: {test_input}")
        result = hello_llm(test_input)
        print(f"   Output: {result}")

    print("\nSuccess! Your function now has:")
    print("  Distributed tracing")
    print("  Performance monitoring")
    print("  Policy enforcement")
    print("  Automatic instrumentation")

Step 4: Run Your First Example

python quickstart.py

You should see output like:

Testing Klira AI SDK integration...
==================================================

1. Input: Hello there!
   Output: Hello! How can I assist you today?

2. Input: What's the weather like?
   Output: I'd be happy to help with weather information! However, I need your location to provide accurate weather data.

3. Input: Tell me about quantum computing
   Output: You asked: 'Tell me about quantum computing'. I'm a demo function, but in a real app, this would go to an LLM.

Success! Your function now has:
  Distributed tracing
  Performance monitoring
  Policy enforcement
  Automatic instrumentation

Step 5: Add Real LLM Integration

Now let’s integrate with a real LLM. Here’s an example with OpenAI:

import os

import openai

from klira.sdk import Klira
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Initialize Klira AI
klira = Klira.init(
    app_name="OpenAI-Demo",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

@workflow(
    name="openai_chat",
    organization_id="my_org",
    project_id="openai_project"
)
@guardrails()
def chat_with_openai(user_message: str) -> str:
    """Chat with OpenAI with automatic monitoring and governance."""
    try:
        response = openai.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_message}
            ],
            max_tokens=150
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

# Test with real LLM
if __name__ == "__main__":
    test_message = "Explain what observability means in AI systems"
    response = chat_with_openai(test_message)
    print(f"Question: {test_message}")
    print(f"Answer: {response}")

Step 6: Framework-Specific Examples

With LangChain

pip install klira[langchain]

import os

from klira.sdk import Klira
from klira.sdk.decorators import workflow, tool
from klira.sdk.utils.context import set_hierarchy_context
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.prompts import ChatPromptTemplate

# Initialize Klira AI
klira = Klira.init(
    app_name="LangChain-Demo",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

@tool(name="weather_tool", organization_id="demo", project_id="langchain")
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    # Mock weather function
    return f"Weather in {location}: Sunny, 72°F"

@workflow(name="langchain_agent", organization_id="demo", project_id="langchain")
def run_langchain_agent(query: str) -> str:
    """Run a LangChain agent with monitoring."""
    llm = ChatOpenAI(temperature=0)

    # Create agent with tools
    tools = [get_weather]
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])

    agent = create_openai_tools_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools)

    result = agent_executor.invoke({"input": query})
    return result["output"]

# Test
response = run_langchain_agent("What's the weather like in San Francisco?")
print(response)

With CrewAI

pip install klira[crewai]

import os

from klira.sdk import Klira
from klira.sdk.decorators import crew, agent, task
from klira.sdk.utils.context import set_hierarchy_context
from crewai import Agent, Task, Crew, Process

# Initialize Klira AI
klira = Klira.init(
    app_name="CrewAI-Demo",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

@agent(name="writer_agent", organization_id="demo", project_id="crewai")
def create_writer():
    return Agent(
        role="Technical Writer",
        goal="Write clear and informative content",
        backstory="You are an expert technical writer with AI knowledge",
        verbose=True
    )

@task(name="writing_task", organization_id="demo", project_id="crewai")
def create_writing_task(writer_agent: Agent, topic: str):
    return Task(
        description=f"Write a brief explanation of {topic}",
        agent=writer_agent,
        expected_output="A clear, informative paragraph"
    )

@crew(name="writing_crew", organization_id="demo", project_id="crewai")
def create_writing_crew(topic: str):
    writer = create_writer()
    task = create_writing_task(writer, topic)
    return Crew(
        agents=[writer],
        tasks=[task],
        process=Process.sequential,
        verbose=True
    )

# Test
crew = create_writing_crew("machine learning")
result = crew.kickoff()
print(result)

Step 7: Custom Policies (Optional)

Add custom governance policies by creating a policy file:

policies/custom_policies.yaml

version: "1.0.0"
policies:
  - id: "demo_policy"
    name: "Demo Content Policy"
    domains: ["demo", "test", "example"]
    description: "Ensures demo content is appropriate"
    action: "allow"
    guidelines:
      - "Keep all demo content professional and appropriate"
      - "Avoid controversial topics in examples"
      - "Focus on technical learning outcomes"
    patterns:
      - "(?i)inappropriate.*demo"

Configure Klira AI to use your custom policies:

import os

from klira.sdk import Klira

klira = Klira.init(
    app_name="CustomPolicyDemo",
    api_key=os.getenv("KLIRA_API_KEY"),
    policies_path="./policies",
    enabled=True
)

Step 8: View Your Data

  1. Dashboard: Visit hub.getklira.com to see your traces and metrics
  2. Local Logs: Check your application logs for tracing information
  3. Custom OTLP: Configure a custom OpenTelemetry endpoint if needed

# Default configuration (uses https://api.getklira.com automatically)
klira = Klira.init(
    app_name="MyApp",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

Note: To use a custom OpenTelemetry endpoint instead of Klira AI’s default (https://api.getklira.com), set opentelemetry_endpoint="https://your-custom-otlp-endpoint.com". This is only needed if you want to send telemetry data to your own OTLP collector rather than using Klira AI’s built-in telemetry service.
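
For example, a minimal sketch pointing the SDK at your own collector (the endpoint URL is a placeholder):

# Send telemetry to your own OTLP collector instead of Klira AI's default
klira = Klira.init(
    app_name="MyApp",
    api_key=os.getenv("KLIRA_API_KEY"),
    opentelemetry_endpoint="https://your-custom-otlp-endpoint.com",  # placeholder
    enabled=True
)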

What You’ve Accomplished

In just 5 minutes, you’ve:

  • Installed Klira AI SDK
  • Instrumented your first function with monitoring
  • Added policy enforcement and guardrails
  • Integrated with real LLM services
  • Tested framework-specific integrations
  • Created custom governance policies

Next Steps

Dive Deeper

  1. First Example - Detailed walkthrough with explanations
  2. Architecture Overview - Understand how Klira AI works
  3. Creating Custom Policies - Build advanced governance rules

Framework-Specific Guides

  1. OpenAI Agents Integration
  2. LangChain Integration
  3. CrewAI Integration

Production Setup

  1. Production Configuration
  2. Performance Tuning
  3. Security Best Practices

Common Issues

API Key Not Working

# Verify your API key is set
echo $KLIRA_API_KEY # Linux/macOS
echo $env:KLIRA_API_KEY # Windows PowerShell

Import Errors

# Ensure Klira AI is installed
pip show klira
# Reinstall if needed
pip install --upgrade klira
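
If pip show finds the package but imports still fail, check that your script runs under the same Python environment; a quick standard-library check (assuming the distribution name klira matches the install command above):

import importlib.metadata

import klira

# Show where the package was imported from and which version is installed
print(klira.__file__)
print(importlib.metadata.version("klira"))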

Framework Detection Issues

# Check if your framework is detected
from klira.sdk.utils.framework_detection import detect_framework
print(detect_framework()) # Should show your framework

Getting Help

  • Documentation: Browse the full docs in this directory
  • Examples: Check out the examples section
  • Issues: Report bugs on GitHub

Congratulations! You now have a monitored, governed LLM application. The same patterns work with any framework: just swap out the LLM code and keep the Klira AI decorators.