Quick Start Guide
Get up and running with Klira AI SDK in 5 minutes. This guide will have you monitoring and governing your first LLM application with minimal setup.
Prerequisites
- Python 3.10 or higher (a quick version check is shown below)
- Basic familiarity with Python and LLM applications
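Not sure which interpreter your environment resolves to? Here's a quick check (plain Python, nothing Klira-specific):

```python
import sys

# Fail fast if the interpreter is older than the SDK requires
assert sys.version_info >= (3, 10), f"Python 3.10+ required, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])
```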
Step 1: Install Klira AI SDK
```bash
pip install klira
```

Step 2: Get Your API Key
- Sign up at hub.getklira.com
- Navigate to your dashboard and copy your API key
- Set it as an environment variable:
```bash
# Windows PowerShell
$env:KLIRA_API_KEY="your-api-key-here"

# Linux/macOS
export KLIRA_API_KEY="your-api-key-here"
```
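Before moving on, you can confirm the key is actually visible to Python (a plain-Python sanity check, not part of the SDK):

```python
import os

# If this prints the warning, re-export KLIRA_API_KEY in the same shell
# session you will run your app from.
if os.getenv("KLIRA_API_KEY"):
    print("KLIRA_API_KEY is set")
else:
    print("Warning: KLIRA_API_KEY is not set")
```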
Step 3: Your First Monitored Function
Create a file called `quickstart.py`:
```python
import os
from klira.sdk import Klira
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Initialize Klira AI SDK
klira = Klira.init(
    app_name="QuickStartApp",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

# Add monitoring and governance to any function
@workflow(
    name="hello_world",
    organization_id="quickstart_org",
    project_id="demo_project"
)
@guardrails()
def hello_llm(user_input: str) -> str:
    """A simple function that mimics LLM behavior."""
    # Your LLM logic would go here.
    # For this demo, we'll just return a simple response.
    if "weather" in user_input.lower():
        return "I'd be happy to help with weather information! However, I need your location to provide accurate weather data."
    elif "hello" in user_input.lower():
        return "Hello! How can I assist you today?"
    else:
        return f"You asked: '{user_input}'. I'm a demo function, but in a real app, this would go to an LLM."


# Test the function
if __name__ == "__main__":
    test_inputs = [
        "Hello there!",
        "What's the weather like?",
        "Tell me about quantum computing"
    ]

    print("Testing Klira AI SDK integration...")
    print("=" * 50)

    for i, test_input in enumerate(test_inputs, 1):
        print(f"\n{i}. Input: {test_input}")
        result = hello_llm(test_input)
        print(f"   Output: {result}")

    print("\nSuccess! Your function now has:")
    print("  Distributed tracing")
    print("  Performance monitoring")
    print("  Policy enforcement")
    print("  Automatic instrumentation")
```
Step 4: Run Your First Example

```bash
python quickstart.py
```

You should see output like:
```
Testing Klira AI SDK integration...
==================================================

1. Input: Hello there!
   Output: Hello! How can I assist you today?

2. Input: What's the weather like?
   Output: I'd be happy to help with weather information! However, I need your location to provide accurate weather data.

3. Input: Tell me about quantum computing
   Output: You asked: 'Tell me about quantum computing'. I'm a demo function, but in a real app, this would go to an LLM.

Success! Your function now has:
  Distributed tracing
  Performance monitoring
  Policy enforcement
  Automatic instrumentation
```

Step 5: Add Real LLM Integration
Now let’s integrate with a real LLM. Here’s an example with OpenAI:
```python
import os
import openai
from klira.sdk import Klira
from klira.sdk.decorators import workflow, guardrails
from klira.sdk.utils.context import set_hierarchy_context

# Initialize Klira AI
klira = Klira.init(
    app_name="OpenAI-Demo",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

@workflow(
    name="openai_chat",
    organization_id="my_org",
    project_id="openai_project"
)
@guardrails()
def chat_with_openai(user_message: str) -> str:
    """Chat with OpenAI with automatic monitoring and governance."""
    try:
        response = openai.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_message}
            ],
            max_tokens=150
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

# Test with real LLM
if __name__ == "__main__":
    test_message = "Explain what observability means in AI systems"
    response = chat_with_openai(test_message)
    print(f"Question: {test_message}")
    print(f"Answer: {response}")
```
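If you prefer the explicit client style from openai>=1.0, only the OpenAI call changes; the Klira decorators stay the same. A sketch, reusing the imports and `Klira.init` from the script above:

```python
from openai import OpenAI  # explicit client style, requires openai>=1.0

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

@workflow(name="openai_chat_client", organization_id="my_org", project_id="openai_project")
@guardrails()
def chat_with_openai_client(user_message: str) -> str:
    """Same monitored workflow, using an explicit OpenAI client instance."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_message}],
        max_tokens=150,
    )
    return response.choices[0].message.content
```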
Step 6: Framework-Specific Examples
With LangChain
```bash
pip install klira[langchain]
```

```python
import os
from klira.sdk import Klira
from klira.sdk.decorators import workflow, tool
from klira.sdk.utils.context import set_hierarchy_context
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.prompts import ChatPromptTemplate

# Initialize Klira AI
klira = Klira.init(
    app_name="LangChain-Demo",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

@tool(name="weather_tool", organization_id="demo", project_id="langchain")
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    # Mock weather function
    return f"Weather in {location}: Sunny, 72°F"

@workflow(name="langchain_agent", organization_id="demo", project_id="langchain")
def run_langchain_agent(query: str) -> str:
    """Run a LangChain agent with monitoring."""
    llm = ChatOpenAI(temperature=0)

    # Create agent with tools
    tools = [get_weather]
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])

    agent = create_openai_tools_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools)

    result = agent_executor.invoke({"input": query})
    return result["output"]

# Test
response = run_langchain_agent("What's the weather like in San Francisco?")
print(response)
```

With CrewAI
```bash
pip install klira[crewai]
```

```python
import os
from klira.sdk import Klira
from klira.sdk.decorators import crew, agent, task
from klira.sdk.utils.context import set_hierarchy_context
from crewai import Agent, Task, Crew, Process

# Initialize Klira AI
klira = Klira.init(
    app_name="CrewAI-Demo",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)

# Set user context for all decorated functions
set_hierarchy_context(user_id="user_123")

@agent(name="writer_agent", organization_id="demo", project_id="crewai")
def create_writer():
    return Agent(
        role="Technical Writer",
        goal="Write clear and informative content",
        backstory="You are an expert technical writer with AI knowledge",
        verbose=True
    )

@task(name="writing_task", organization_id="demo", project_id="crewai")
def create_writing_task(writer_agent: Agent, topic: str):
    return Task(
        description=f"Write a brief explanation of {topic}",
        agent=writer_agent,
        expected_output="A clear, informative paragraph"
    )

@crew(name="writing_crew", organization_id="demo", project_id="crewai")
def create_writing_crew(topic: str):
    writer = create_writer()
    task = create_writing_task(writer, topic)

    return Crew(
        agents=[writer],
        tasks=[task],
        process=Process.sequential,
        verbose=True
    )

# Test
crew = create_writing_crew("machine learning")
result = crew.kickoff()
print(result)
```

Step 7: Custom Policies (Optional)
Add custom governance policies by creating a policy file:
version: "1.0.0"policies: - id: "demo_policy" name: "Demo Content Policy" domains: ["demo", "test", "example"] description: "Ensures demo content is appropriate" action: "allow" guidelines: - "Keep all demo content professional and appropriate" - "Avoid controversial topics in examples" - "Focus on technical learning outcomes" patterns: - "(?i)inappropriate.*demo"Configure Klira AI to use your custom policies:
Configure Klira AI to use your custom policies:

```python
import os
from klira.sdk import Klira

klira = Klira.init(
    app_name="CustomPolicyDemo",
    api_key=os.getenv("KLIRA_API_KEY"),
    policies_path="./policies",
    enabled=True
)
```
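Before pointing `policies_path` at the directory, you can sanity-check that the YAML parses (a sketch; assumes PyYAML is installed, and `./policies/demo_policy.yaml` is a hypothetical file name):

```python
import yaml  # pip install pyyaml

# Hypothetical file name: adjust to wherever you saved the policy above
with open("./policies/demo_policy.yaml") as f:
    doc = yaml.safe_load(f)

print([p["id"] for p in doc["policies"]])  # expected: ['demo_policy']
```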
Step 8: View Your Data
- Dashboard: Visit hub.getklira.com to see your traces and metrics
- Local Logs: Check your application logs for tracing information
- Custom OTLP: Configure a custom OpenTelemetry endpoint if needed
```python
# Default configuration (uses https://api.getklira.com automatically)
klira = Klira.init(
    app_name="MyApp",
    api_key=os.getenv("KLIRA_API_KEY"),
    enabled=True
)
```

Note: To use a custom OpenTelemetry endpoint instead of Klira AI's default (https://api.getklira.com), set `opentelemetry_endpoint="https://your-custom-otlp-endpoint.com"` in `Klira.init()`. This is only needed if you want to send telemetry data to your own OTLP collector rather than using Klira AI's built-in telemetry service.
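Putting that note into code, a custom-collector setup would look like this (a sketch; the `opentelemetry_endpoint` parameter name comes from the note above):

```python
import os
from klira.sdk import Klira

# Send telemetry to your own OTLP collector instead of the default endpoint
klira = Klira.init(
    app_name="MyApp",
    api_key=os.getenv("KLIRA_API_KEY"),
    opentelemetry_endpoint="https://your-custom-otlp-endpoint.com",
    enabled=True,
)
```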
What You’ve Accomplished
In just 5 minutes, you’ve:
- Installed the Klira AI SDK
- Instrumented your first function with monitoring
- Added policy enforcement and guardrails
- Integrated with real LLM services
- Tested framework-specific integrations
- Created custom governance policies
Next Steps
Dive Deeper
- First Example - Detailed walkthrough with explanations
- Architecture Overview - Understand how Klira AI works
- Creating Custom Policies - Build advanced governance rules
Framework-Specific Guides
Production Setup
Common Issues
API Key Not Working
```bash
# Verify your API key is set
echo $KLIRA_API_KEY       # Linux/macOS
echo $env:KLIRA_API_KEY   # Windows PowerShell
```

Import Errors
```bash
# Ensure Klira AI is installed
pip show klira

# Reinstall if needed
pip install --upgrade klira
```

Framework Detection Issues
```python
# Check if your framework is detected
from klira.sdk.utils.framework_detection import detect_framework
print(detect_framework())  # Should show your framework
```

Getting Help
- Documentation: Browse the full docs in this directory
- Examples: Check out the examples section
- Issues: Report bugs on GitHub
- Support: Email ricardo@getklira.com
Congratulations! You now have a monitored, governed LLM application. The same patterns work across any framework - just swap out the LLM code while keeping the Klira AI decorators.