Using Streamlit
Streamlit is a Python framework for building interactive web applications. This guide shows how to integrate Azerion Intelligence with Streamlit to create AI-powered applications.
What is Streamlit?
Streamlit allows you to create web applications using simple Python scripts, with automatic UI generation and real-time updates.
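For example, a complete Streamlit app can be just a few lines (a hypothetical `hello.py`, purely illustrative and separate from the integration below):

```python
# hello.py - minimal Streamlit script; the UI is generated from these calls
import streamlit as st

st.title("Hello, Streamlit")
name = st.text_input("Your name")  # Streamlit reruns the script on each interaction
if name:
    st.write(f"Nice to meet you, {name}!")
```

Run it with `streamlit run hello.py` and Streamlit serves it as a web app.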
Prerequisites
- Python 3.7+ installed
- Your Azerion Intelligence API key from https://app.azerion.ai/account#api-tokens
- Streamlit and the OpenAI Python client installed:

  ```bash
  pip install streamlit openai
  ```
Integration Steps
1. Install Required Libraries
```bash
pip install streamlit openai
```
2. Configure API Access
Create a `.streamlit/secrets.toml` file:

```toml
AZERION_API_KEY = "your_azerion_api_key_here"
```
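Streamlit automatically loads this file and exposes its values through `st.secrets`, as used in the next step. Keep `secrets.toml` out of version control so your API key is never committed.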
3. Initialize Azerion Intelligence Client
```python
import streamlit as st
from openai import OpenAI

@st.cache_resource
def init_ai_client():
    return OpenAI(
        api_key=st.secrets["AZERION_API_KEY"],
        base_url="https://api.azerion.ai/v1"
    )

client = init_ai_client()
```
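The `@st.cache_resource` decorator creates the client once and reuses it across script reruns, rather than reconnecting on every user interaction.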
Basic Example: Simple Chatbot
Create `app.py` with the following chatbot implementation:
```python
import streamlit as st
from openai import OpenAI

# Initialize Azerion Intelligence client
@st.cache_resource
def init_ai_client():
    return OpenAI(
        api_key=st.secrets["AZERION_API_KEY"],
        base_url="https://api.azerion.ai/v1"
    )

client = init_ai_client()

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# App title
st.title("AI Assistant")

# Chat interface
def get_ai_response(user_input):
    response = client.chat.completions.create(
        model="meta.llama3-3-70b-instruct-v1:0",
        messages=[
            {"role": "system", "content": "You are a helpful AI assistant."},
            {"role": "user", "content": user_input}
        ],
        temperature=0.7,
        max_tokens=1024
    )
    return response.choices[0].message.content

# Chat input
if prompt := st.chat_input("Ask me anything..."):
    # Add user message
    st.session_state.messages.append({"role": "user", "content": prompt})

    # Get AI response
    with st.spinner("Thinking..."):
        response = get_ai_response(prompt)

    # Add AI response
    st.session_state.messages.append({"role": "assistant", "content": response})

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])
```
Run the application:
```bash
streamlit run app.py
```
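Note that `get_ai_response` sends only the latest user message, so the model has no memory of earlier turns. If you want multi-turn context, one option is to pass the accumulated history from `st.session_state` instead; a minimal sketch (`get_ai_response_with_history` is a hypothetical helper, reusing the same client and model):

```python
def get_ai_response_with_history():
    # Send the full conversation so the model can use earlier turns as context
    response = client.chat.completions.create(
        model="meta.llama3-3-70b-instruct-v1:0",
        messages=[{"role": "system", "content": "You are a helpful AI assistant."}]
        + st.session_state.messages,
        temperature=0.7,
        max_tokens=1024,
    )
    return response.choices[0].message.content
```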
Troubleshooting
API Key Issues
- Ensure your API key is correctly set in `.streamlit/secrets.toml`
- Verify the API key is valid and has proper permissions
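One way to surface a missing key early is to check `st.secrets` before creating the client (a small sketch, not part of the example above):

```python
# Fail fast with a clear message if the key is not configured
if "AZERION_API_KEY" not in st.secrets:
    st.error("AZERION_API_KEY is missing from .streamlit/secrets.toml")
    st.stop()
```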
Connection Errors
- Check your internet connection
- Verify the base URL: `https://api.azerion.ai/v1`
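If requests cannot reach the API, the OpenAI client raises `openai.APIConnectionError`; catching it lets you show a friendlier message in the UI (a sketch around the call from the example above):

```python
from openai import APIConnectionError

try:
    response = get_ai_response(prompt)
except APIConnectionError:
    st.error("Could not reach the Azerion Intelligence API. Check your network and the base URL.")
    response = None
```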
Model Not Found
- Ensure you're using the correct model name: `meta.llama3-3-70b-instruct-v1:0`
- Check if your account has access to the requested model
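If the API exposes the OpenAI-compatible model listing endpoint (an assumption, not confirmed here), you can check which models your key can access:

```python
# List models available to this API key (assumes the /v1/models endpoint is supported)
for model in client.models.list():
    st.write(model.id)
```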
Rate Limiting
- Implement error handling for API rate limits
- Add retry logic with exponential backoff
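For example, wrapping the same chat call in retry logic: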
```python
import time

def get_ai_response_with_retry(user_input, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="meta.llama3-3-70b-instruct-v1:0",
                messages=[
                    {"role": "system", "content": "You are a helpful AI assistant."},
                    {"role": "user", "content": user_input}
                ]
            )
            return response.choices[0].message.content
        except Exception as e:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s, 4s, ...
                continue
            st.error(f"AI service error: {str(e)}")
            return None
```