How to Build AI Agents in Databutton
Creating and Implementing AI Agents Using Databutton and Phidata
AI Agents extend the capabilities of Large Language Models (LLMs) by performing specific tasks. In Databutton, you can build one simply through prompting. Here’s an outline of how to create one:
Create an API (Python backend) in Databutton: You can do this manually by clicking the "New API" option, or ask Databutton to create one for you.
Choose a Suitable LLM Orchestration Tool: There are many tools available, such as Phidata, crewAI, LangChain, or LlamaIndex. Choose the one that fits your needs.
Prompt Databutton about the Agent you want to create: Provide a clear description of the agent, or pass the documentation URL of the Python package you’re using. Include a phrase like “Research about...” in your prompt to direct Databutton to look up the library.
Databutton's Real-Time Web Search: Databutton searches the available sources and uses what it finds to generate a functional API.
Walkthrough with Phidata Assistant
Prerequisite
It's good to have foundational knowledge of Phidata and the tools it supports for building an Assistant. These tools are essential for creating AI Agents.
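As a quick orientation, here is a minimal sketch of a standalone Phidata Assistant with a single tool. It assumes phidata and yfinance are installed and that an OpenAI API key is available in the environment; the Databutton-generated API later in this walkthrough builds on the same pattern.
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat
from phi.tools.yfinance import YFinanceTools

# A minimal Assistant with one tool; the LLM decides when to call yfinance
assistant = Assistant(
    llm=OpenAIChat(model="gpt-4-turbo"),
    tools=[YFinanceTools(stock_price=True)],
    show_tool_calls=True,
)

# Ask a question and print the markdown-formatted answer
assistant.print_response("What is the current price of NVDA?", markdown=True)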
Prompt used for triggering the Phidata Assistant. Example prompt:
Can you research about Phidata and create a simple yfinance Assistant for me?
Databutton performs real-time web searches to generate the relevant code required for creating a Phidata Assistant with the necessary tools.

If necessary, instruct Databutton to "Proceed" with additional tasks such as installing packages and testing the API.
API generation
from fastapi import APIRouter
from fastapi.responses import JSONResponse
import databutton as db
from phi.assistant import Assistant
from phi.tools.yfinance import YFinanceTools
from phi.llm.openai import OpenAIChat
from pydantic import BaseModel

# Router for endpoints
router = APIRouter()

# Retrieve the OpenAI API key from secrets storage
openai_api_key = db.secrets.get("OPENAI_API_KEY")

# Initialize the assistant
assistant = Assistant(
    name="Finance Assistant",
    llm=OpenAIChat(model="gpt-4-turbo", api_key=openai_api_key),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, stock_fundamentals=True)],
    show_tool_calls=True,
    description="You are an investment analyst that researches stock prices, analyst recommendations, and stock fundamentals.",
    instructions=["Format your response using markdown and use tables to display data where possible."],
)

class StockInfoResponse(BaseModel):
    response: str

@router.get("/stock-info/{ticker}", response_model=StockInfoResponse)
def get_stock_info(ticker: str):
    try:
        print(f"Fetching stock info for ticker: {ticker}")
        # The assistant streams its answer in chunks; join them into one string
        response_generator = assistant.run(f"Share the {ticker} stock price and analyst recommendations", markdown=True)
        response = "".join([chunk for chunk in response_generator])
        print(f"Assistant's response: {response}")
        return StockInfoResponse(response=response)
    except ValueError as ve:
        print(f"ValueError: {ve}")
        return JSONResponse(status_code=400, content={"message": "Invalid input provided."})
    except ConnectionError as ce:
        print(f"ConnectionError: {ce}")
        return JSONResponse(
            status_code=503,
            content={"message": "Service unavailable. Please try again later."},
        )
    except Exception as e:
        print(f"Error: {e}")
        return JSONResponse(status_code=500, content={"message": "Internal Server Error"})
The generated code can be further edited and extended as needed.
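Once the API is deployed, something like the following could be used to call the generated endpoint. The base URL below is a placeholder, not the actual Databutton URL scheme; replace it with your app's API URL.
import requests

# Placeholder base URL -- substitute your Databutton app's API URL
BASE_URL = "https://your-app.databutton.app/api"

# Call the stock-info endpoint defined by the router above
resp = requests.get(f"{BASE_URL}/stock-info/NVDA")
resp.raise_for_status()
print(resp.json()["response"])  # markdown-formatted answer from the Assistant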