Prompting up your backend (APIs)
The initial prompt is key when building any backend. A backend in Databutton consists of Python APIs.
Structuring the prompt - an input and output model is a must for the API
While building an API, we have noticed that a well-defined argument model works best. It is therefore essential to clearly specify the inputs and outputs for the API.

I would like to build an OpenAI LLM-powered API. The input would be a user query and the output would be the LLM response.
The Databutton agent will install the necessary packages it needs to do its job, and will ask you for the required API keys (like an OpenAI key). Your API keys are stored as secrets in Databutton, leveraging Google's secret store behind the scenes.
The initial prompt with a short description of the LLM-powered API was enough for the agent to plan further; in this case it asks for an API key (the OpenAI API key).


On receiving the API key, the agent proceeds to write and, when necessary, debug the code, ultimately building the FastAPI endpoint.
from pydantic import BaseModel
from databutton_app import router
import databutton as db
from openai import OpenAI


class LLMRequest(BaseModel):
    user_query: str


class LLMResponse(BaseModel):
    llm_response: str


@router.post("/llm-query")
def llm_query(body: LLMRequest) -> LLMResponse:
    OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY")
    client = OpenAI(api_key=OPENAI_API_KEY)
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": body.user_query},
        ],
    )
    llm_response = completion.choices[0].message.content
    return LLMResponse(llm_response=llm_response)
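Once the endpoint is ready, you can try it out from any HTTP client. Below is a minimal sketch of calling the generated /llm-query endpoint with the Python requests library; the base URL here is a placeholder, so replace it with your own app's API URL.
import requests

# Hypothetical base URL - replace with your own Databutton app's API URL
API_BASE = "https://your-app.databutton.app/api"

response = requests.post(
    f"{API_BASE}/llm-query",
    json={"user_query": "What is Databutton?"},
)
response.raise_for_status()
print(response.json()["llm_response"])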
Is Databutton aware of my Python package in use?
Databutton is trained on the most common AI stacks, for instance OpenAI, LangChain, and CohereAI. If you have specific suggestions, please let us know - we can easily include them.
It uses Databutton's own SDK to fetch the API key from storage.
import databutton as db
from openai import OpenAI
...
OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY") # Databutton SDK
client = OpenAI(api_key=OPENAI_API_KEY) # Using the latest OpenAI SDK
...
Can I build an API with a package that is beyond the LLM's training data?
Databutton has access to the internet and can perform real-time web searches to research the relevant results!
You can trigger this functionality by including "Research about it ..." in your prompt.

Testing the generated API
Databutton automatically tests the generated API. If any bugs are found, Databutton's “Debugging Tool” analyses the error logs to debug them.
How to monitor errors?
The console is the best place to monitor any information related to the API.
Using print statements can help to dump output as well, for example print(llm_response).
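As a rough sketch, assuming the llm_query endpoint generated above, a couple of print statements are enough to trace a request through the console:
@router.post("/llm-query")
def llm_query(body: LLMRequest) -> LLMResponse:
    # Anything printed here shows up in the Databutton console
    print(f"Received query: {body.user_query}")
    ...
    print(f"LLM response: {llm_response}")
    return LLMResponse(llm_response=llm_response)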

Here, the console shows a request to the /llm-query endpoint (which Databutton just generated, code above). The request started at 10:52:47 and completed at 10:53:02 with a status code of 200, indicating a successful interaction! If any error persists and is hard to debug, always feel free to reach us via the Intercom bubble!