Prompting the Backend

The initial prompt is key when building any backend FastAPI router.

Structuring the prompt: an input and output model is a must for the backend

While building the backend, we have noticed that a well-defined argument model works best. It is therefore essential to clearly specify the inputs of a FastAPI router and its expected outputs.
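As a rough illustration, a request/response pair for such a router could look like the following (a minimal sketch; the class and field names here are hypothetical):

from pydantic import BaseModel

# Hypothetical input model: what the client sends to the endpoint.
class QueryRequest(BaseModel):
    user_query: str

# Hypothetical output model: what the endpoint returns.
class QueryResponse(BaseModel):
    llm_response: str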

Example prompt: "I would like to build an OpenAI LLM-powered backend. The input would be the user query and the output would be the LLM response."

Will Databutton store the API keys or install the packages while building the backend?

Yes, Databutton takes care of all these ancillaries while building the backend: Secret Storage, Package Management, etc.

The initial prompt, with its short description of the LLM, was useful for the Agent to plan further; in this case, it asked the end user for the related API key (the OpenAI API key).

On receiving the API key, the agent proceeds to write and, when necessary, debug the code, ultimately building the FastAPI endpoint.

from pydantic import BaseModel
from databutton_app import router
import databutton as db
from openai import OpenAI

# Input model: the user's query sent to the endpoint.
class LLMRequest(BaseModel):
    user_query: str

# Output model: the LLM's reply returned to the client.
class LLMResponse(BaseModel):
    llm_response: str

@router.post("/llm-query")
def llm_query(body: LLMRequest) -> LLMResponse:
    # Fetch the OpenAI API key from Databutton's secret storage.
    OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY")
    client = OpenAI(api_key=OPENAI_API_KEY)
    # Send the user's query to the chat completions API.
    completion = client.chat.completions.create(
        model="gpt-4-0125-preview",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": body.user_query}
        ]
    )
    llm_response = completion.choices[0].message.content
    return LLMResponse(llm_response=llm_response)

Is Databutton aware of my Python package in use?

Databutton is trained on the most common AI stacks, for instance OpenAI, LangChain, CohereAI, etc. If you have specific suggestions, please let us know; we can easily include them.

Similarly, when building an endpoint with OpenAI's LLM, Databutton is familiar with all the SDKs relevant to such a backend.

It uses Databutton's Python package to fetch the API key from secret storage and the latest OpenAI SDK for the completion call. All such package handling is also done by Databutton autonomously.

import databutton as db
from openai import OpenAI

...

OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY") # Databutton SDK 
client = OpenAI(api_key=OPENAI_API_KEY) # Using latest OpenAI SDK
...

If Databutton is not aware of my Python package, how do I build the backend then?

Passing documentation or context to Databutton usually helps. In fact, chunking the documentation section by section and asking Databutton to adapt to it works like a charm!
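For example, you might paste one section of the package's docs into the chat and write something like: "Here is the section of the <package> documentation covering client setup and authentication; please adapt the endpoint to use this client." (The wording is only an illustration, not a required format.)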

Testing the generated endpoint

Databutton tests the generated endpoint. If any bugs are found, the "Debugging Tool" analyses the error logs to fix them.
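You can also call the endpoint yourself. As a rough sketch, assuming the API is reachable at a base URL you copy from your Databutton app (the URL below is only a placeholder), a quick test with Python's requests library could look like this:

import requests

# Placeholder base URL; replace with your app's actual API URL from Databutton.
BASE_URL = "http://localhost:8000"

payload = {"user_query": "What is FastAPI?"}
resp = requests.post(f"{BASE_URL}/llm-query", json=payload)
resp.raise_for_status()
print(resp.json()["llm_response"])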

How do I monitor errors?

The console is the best place to monitor any information related to the backend. Using print statements to dump output can help as well, for example print(llm_response).
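For instance, a print statement placed inside the endpoint above surfaces the model output in the console (a minimal fragment of the earlier endpoint; the label string is arbitrary):

...
    llm_response = completion.choices[0].message.content
    print("LLM response:", llm_response)  # printed output appears in the Databutton console
    return LLMResponse(llm_response=llm_response)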

If an error persists and is hard to debug, always feel free to reach us via the Intercom bubble!
