Prompting up your backend (APIs)

The initial prompt is key while building any backend. A backend in Databutton consists of Python APIs.


Structuring the prompt - an input and output model is a must for the API

While building an API, we have noticed that a well-defined argument model works best. It is therefore essential to clearly specify the inputs and outputs for the API.

I would like to build an OpenAI LLM powered API. The input would be a user query and the output would be the LLM response.

The Databutton agent will install the packages it needs to do its job, and will ask you for the necessary API keys (such as an OpenAI key). Your API keys are stored as secrets in Databutton, leveraging Google's secret store behind the scenes.

The initial prompt, with a short description of the LLM, was useful for the agent to plan further; in this case it asks for an API key (the OpenAI API key).

On receiving the API key, the agent proceeds to write and, when necessary, debug the code, ultimately building the FastAPI endpoint.

from pydantic import BaseModel
from databutton_app import router
import databutton as db
from openai import OpenAI

# Input and output models for the endpoint
class LLMRequest(BaseModel):
    user_query: str

class LLMResponse(BaseModel):
    llm_response: str

@router.post("/llm-query")
def llm_query(body: LLMRequest) -> LLMResponse:
    # Fetch the OpenAI key stored as a Databutton secret
    OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY")
    client = OpenAI(api_key=OPENAI_API_KEY)
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": body.user_query}
        ]
    )
    llm_response = completion.choices[0].message.content
    return LLMResponse(llm_response=llm_response)
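
Once generated, the endpoint behaves like any other FastAPI route, so it can be exercised with a plain HTTP request. A minimal sketch (the base URL here is a placeholder; the actual URL depends on your app):

import requests

# Placeholder base URL -- replace with your app's actual API URL
BASE_URL = "https://your-app.example.com"

response = requests.post(
    f"{BASE_URL}/llm-query",
    json={"user_query": "What is FastAPI?"},  # matches the LLMRequest model
)
response.raise_for_status()
print(response.json()["llm_response"])  # field defined by the LLMResponse model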

Is Databutton aware of the Python package I'm using?

Databutton is trained on the most common AI stacks, for instance OpenAI, LangChain, CohereAI, etc. If you have specific suggestions, please let us know; we can easily include them.

It uses Databutton's own SDK to fetch the API key from storage.

import databutton as db
from openai import OpenAI

...

OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY") # Databutton SDK 
client = OpenAI(api_key=OPENAI_API_KEY) # Using latest OpenAI SDK
...
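
If you want the API to fail fast when a secret has not been configured, one possible pattern (our suggestion, assuming db.secrets.get returns an empty value for a missing secret) is an explicit check:

import databutton as db

OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    # Raise a clear error instead of hitting an opaque authentication failure later
    raise RuntimeError("The OPENAI_API_KEY secret is not set in Databutton")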

Can I build an API with a package that is beyond the LLM's training data?

Databutton has access to the internet and can perform real-time web searches and conduct research on the relevant results!

You can trigger this functionality by including a phrase like "Research about it ..." in your prompt.

Passing URLs of documentation pages also works well for helping Databutton gather the relevant information, as in the example below.
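
For example, a prompt along these lines (the package name and docs URL are placeholders) combines both techniques:

Research the latest version of the Acme Python SDK using its docs at https://example.com/acme/docs, then build an API that wraps its search endpoint.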

Testing the generated API

Databutton tests the generated API automatically. If any bugs are found, Databutton's "Debugging Tool" analyses the error logs to debug them.

How to monitor errors?

The console is the best place to monitor any information related to the API. Using print statements can help dump output as well, for example print(llm_response).
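
As a sketch, the endpoint generated earlier could be instrumented with a couple of print statements (reusing the imports and models from the code above); everything printed appears in the console:

@router.post("/llm-query")
def llm_query(body: LLMRequest) -> LLMResponse:
    print(f"Received query: {body.user_query}")  # visible in the console
    OPENAI_API_KEY = db.secrets.get("OPENAI_API_KEY")
    client = OpenAI(api_key=OPENAI_API_KEY)
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": body.user_query},
        ],
    )
    llm_response = completion.choices[0].message.content
    print(llm_response)  # dump the model output for inspection
    return LLMResponse(llm_response=llm_response)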

If an error persists and is hard to debug, feel free to reach out to us via the Intercom bubble!

  • Screenshot: Initial prompt to the backend agent. The key parts are an input and an output for the API router, plus additional context on which LLM to use for this endpoint, i.e. "OpenAI".
  • Screenshot: A text input box for the secret API key, requested by the agent.
  • Screenshot: You can double-check the passed API key manually by hovering over the config tab.
  • Screenshot: Databutton searching the internet with the user prompt.
  • Screenshot: The server log shows a POST request made to the /llm-query endpoint (which Databutton just generated; code above). The request started at 10:52:47 and was completed by 10:53:02 with a status code of 200, indicating a successful interaction!