Troubleshooting Guide

A structured framework and tools to help you diagnose, fix, and learn from problems when building with Databutton.

Table of Contents

  1. Introduction

  2. Common Issues & Symptoms

  3. Troubleshooting Framework: IDEAL

  4. Prompt Quality & Ambiguity Handling

  5. Debugging & Evidence Gathering

  6. Task & Fix Plan Templates

  7. Common Pitfalls & How to Avoid Them

  8. FAQ & Known Issues

  9. Resources & Learning Materials

  10. When to Contact Support


Introduction

Building apps with Databutton is fast and powerful—but sometimes things don’t behave quite how you expect. This guide is here to help you work through those moments methodically: to identify what’s wrong, fix it efficiently, and learn habits that make future troubleshooting easier.

This page is useful if you are:

  • Unsure why the agent isn’t doing what you asked

  • Getting errors in the UI, backend, APIs, or prompts

  • Feeling stuck in a back-and-forth with the agent without progress

Common Issues & Symptoms

If you see one or more of these, it’s probably time to use a structured approach:

  • Agent misunderstands your request / produces unintended behavior

  • Prompt results are vague, contradictory, or not what you expected

  • APIs return errors, or data isn’t flowing correctly

  • UI components render incorrectly, or styling/layout breaks

  • Performance is slow, with delays in responses or actions

  • Agent “forgets” early details or requirements (“context loss”)

  • Changes to the app break previously working features


Troubleshooting Framework: IDEAL

Here’s a step-by-step structure to guide you through diagnosing and resolving issues. Use this with your agent, or internally, to keep things clear.

| Phase | What you do | Key Questions / Checks | What to gather / record |
| --- | --- | --- | --- |
| Phase 1: Identify / Observe | Notice what is going wrong. Describe expected vs actual behavior. | What did you ask the agent to do? What did you expect? What happened instead? Can you reproduce the issue? Is it intermittent or constant? | High-level description; screenshots; error messages; when/where it happens. |
| Phase 2: Define | Clarify the scope, constraints, and possible ambiguities. | Are there ambiguous terms? Were there assumptions not stated? Is the request technically possible given the tools and your setup? | Your original prompt; any tools or features you're relying on. |
| Phase 3: Explore / Gather Evidence | Collect data to understand what's actually failing. | Which component / page / backend is involved? Any logs? What do API requests/responses look like? What does the console show? Under what conditions does the issue show up? | Console logs / stack traces; network / API response payloads; code snippets; environment details; steps to reproduce. |
| Phase 4: Act / Debug | Make a plan for fixing it. Work in small, clear steps. Use tasks. | What fix do you propose? What substeps are required? How will you test / verify when it's fixed? | A task with title, description, and Definition of Done; interim checks; test cases; verification criteria. |
| Phase 5: Learn & Prevent | Reflect on what happened so you can resolve similar issues faster next time. | What was the root cause? What fixed it? What could have been done earlier? What checklist or guidelines will you use in future? | Documented root cause and resolution; an updated personal checklist; any recurring patterns. |


Prompt Quality & Ambiguity Handling

Good prompts help avoid problems before they start. Use these tips and templates to make your requests clearer.

Ambiguity Test

Use this before sending a large or complex request:

Here is my request: “<your request here>”

Please do the following:
1. Identify any parts that could be interpreted in more than one way.
2. State what assumptions you are making when interpreting this request.
3. Suggest how I could rephrase or add detail so that your understanding aligns with what I want.

Example:

  • Vague prompt: “Create UI for the settings page.”

  • Ambiguities:

    • Which settings (email/password, preferences, notifications, etc.)

    • How layout should work

    • How errors / validation should be handled

  • Refined prompt: “Design a ‘Settings’ page where a user can update email & password. Require password to have at least 8 chars, one uppercase, one number. On error, show inline messages; on success, show confirmation banner. Use #SettingsPage and #AuthAPI for endpoints.”
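Pinning the spec down this precisely also makes the result easy to verify. As a minimal sketch, the password rule from the refined prompt could be checked like this (validatePassword is illustrative, not a Databutton API):

```typescript
// Checks the rule from the refined prompt: at least 8 characters,
// one uppercase letter, one number. Returns the inline error messages
// the prompt asks for; an empty array means the password is valid.
function validatePassword(password: string): string[] {
  const errors: string[] = [];
  if (password.length < 8) errors.push("Must be at least 8 characters.");
  if (!/[A-Z]/.test(password)) errors.push("Must contain an uppercase letter.");
  if (!/[0-9]/.test(password)) errors.push("Must contain a number.");
  return errors;
}

console.log(validatePassword("weak"));       // three inline messages
console.log(validatePassword("Str0ngPass")); // [] → show the confirmation banner
```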

Best Practices for Prompts

  • Be precise: specify exact UI elements, names, files (#tags), styles, etc.

  • Give context: what this UI / feature will be used for, constraints, account type.

  • Work in small chunks rather than one monolithic prompt.

  • Use examples / before-after where possible.

  • Use the #tag feature to refer to specific UI components, Pages, and Backend APIs so the agent knows exactly what you mean.

Debugging & Evidence Gathering

Here are the most useful kinds of information to collect when things are going wrong:

  • Steps to reproduce: exact clicks, UI flow, which page, what data you entered, etc.

  • Expected vs actual behavior: what you thought should happen, vs what you saw.

  • Console logs / stack traces: try to copy full error messages.

  • Network / API data: request & response payloads, status codes, latency.

  • Screenshots / screen recordings: especially for UI issues or styling/layout glitches.

  • Environment info: browser version, OS, any settings that might affect behavior.

  • Whether issue is consistent or intermittent: does it happen always, or only under certain conditions / input.

When you share with the agent or with support, include as much of the above as possible.
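If the problem involves an API call, a small wrapper can capture the status, latency, and payload in one place, ready to paste into a task or support request. A minimal browser-side sketch (logFetch is illustrative, not a Databutton API):

```typescript
// Wraps fetch and logs the evidence listed above: URL, method,
// status code, latency, and the start of the response body.
async function logFetch(url: string, init?: RequestInit): Promise<Response> {
  const start = performance.now();
  const response = await fetch(url, init);
  const latencyMs = Math.round(performance.now() - start);

  // Clone so reading the body here doesn't consume it for the caller.
  const body = await response.clone().text();
  console.log({
    url,
    method: init?.method ?? "GET",
    status: response.status,
    latencyMs,
    body: body.slice(0, 500), // first 500 chars is usually enough for a report
  });
  return response;
}

// Usage while reproducing the issue:
// const res = await logFetch("/api/settings");
```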

Task & Fix Plan Templates

To keep fixes organized and verifiable, use tasks. Here are templates for describing debugging tasks and for implementing solutions.

Debugging Task Template

**Title**: [Short description of the issue]

**Description**:
- Expected behavior: 
- Actual behavior: 

**Steps to reproduce**:
1. …
2. …
3. …

**Relevant files / components**:
- #ComponentName
- #PageName
- #APIEndpointName

**Logs / Screenshots**: (attach or paste here)

**Additional context**:
- Browser / OS:
- Version of app / agent:

**Acceptance / Definition of Done**:
- Error no longer appears in console
- UI matches design / behaves as expected
- Relevant API responses correct under test cases
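Acceptance criteria are easiest to apply when they can be re-run. As a sketch, the API criterion above could become a small script you run after each fix attempt (the endpoint and field are placeholders for your own API):

```typescript
// Re-runnable check for "Relevant API responses correct under test cases".
async function checkSettingsEndpoint(): Promise<void> {
  const response = await fetch("/api/settings"); // placeholder endpoint
  if (response.status !== 200) {
    throw new Error(`Expected status 200, got ${response.status}`);
  }
  const data = await response.json();
  if (typeof data.email !== "string") {
    throw new Error("Response is missing the expected email field"); // placeholder field
  }
  console.log("Settings endpoint check passed");
}

checkSettingsEndpoint().catch((err) => console.error("Check failed:", err));
```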

Solution Implementation Task Template

**Title**: Implement fix for [Issue Summary]

**Subtasks**:
1. Investigate root cause
2. Write or update code / logic
3. Add validation or error-handling if missing
4. Create or update UI / styling if needed
5. Write test cases / manually test under edge conditions

**Milestones / Check-ins**:
- After code changes: local testing
- After UI changes: visual check & layout verification
- After full fix: end-to-end workflow tested
- After deployment: confirm issue resolved

**Definition of Done**:
- All subtasks are complete
- All test cases pass
- No regressions in related features
- Performance (if relevant) acceptable

Common Pitfalls & How to Avoid Them

| Pitfall | How to avoid it |
| --- | --- |
| Long / unfocused conversations → agent loses earlier context | Keep threads focused; if the conversation is long, summarise earlier requirements and key constraints, or start a new thread. |
| Unstated assumptions | Explicitly state things like account type, size of data, desired style, and environment (browser, OS). Use #tags to reference existing components. |
| Too much scope in one prompt | Break large tasks into smaller ones; use subtasks; test incrementally. |
| Insufficient logs / missing error info | Always capture exact error messages, stack traces, and API responses. Don’t paraphrase. Screenshots help. |
| Not clarifying when a task is “done” | Define a clear “Definition of Done” in each task so you and the agent know when to stop. Use measurable criteria. |
| Overlooking environmental issues (cache, browser quirks) | Try simple fixes first (clear cache / hard refresh); include environment info in tasks and reports. |
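To make the last pitfall cheap to avoid, a tiny helper run in the browser console can gather environment info for you. A minimal sketch (collectEnvInfo is illustrative, not a Databutton API):

```typescript
// Collects the environment details worth attaching to tasks and reports.
function collectEnvInfo(): Record<string, string> {
  return {
    userAgent: navigator.userAgent, // browser and OS details
    language: navigator.language,
    viewport: `${window.innerWidth}x${window.innerHeight}`,
    screen: `${screen.width}x${screen.height}`,
    url: window.location.href,
    timestamp: new Date().toISOString(),
  };
}

// Paste the output into the "Environment" section of a task or support request.
console.log(JSON.stringify(collectEnvInfo(), null, 2));
```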

FAQ & Known Issues

Q: What happens if the agent “forgets” earlier instructions?

A: This often happens when the conversation thread becomes large, and earlier key information moves outside the agent’s context window. To mitigate: summarise important specs periodically; use smaller focused threads; or start a new thread with a summary.
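For example, when starting a fresh thread you might open with a summary prompt along these lines (the placeholders are yours to fill in):

Here is a summary of the key requirements and decisions from our earlier conversation:
1. <requirement or constraint>
2. <requirement or constraint>
3. <decision already made>

Please treat these as fixed context. My next request is: <new request>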

Q: How much context is “too much”?

A: When the agent starts making mistakes, ignoring earlier instructions, or acting “out of scope”, it’s a sign you need to prune context, summarise, or restart.

Q: When should I contact support vs. solving via docs / prompts?

A:

  • Use docs and this guide first to diagnose and gather evidence.

  • Contact support when you have tried the framework above and the issue is still unresolved, when there may be a bug in Databutton, or when you’re uncertain about an account or permission issue.


Resources & Learning Materials

Here are internal and external resources to help you improve your prompt engineering and debugging workflows, and deepen your knowledge.


When to Contact Support

When you reach out for help (on Discord or via Intercom / email), having well-prepared information speeds things up. Here’s a suggested template to include in your support request:

**Subject**: [Brief summary of the issue]

**Description**:
- What I asked / intended to happen:
- What actually happened:

**Steps to Reproduce**:
1. …
2. …
3. …

**Logs / Screenshots / Attachments**: (paste or attach)

**Environment**:
- Browser & version:
- OS:
- Databutton app version if known:

**What I have tried already**:
- E.g. cleared cache, refreshed page, asked the agent to re-interpret, tried a smaller-scope prompt, etc.

**Why I believe this may be a bug / gap** (if applicable)

Summary

By using the IDEAL framework, refining prompts, gathering solid evidence, defining tasks clearly, and reflecting afterwards, you'll be able to resolve issues faster, avoid repeating the same problems, and build your apps more confidently. Keep this page bookmarked. You’ll get better at troubleshooting with every problem you solve.
