

Botpress
Or: Making an AI builder platform look as smart as it works

My Role
Product Designer
Team
Sabri Helal (PM)
Paul Chevilley (Eng)
Timeline
Jan – Mar 2026
Tools
Figma
Google AI Studio
User Problem
Through support tickets, bug reports, and direct customer feedback in our Discord server, a consistent pattern emerged: the Autonomous Node wasn't guiding builders toward the mental model needed to use it successfully.
Tool calling felt unreliable and unpredictable. Builders struggled to understand when Actions would trigger, how tool outputs affected responses, or how to reliably guide the model’s behavior, leading many to treat the system as inconsistent even with identical configuration.

Transitions felt unreliable and difficult to reason about. Builders struggled to control when Autonomous Nodes would exit, move to the next card, or continue looping, and many weren’t confident in why transitions triggered sometimes but failed in seemingly identical situations.

Other common points of confusion included:
Standard and Autonomous Nodes looked too similar, making it unclear whether users were working with deterministic flows or LLM-driven behavior. This similarity also frequently led users to believe cards were missing.
Confusing terminology. People often wanted to write custom code, but were thrown off because it was labeled "Actions."
Multiple instruction fields (“Instructions,” “Improve Response,” workflow return prompts, etc.) created uncertainty around which inputs actually shaped model behavior.

Constraints
Collaboration
Scoping

Solution
Instructions interface

Surfacing exit conditions

Sidebar redesign

Feedback & Refinement
