
AI Integration

CutReady’s AI features are powered by the agentive crate — a shared Rust library providing a pluggable LLM backend with streaming responses, function calling, and multiple agent configurations.

AI chat panel with function calling
The AI assistant uses function calling to read and modify sketches in real-time

The agentive crate defines an LlmProvider trait that abstracts over different LLM services:

```rust
use async_trait::async_trait;
use serde::de::DeserializeOwned;

// `Message`, `JsonSchema`, and `Result` are agentive's own types.
#[async_trait]
trait LlmProvider: Send + Sync {
    async fn complete(&self, messages: &[Message]) -> Result<String>;

    async fn complete_structured<T: DeserializeOwned>(
        &self,
        messages: &[Message],
        schema: &JsonSchema,
    ) -> Result<T>;
}
```

This trait enables swapping providers without changing the rest of the application. Currently supported:

| Provider | Auth Method | Notes |
| --- | --- | --- |
| Microsoft Foundry | API key or Azure OAuth | Azure AI Foundry project endpoints |
| Azure OpenAI | API key or Azure OAuth | Standard Azure OpenAI resources |
| OpenAI | API key | Direct OpenAI API access |
| Anthropic | API key | Claude models |
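
To illustrate the swap, here is a minimal, synchronous sketch using trait objects; the provider structs and `choose_provider` function are hypothetical, and the real agentive trait is async as shown above.

```rust
// Simplified, synchronous stand-in for the async trait above.
trait LlmProvider {
    fn complete(&self, prompt: &str) -> String;
}

// Hypothetical providers with stubbed responses.
struct OpenAiProvider;
struct AnthropicProvider;

impl LlmProvider for OpenAiProvider {
    fn complete(&self, prompt: &str) -> String {
        format!("[openai] {prompt}")
    }
}

impl LlmProvider for AnthropicProvider {
    fn complete(&self, prompt: &str) -> String {
        format!("[anthropic] {prompt}")
    }
}

// The rest of the application only sees `dyn LlmProvider`, so switching
// providers is a configuration choice, not a code change.
fn choose_provider(name: &str) -> Box<dyn LlmProvider> {
    match name {
        "anthropic" => Box::new(AnthropicProvider),
        _ => Box::new(OpenAiProvider),
    }
}
```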

The Azure-family providers connect via the Foundry or Azure OpenAI API with:

  • OAuth authentication — Azure AD token flow with Tenant ID and Client ID
  • Streaming — Server-sent events for real-time response streaming
  • Model selection — Configurable model name (e.g., gpt-4o)
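
The SSE framing this relies on can be sketched as follows: each event arrives as a `data:` line, and OpenAI-style streams end with `data: [DONE]`. The helper below is illustrative and skips the JSON payload parsing a real client performs.

```rust
// Extract the payloads from a raw server-sent-events stream.
// Real payloads are JSON chunks; parsing them is elided here.
fn sse_data_lines(raw: &str) -> Vec<String> {
    raw.lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .filter(|payload| *payload != "[DONE]") // OpenAI-style terminator
        .map(|payload| payload.to_string())
        .collect()
}
```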

API endpoint, credentials, and model are configured in Settings → AI Provider.

The AI assistant uses function calling to interact with the user’s project. Tool definitions are sent as part of the system prompt, and the model invokes them as needed during conversation.

| Tool | Parameters | Purpose |
| --- | --- | --- |
| list_project_files | optional image flag | List sketches, notes, storyboards, and optionally screenshots |
| read_note / write_note | note path, content | Read or write markdown notes |
| read_sketch / write_sketch | sketch path, rows | Read a sketch or replace all planning rows |
| update_planning_row | index, row data | Update a single planning row |
| read_storyboard / write_storyboard | storyboard path, items | Read or update storyboard metadata and sequence |
| design_plan | sketch path, row index, plan | Save the Designer agent's plain-English visual brief |
| set_row_visual | sketch path, row index, visual | Save or remove an Elucim DSL visual for a row |
| delegate_to_agent | agent ID, message | Delegate a focused subtask to another agent |
| fetch_url | URL | Fetch clean web page text plus a deduplicated links section |
| search_web | query | Search public web results when Internet Search is enabled |
| recall_memory / save_memory | memory query/content | Reuse workspace memory across sessions |
Each tool call follows the same cycle:

  1. The model returns a tool_call in its response
  2. The Rust runner dispatches the tool call to the project tool executor
  3. The backend executes the operation (file read, write, web fetch, search, or visual save)
  4. The result is returned to the model as a tool response message
  5. The model continues generating its response with the tool result
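
The dispatch step (2–4) can be sketched as a match on the tool name; the types and stub results below are illustrative, not CutReady's actual executor.

```rust
// Minimal shape of a tool call and its result message.
struct ToolCall {
    name: String,
    arguments: String,
}

struct ToolResponse {
    name: String,
    content: String,
}

// Route a tool call to the operation it names and package the result
// as a tool response for the model. Bodies here are stubs.
fn dispatch(call: &ToolCall) -> ToolResponse {
    let content = match call.name.as_str() {
        "read_note" => format!("note contents for {}", call.arguments),
        "search_web" => format!("results for {}", call.arguments),
        unknown => format!("error: unknown tool '{unknown}'"),
    };
    ToolResponse {
        name: call.name.clone(),
        content,
    }
}
```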

The frontend receives streamed status, tool-call, tool-result, and delta events over Tauri channels so users can see what the agent is doing.
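
One way to model those four event kinds is a single enum, with delta events folded into the visible reply; this shape is an assumption for illustration, not CutReady's actual payload format.

```rust
// Hypothetical streamed-event type mirroring the four kinds above.
enum AgentEvent {
    Status(String),
    ToolCall { name: String },
    ToolResult { name: String },
    Delta(String),
}

// Fold only the Delta events into the visible response text;
// status and tool events would be surfaced separately in the UI.
fn visible_text(events: &[AgentEvent]) -> String {
    events
        .iter()
        .filter_map(|e| match e {
            AgentEvent::Delta(text) => Some(text.as_str()),
            _ => None,
        })
        .collect()
}
```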

When the provider exposes model metadata, CutReady records the model’s context length and capability tags such as vision and Responses API support. The model picker shows these tags, vision mode can warn when the selected model lacks image support, and conversation compaction uses the configured context window.
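
Compaction against a context window can be sketched as a drop-oldest trim; the four-characters-per-token estimate and the policy below are assumptions for illustration, not CutReady's actual strategy.

```rust
// Keep the most recent messages whose estimated token count fits
// the model's context length, dropping the oldest first.
fn compact(messages: &[String], context_tokens: usize) -> Vec<String> {
    let mut budget = context_tokens;
    let mut kept: Vec<String> = Vec::new();
    for msg in messages.iter().rev() {
        let estimated = msg.len() / 4 + 1; // rough token estimate
        if estimated > budget {
            break;
        }
        budget -= estimated;
        kept.push(msg.clone());
    }
    kept.reverse(); // restore chronological order
    kept
}
```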

Each agent has a system prompt that defines its personality and behavior:

  • Planner — Focused on demo structure, timing, and logical flow
  • Writer — Focused on engaging narration and natural voiceover language
  • Editor — Focused on precision, making minimal targeted changes
  • Designer — Focused on generating Elucim DSL visuals for sketch rows

System prompts include the tool definitions (JSON schema) so the model knows what tools are available and how to invoke them.
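
Assembling such a prompt can be as simple as appending tool name/schema pairs to the persona text; the layout below is a hypothetical sketch, not the crate's real prompt format.

```rust
// Build a system prompt that embeds tool definitions so the model
// knows what it can invoke. Schema strings here are placeholders.
fn build_system_prompt(persona: &str, tool_schemas: &[(&str, &str)]) -> String {
    let mut prompt = String::from(persona);
    prompt.push_str("\n\nAvailable tools:\n");
    for (name, schema) in tool_schemas {
        prompt.push_str(&format!("- {name}: {schema}\n"));
    }
    prompt
}
```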

Sparkle button actions use silent mode by passing the silent: true flag to sendChatPrompt(). In silent mode:

  • The prompt is not displayed in the chat panel
  • The response is not shown as a chat message
  • Tool calls execute normally and update the sketch
  • Actions are logged only in the Activity Panel

This keeps the chat history focused on intentional conversations.
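
The routing decision can be sketched as a simple branch; the Ui type and the exact split between chat transcript and activity log are illustrative assumptions.

```rust
// Hypothetical UI state: a chat transcript plus a separate activity log.
#[derive(Default)]
struct Ui {
    chat_messages: Vec<String>,
    activity_log: Vec<String>,
}

// Silent responses bypass the chat transcript and are recorded only
// in the activity log; normal responses appear in the chat.
fn record_response(ui: &mut Ui, response: &str, silent: bool) {
    if silent {
        ui.activity_log.push(response.to_string());
    } else {
        ui.chat_messages.push(response.to_string());
    }
}
```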

The four built-in agents (Planner, Writer, Editor, Designer) ship with CutReady and have read-only system prompts that are tuned for their respective tasks.

Users can define custom agents in Settings → Agents with:

  • Name — Display name in the agent selector
  • System prompt — Custom instructions that define the agent’s behavior

Custom agents use the same tool set and model as built-in agents, but with a different system prompt that shapes the AI’s approach.

Responses are streamed from the LLM to the frontend via Tauri Channels:

  1. The frontend sends a chat message via a Tauri command
  2. The Rust backend opens a streaming connection to the LLM API
  3. Response chunks are forwarded through a Tauri Channel to the frontend
  4. The frontend renders tokens as they arrive for a responsive feel
  5. Tool calls are detected mid-stream and executed before continuing
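
A standard-library stand-in for this pipeline (steps 2–4) uses a channel between a worker thread and the renderer; Tauri Channels behave analogously but cross the Rust/WebView boundary.

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a worker that streams chunks through a channel; the receiver
// can render each chunk as it arrives, as the frontend does with tokens.
fn stream_chunks(chunks: Vec<String>) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for chunk in chunks {
            // In CutReady this would be a chunk from the LLM's SSE stream.
            if tx.send(chunk).is_err() {
                break; // receiver dropped, stop streaming
            }
        }
    });
    rx
}
```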