Adding AI Models

v0.3.0

CutReady uses large language models (LLMs) to power its AI assistant — planning sketches, writing narratives, improving descriptions, and managing your demo project. This guide covers how to connect CutReady to a model provider.

| Provider | Endpoint Format | Auth Methods |
| --- | --- | --- |
| Azure OpenAI | https://your-resource.openai.azure.com | API Key, Azure OAuth |
| Microsoft AI Foundry | https://your-hub.services.ai.azure.com/api/projects/your-project | API Key, Azure OAuth |
| OpenAI | https://api.openai.com (default) | API Key |

AI Foundry provides a unified endpoint for accessing multiple model deployments. CutReady auto-detects Foundry endpoints (.services.ai.azure.com) and adjusts its API calls accordingly.
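The detection rule amounts to a simple substring check on the endpoint URL. A minimal sketch (the function name is hypothetical, not CutReady's actual code):

```python
def is_foundry_endpoint(url: str) -> bool:
    """Heuristic that tells an AI Foundry project endpoint apart
    from a standard Azure OpenAI resource endpoint."""
    return ".services.ai.azure.com" in url

# A Foundry project endpoint matches; a plain Azure OpenAI resource does not.
print(is_foundry_endpoint("https://your-hub.services.ai.azure.com/api/projects/your-project"))  # True
print(is_foundry_endpoint("https://your-resource.openai.azure.com"))  # False
```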

Before you begin, you'll need:

  • An Azure subscription
  • An AI Foundry hub and project (you can create one in the AI Foundry portal)
  • At least one model deployed in your project (e.g., gpt-4o)
To connect CutReady to your AI Foundry project:

  1. Get your project endpoint

    In the AI Foundry portal, navigate to your project. Copy the endpoint URL — it looks like:

    https://your-hub.services.ai.azure.com/api/projects/your-project
  2. Open CutReady Settings

    Click the gear icon (⚙️) in the sidebar, then select the AI Provider tab.

  3. Select Azure OpenAI as the provider

    CutReady uses the Azure OpenAI provider for Foundry endpoints — it auto-detects the Foundry format from the URL.

  4. Paste your endpoint

    Enter the full Foundry project endpoint URL in the Endpoint field.

  5. Choose authentication

    • Set Authentication to API Key
    • Paste your Foundry project API key (found in the AI Foundry portal under your project’s Keys and Endpoint section)
  6. Select a model

    Click Refresh next to the model dropdown. CutReady will query your Foundry project for available deployments. Select the model you want to use (e.g., gpt-4o).
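After the steps above, the saved provider configuration amounts to something like the following. The field names here are hypothetical illustrations; CutReady's actual settings schema isn't documented in this guide:

```python
# Hypothetical shape of the provider settings produced by steps 1-6.
foundry_settings = {
    "provider": "azure-openai",  # Foundry endpoints use the Azure OpenAI provider
    "endpoint": "https://your-hub.services.ai.azure.com/api/projects/your-project",
    "auth": "api-key",
    "api_key": "<your-project-api-key>",
    "model": "gpt-4o",
}

# The endpoint alone is enough for CutReady to recognize a Foundry project.
assert ".services.ai.azure.com" in foundry_settings["endpoint"]
```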

CutReady automatically detects Foundry endpoints by checking if the URL contains .services.ai.azure.com. When detected, it:

  • Uses the Foundry-compatible chat completions path
  • Queries the Foundry deployments API for available models
  • Filters to chat-capable models when populating the model dropdown

No special configuration is needed — just paste the Foundry endpoint and CutReady handles the rest.
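The main practical difference is the chat-completions path: a standard Azure OpenAI resource puts the deployment name in the URL, while a Foundry-style endpoint selects the model in the request body. A sketch under those assumptions (the Foundry path follows the Azure AI Model Inference API; the exact paths and API versions CutReady uses are not specified in this guide):

```python
def chat_completions_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build a chat-completions URL for either endpoint style (illustrative)."""
    endpoint = endpoint.rstrip("/")
    if ".services.ai.azure.com" in endpoint:
        # Foundry-style endpoint: the model is chosen in the request body,
        # not the path.
        return f"{endpoint}/models/chat/completions?api-version={api_version}"
    # Standard Azure OpenAI: the deployment name is part of the path.
    return f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
```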

For standard Azure OpenAI resources (not Foundry):

  1. Get your resource endpoint from the Azure portal (e.g., https://your-resource.openai.azure.com)
  2. Set Provider to Azure OpenAI
  3. Paste the endpoint URL
  4. Choose API Key or Azure OAuth for authentication
  5. Click Refresh to load deployed models and select one
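The two authentication choices differ only in which HTTP header carries the secret: Azure OpenAI accepts an API key in the `api-key` header, while an OAuth (Microsoft Entra ID) access token goes in a standard `Authorization: Bearer` header. A small sketch of that distinction (function name hypothetical):

```python
def azure_auth_headers(auth_method: str, secret: str) -> dict:
    """Headers for an Azure OpenAI data-plane request, by auth method."""
    if auth_method == "api-key":
        # API keys use Azure OpenAI's dedicated `api-key` header.
        return {"api-key": secret}
    if auth_method == "oauth":
        # Entra ID access tokens use a standard bearer token.
        return {"Authorization": f"Bearer {secret}"}
    raise ValueError(f"unknown auth method: {auth_method}")
```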

For direct OpenAI API access:

  1. Set Provider to OpenAI
  2. Leave the endpoint blank (defaults to https://api.openai.com)
  3. Paste your OpenAI API key
  4. Click Refresh to load available models
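For the OpenAI provider, the Refresh step corresponds to the standard `GET /v1/models` call. A minimal stdlib sketch of that request (no OpenAI SDK assumed; the helper names are illustrative):

```python
import json
import urllib.request

def build_models_request(api_key: str, base_url: str = "https://api.openai.com") -> urllib.request.Request:
    """Build the standard OpenAI model-listing request (GET /v1/models)."""
    return urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_model_ids(api_key: str) -> list:
    """Fetch available model IDs; requires network access and a valid key."""
    with urllib.request.urlopen(build_models_request(api_key)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```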
Recommended models:

| Model | Best For | Notes |
| --- | --- | --- |
| gpt-4o | Best quality | Recommended for planning and writing |
| gpt-4o-mini | Faster responses | Good for quick edits and iterations |
Troubleshooting

If no models appear when you click Refresh:

  • Verify your endpoint URL is correct
  • Check that you have at least one model deployed
  • Ensure your API key or OAuth token has permission to list deployments

If authentication fails:

  • For API Key: verify the key is correct and not expired
  • For OAuth: check that your Tenant ID and Client ID are correct, and that the app registration has the required permissions

If chat requests fail:

  • Check the debug panel (click the bug icon in the title bar) for error details
  • Ensure the selected model supports chat completions
  • Verify your Azure subscription/OpenAI account has available quota