
Models & API Keys


The Models & API Keys section in Tenant Management is where you configure the AI models available to your organization in ThreoAI. Each model configuration defines which vendor and deployment to use, how the model behaves, and what system-level instructions it follows.


  1. Log in to Tenant Management
  2. In the left sidebar, click Models & API Keys
  3. You will see a list of your existing model configurations

Navigate to Models & API Keys and Add New Model


  1. Click the + Add New Model button
  2. Fill in the GPT Model Info form (see fields below)
  3. Click Save

GPT Model Info form - configure vendor, deployment, tokens, temperature, and system message


| Field | Required | Description |
| --- | --- | --- |
| Title | Yes | Display name for the model. Use company or project names (e.g., “Banana Inc GPT”) rather than persona names (e.g., “SupportGPT”) |
| Vendor | Yes | AI vendor, e.g., AzureOpenAI |
| Deployment | Yes | Specific model deployment, e.g., gpt-5-chat |
| Owner | Auto | Shows your tenant organization |
| Max Tokens | No | Maximum response length. Higher values allow longer answers but increase cost and latency |
| Temperature | No | Controls creativity vs. consistency (see below) |
| Active | No | Check to make the model available in ThreoAI; uncheck to hide it |
| System Message | No | The system prompt: base instructions that apply to all Custom GPTs using this model |
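To make the form's rules concrete, here is a small sketch that represents a model configuration as a plain dictionary and checks the same constraints the form enforces. The field names and the `validate_model_config` helper are illustrative assumptions, not a Synthreo API.

```python
# Hypothetical sketch of the GPT Model Info form as a dict; field names
# mirror the table above and are NOT an actual Synthreo API.
REQUIRED_FIELDS = ("title", "vendor", "deployment")

def validate_model_config(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config would save."""
    problems = [f"missing required field: {field}"
                for field in REQUIRED_FIELDS if not config.get(field)]
    temperature = config.get("temperature")
    if temperature is not None and not (0.0 <= temperature <= 1.0):
        problems.append("temperature must be between 0.0 and 1.0")
    max_tokens = config.get("max_tokens")
    if max_tokens is not None and max_tokens <= 0:
        problems.append("max_tokens must be positive")
    return problems

config = {
    "title": "Banana Inc GPT",      # company/project name, not a persona
    "vendor": "AzureOpenAI",
    "deployment": "gpt-5-chat",
    "temperature": 0.5,
    "max_tokens": 2048,
    "active": True,
    "system_message": "You are a helpful assistant for Banana Inc.",
}
assert validate_model_config(config) == []
```

A config missing Title, Vendor, or Deployment would come back with one problem per missing field, which mirrors why the Save button does nothing when required fields are empty.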
The Temperature value controls the trade-off between creativity and consistency:

| Range | Behavior | Best For |
| --- | --- | --- |
| 0 - 0.3 | Deterministic, factual, repeatable | Data analysis, compliance, factual Q&A |
| 0.4 - 0.7 | Balanced creativity and stability | General business use |
| 0.8 - 1.0 | Creative, less predictable | Brainstorming, content generation |
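The bands above can be expressed as a small helper, useful for documenting or linting your own configurations. The bands come from this page; the function itself is an illustrative sketch.

```python
# Illustrative helper mapping a temperature value to the guidance bands
# in the table above; not part of any Synthreo API.
def temperature_band(temperature: float) -> str:
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature should be between 0.0 and 1.0")
    if temperature <= 0.3:
        return "deterministic: data analysis, compliance, factual Q&A"
    if temperature <= 0.7:
        return "balanced: general business use"
    return "creative: brainstorming, content generation"
```

For example, the recommended starting value of 0.5 falls in the "balanced" band.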

To edit an existing model configuration:

  1. Go to Models & API Keys
  2. Click on the model you want to edit
  3. Modify any fields
  4. Click Save

Changes take effect immediately for new conversations. Existing conversations continue with the previous configuration until the user starts a new chat.
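One way to picture this behavior (a sketch of the described semantics, not Synthreo's actual implementation) is that each conversation takes a snapshot of the model configuration when it starts, so later edits only affect chats created afterwards:

```python
import copy

# Hypothetical sketch: a conversation copies the model config at creation,
# so tenant-level edits only affect conversations started afterwards.
model_config = {"deployment": "gpt-5-chat", "temperature": 0.5}

class Conversation:
    def __init__(self, config: dict):
        self.config = copy.deepcopy(config)  # frozen for this chat's lifetime

old_chat = Conversation(model_config)
model_config["temperature"] = 0.2            # admin edits the model
new_chat = Conversation(model_config)

assert old_chat.config["temperature"] == 0.5  # existing chat keeps old settings
assert new_chat.config["temperature"] == 0.2  # new chat picks up the change
```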


The System Message field is where you define the system prompt for the model. This is the foundational instruction that applies to every Custom GPT built on this model.

For a comprehensive guide on writing effective system prompts, see Using System Prompts in ThreoAI.


  • Name models after your company or project, not personas - personas belong in Custom GPT Instructions
  • Start with a moderate temperature (0.5) and adjust based on output quality
  • Keep system messages concise - long prompts consume tokens and reduce conversation capacity
  • Test before activating - configure and test with a small group before making a model available to all users
  • Use the Active toggle to temporarily disable models during maintenance without deleting them

If something is not working as expected, check the following:

| Issue | Cause | Fix |
| --- | --- | --- |
| Model not visible in ThreoAI | Active checkbox not checked | Edit the model and enable the Active toggle, then save |
| Unexpected model behavior | Temperature or system message misconfigured | Review the Temperature and System Message fields; lower temperature for more predictable output |
| Changes not reflected in existing chats | Config updates only apply to new conversations | Ask users to start a new conversation to pick up the updated settings |
| Save button not responding | Required fields are empty | Ensure Title, Vendor, and Deployment are all filled in |

Can I have multiple model configurations for the same vendor? Yes. You can create as many configurations as needed. Each one appears as a separate option in ThreoAI, so users can choose the appropriate model for their task.

What happens if I uncheck Active on a model that users are currently using? The model will no longer appear in ThreoAI for new conversations. Existing open conversations that were already using the model may continue until the user starts a new session.

Can I delete a model configuration? Yes. Deleting a model removes it permanently. If users have Custom GPTs that reference the deleted model, those GPTs will lose their model assignment. Deactivating (unchecking Active) is safer than deleting when you only want to hide the model temporarily.

Where does the System Message fit relative to Custom GPT Instructions? The System Message is injected first, before Custom GPT Instructions and before the user’s prompt. It sets universal rules for all GPTs built on that model. See Using System Prompts in ThreoAI for full details.
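The ordering described above can be sketched as assembling a message list before each request. The function name, field names, and the choice to send Custom GPT Instructions as a second system-role message are illustrative assumptions, not Synthreo internals:

```python
# Sketch of the prompt assembly order described above: tenant-level System
# Message first, then Custom GPT Instructions, then the user's prompt.
def build_messages(system_message: str, gpt_instructions: str,
                   user_prompt: str) -> list[dict]:
    return [
        {"role": "system", "content": system_message},    # universal rules
        {"role": "system", "content": gpt_instructions},  # per-GPT persona/instructions
        {"role": "user", "content": user_prompt},         # the user's actual question
    ]

messages = build_messages(
    "Never disclose internal data.",       # System Message (this page)
    "You answer billing questions only.",  # Custom GPT Instructions
    "How do I read my invoice?",
)
```

Because the System Message is injected first, it cannot be overridden by a Custom GPT's own instructions, which is why it is the right place for universal rules.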