# Peer Settings

This guide walks through every field in the Peer settings UI and explains what it does, following the tabs as they appear in the app: Basic, Prompt, Search, Features, and Advanced. It also explains how Peers use Memory.
## Basic
- Name: Display name for your Peer.
- Model: Base model for text reasoning and responses.
- Short Description: Summary shown in lists and overviews.
- Language: Default language your Peer uses in conversations.
- Role: High-level role context for the Peer (who it is).
- Goal: Outcome-focused objective (what it should optimize for).
- Peer ID: Read-only identifier (useful for API and integrations).
## Prompt
- Initial Prompts: Up to 4 short instruction lines added early to the system context.
- Temperature: Creativity slider (0–100). Higher = more diverse output.
- Additional Prompt: Longer guiding instructions appended to system context.
- Content Type: Preferred response style preset from workspace preferences.
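As a mental model, the Prompt fields above all feed a single system context. The sketch below is purely illustrative: the assembly order, the language line, and the mapping of the 0–100 slider to a model temperature are assumptions, not the documented implementation.

```python
def build_system_context(initial_prompts, additional_prompt, language="English"):
    """Assemble a system prompt the way the Prompt tab suggests:
    short initial lines first, longer guidance appended after."""
    lines = list(initial_prompts[:4])  # the UI caps Initial Prompts at 4
    lines.append(f"Always respond in {language}.")
    if additional_prompt:
        lines.append(additional_prompt)
    return "\n".join(lines)

ui_temperature = 70                       # slider value, 0-100
model_temperature = ui_temperature / 100  # assumed mapping to a 0-1 model range
```

Keeping Initial Prompts short matters because they sit at the top of every request; long guidance belongs in Additional Prompt, which is appended after them.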
## Search & Knowledge Base
- Knowledgebase Language: Preferred language of indexed KB content.
- Text Search Weight: 0–1 weight for keyword scoring.
- Vector Search Weight: 0–1 weight for semantic scoring.
- Search Threshold: Minimum relevance score to include results.
- Search Limit: Max number of results to retrieve.
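The four settings interact as a weighted hybrid-scoring pipeline. A minimal sketch, assuming a simple weighted sum of normalized scores (the platform's actual scoring formula is not documented here):

```python
def hybrid_search(query_results, text_weight=0.3, vector_weight=0.7,
                  threshold=0.5, limit=5):
    """Combine keyword and semantic scores, then filter and cap results.

    query_results: list of (doc_id, text_score, vector_score) tuples,
    with each score already normalized to 0..1.
    """
    scored = [
        (doc_id, text_weight * text + vector_weight * vec)
        for doc_id, text, vec in query_results
    ]
    # Drop anything below the relevance cutoff (Search Threshold).
    relevant = [(d, s) for d, s in scored if s >= threshold]
    # Keep only the best matches (Search Limit).
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return relevant[:limit]

results = hybrid_search(
    [("faq", 0.9, 0.8), ("manual", 0.2, 0.4), ("blog", 0.1, 0.2)],
    text_weight=0.5, vector_weight=0.5, threshold=0.3, limit=2)
```

With equal weights, "faq" scores 0.85 and "blog" (0.15) falls below the threshold. Raising Text Search Weight favors exact keyword matches; raising Vector Search Weight favors paraphrased or conceptual matches.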
## Features
- Enable RAG: Allow retrieval from knowledge sources during answers.
- Include Datasources as Tools: Expose each connected datasource as an individual agent tool for targeted retrieval, instead of searching all sources together.
- Disable Tools: Prevent tool actions (Flows, HTTP, etc.) during reasoning.
- Disable Document: Hide/disable document upload in the chat UI.
- Canvas: Enable rich canvas for creating/editing documents and code. Configure which artifact types are allowed (documents, slides, diagrams, code, canvas).
- Media Model: Optional model for image or media understanding tasks.
- Web Search: Enable built-in web search capability for the Peer.
- Voice: Enable real-time voice conversation mode (speech-to-text → LLM → text-to-speech pipeline).
## Advanced

### Reasoning Level

- Reasoning: Set the model's reasoning depth. **Low** is fastest and cheapest; **High** uses extended chain-of-thought reasoning for complex problems. Requires a model that supports reasoning mode (e.g., o3, o4-mini, Claude 3.7 Sonnet).
### Topic Detection & Off-Topic Handling (Beta)
- Topic Detection: Enable automatic topic analysis from recent messages.
- Continue on Off-Topic: When a message is off-topic, either answer it normally anyway (on) or switch to the special off-topic handling below (off).
- Off-Topic Response Mode: Static (fixed message) or AI (dynamic explanation and redirect).
- Off-Topic Message: Message used when mode=Static.
### Response Controls
- Response Length: Short (1–2 sentences), Medium (1–2 paragraphs), Long (multi-paragraph).
- Response Format: Conversational, Structured (headings/lists), or Technical.
### Tool Usage & Context
- Tool Usage Intensity: Low / Medium / High profile for max tool calls.
- Include Recent Tool Logs in Context: Inject recent tool runs into system prompt.
- Tool Log Context Limit: How many recent logs to include (1–25).
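Conceptually, log injection appends a capped slice of recent tool runs to the system prompt. A sketch under assumptions (the log format and prompt layout here are invented for illustration):

```python
def inject_tool_logs(system_prompt, tool_logs, limit=5):
    """Append the most recent tool runs to the system prompt so the
    model can reference earlier tool output; `limit` plays the role
    of Tool Log Context Limit."""
    recent = tool_logs[-limit:]
    if not recent:
        return system_prompt
    formatted = "\n".join(
        f"- {log['tool']}: {log['result']}" for log in recent)
    return f"{system_prompt}\n\nRecent tool results:\n{formatted}"
```

Keep the limit low (the UI allows 1–25): every injected log consumes context-window tokens on each turn.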
### Planning & History
- Planning: Enable multi-step planning and coordinated tool orchestration before executing actions.
- Messages Count: How many recent messages to include from history.
### Flow-Based Peer
- Flow-Based: When enabled, the Peer delegates every conversation turn to a specific Flow instead of using the standard agent pipeline.
- Flow: Select the Flow that will handle messages. The selected Flow is triggered on every incoming message.
Use Flow-Based mode to build Peers with fully custom conversation logic, branching, multi-step processing, or integrations that the standard agent cannot handle.
### Semantic Cache
- Cache Enabled: Enable semantic caching. When a user sends a message semantically similar to a previously cached query, the cached response is returned instantly without invoking the LLM — saving time and credits.
- Cache Settings: Configure the similarity threshold (how close a new query must be to a cached one to be served from cache) and the vector index used for cache lookup.
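The core idea is a nearest-neighbor lookup over embeddings of past queries. A minimal sketch, assuming cosine similarity and an in-memory cache (the real lookup runs against the configured vector index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def cached_answer(query_vec, cache, threshold=0.92):
    """Return a cached response if any stored query embedding is similar
    enough; None means: call the LLM, then cache the new pair."""
    best_score, best_response = 0.0, None
    for stored_vec, response in cache:
        score = cosine(query_vec, stored_vec)
        if score > best_score:
            best_score, best_response = score, response
    return best_response if best_score >= threshold else None
```

A higher similarity threshold means fewer cache hits but less risk of serving a cached answer to a genuinely different question.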
### Session Management
- Session End: Enable automatic session termination based on conditions.
- Session End Conditions: Define conditions that signal the end of a conversation (e.g., user says "goodbye", task is marked complete). When a condition is matched, the session closes automatically.
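In its simplest form, condition matching is a check of each incoming message against the configured signals. The keyword version below is an illustration only; the product may evaluate conditions more intelligently (e.g., with an LLM), and the condition list is invented:

```python
# Example conditions; in the UI you define your own.
END_CONDITIONS = ["goodbye", "that's all", "task complete"]

def should_end_session(message):
    """Return True when any end condition matches the message."""
    text = message.lower()
    return any(cond in text for cond in END_CONDITIONS)
```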
### Guardrails
- Guardrails: Attach content safety policies to the Peer. Each guardrail inspects either the user's message or the AI's response and can block, warn, or mask content.
Three policy types are available:
| Type | What It Does |
|---|---|
| PII Detection | Identifies and optionally masks sensitive personal data (emails, phone numbers, credit cards, IBANs, national IDs, passports, etc.) |
| Prompt Injection Shield | Detects and blocks prompt injection attacks in user messages |
| Moderation | Flags content that violates hate speech, violence, or other moderation categories |
Each guardrail specifies:
- Target: Apply to user message, AI response, or both.
- Action: Block (reject the message), Warn (add a warning), or Mask (replace detected content).
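To make the three actions concrete, here is a toy guardrail with a single PII rule (emails). The regex and return shape are illustrative only; real PII detection covers many more entity types than this:

```python
import re

# Deliberately simple email pattern, for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_guardrail(text, action="mask"):
    """Apply one PII rule with the three actions the UI describes:
    block rejects the message, warn passes it with a flag,
    mask replaces the detected content."""
    if not EMAIL.search(text):
        return text, "pass"
    if action == "block":
        return None, "blocked"
    if action == "warn":
        return text, "warning: PII detected"
    return EMAIL.sub("[EMAIL]", text), "masked"
```

Mask is usually the right default for PII on AI responses; Block fits Prompt Injection Shield on user messages, where letting the text through defeats the purpose.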
## Memory
Peers can store and retrieve user-specific information to personalize conversations.
### User Memory
- Enable User Memory: Toggle to allow the Peer to automatically generate and store memories from conversations. Memories are extracted asynchronously after each session.
- User Memory Prompt: Custom instructions that guide how the Peer decides what to remember about users.
- Shared Memory: When enabled, memories are shared across all users interacting with this Peer (useful for team-shared context). When disabled (default), memories are scoped per user.
Memory entries can also be manually managed through Flow steps:
- Save: Use the "Save User Memory" Flow step to persist a key/value with a description.
- Retrieve: Use "Search User Memory" to look up previously saved items.
- Delete: Use "Delete User Memory" to remove records when they are no longer needed.
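The three Flow steps amount to key/value operations scoped per user. The class below only mirrors that behavior with an in-memory dict (the real store is server-side, and the method names here are illustrative, not an SDK):

```python
class UserMemory:
    """Save / search / delete semantics of the three Flow steps."""

    def __init__(self):
        # user_id -> {key: (value, description)}
        self._store = {}

    def save(self, user_id, key, value, description=""):
        self._store.setdefault(user_id, {})[key] = (value, description)

    def search(self, user_id, key):
        entry = self._store.get(user_id, {}).get(key)
        return entry[0] if entry else None

    def delete(self, user_id, key):
        self._store.get(user_id, {}).pop(key, None)
```

With Shared Memory enabled, think of every caller using the same `user_id`; disabled (the default), each user gets their own namespace.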
Quick links:
- Save User Memory: ../flow/steps/save-user-memory
- Search User Memory: ../flow/steps/search-user-memory
- Delete User Memory: ../flow/steps/delete-user-memory
## Notes
- Enabling User Memory will automatically extract and persist relevant information from conversations after each session.
- Respect workspace permissions and data policies when storing personal data.
- "Messages Count" in Advanced controls recent chat history, not long-term memory.
## Common patterns
- Greet users by name after saving profile details once.
- Store preferences (language, tone) and reuse them across sessions.
- Keep task or ticket IDs to track ongoing requests.
## Tips
- Keep Initial Prompts brief; move long guidance to Additional Prompt.
- Balance Text vs Vector weights based on your content quality and query types.
- Start with Tool Usage = Low and increase as you build confidence.
- Use Static Off-Topic messages for consistent brand voice; AI mode for flexibility.

