Step: Guardrail
- Key: `guardrail`
- Category: AI & Language Models
- Description: Validate content against guardrail policies for PII detection, prompt shielding, and moderation.
Inputs
- `content` (long-text, required): Content or template to evaluate. Flow variables are supported.
- `policy` (guardrail policy, optional): Policy configuration for PII detection, PromptShield, and moderation checks.
Outputs
- `status` (string): `passed` or `blocked`.
- `passed` (boolean): `true` when all configured checks pass.
- `errors` (array): Violations or execution errors, including category, severity, and action metadata when available.
Branching
If the step is configured with a failure branch, blocked results can include a `targetStepId` so the Flow can continue on that branch.
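The output and branching behavior described above can be sketched in TypeScript. The field names (`status`, `passed`, `errors`, `targetStepId`) come from this page; the `GuardrailResult` type and `nextStep` helper are illustrative only, not part of the product API:

```typescript
// Hypothetical shape of the Guardrail step output, assembled from
// the fields documented above. Illustrative only.
interface GuardrailError {
  category: string;
  severity: string;
  action?: string;
}

interface GuardrailResult {
  status: "passed" | "blocked";
  passed: boolean;
  errors: GuardrailError[];
  targetStepId?: string; // present when a failure branch is configured
}

// Illustrative dispatch: follow the failure branch when one is
// configured, otherwise halt the Flow (returns null).
function nextStep(result: GuardrailResult, defaultStepId: string): string | null {
  if (result.passed) return defaultStepId;
  return result.targetStepId ?? null;
}

const blocked: GuardrailResult = {
  status: "blocked",
  passed: false,
  errors: [{ category: "PII", severity: "high", action: "block" }],
  targetStepId: "handle-violation",
};

console.log(nextStep(blocked, "send-to-llm")); // "handle-violation"
```

The key design point is that `passed` alone decides whether the Flow proceeds normally; `targetStepId` only matters on the blocked path.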
Notes
- Place this step before sending user content to LLMs, tools, or external systems.
- Empty content is treated as blocked.
- Use the `errors` array for audit logs, explanations, or final responses.
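As one way to use the `errors` array for audit logs, the sketch below formats each violation as a log line. The field names follow the metadata listed above (category, severity, action); the `auditLines` helper and its output format are hypothetical, not part of the product:

```typescript
// Illustrative: one audit-log line per violation in the step's
// `errors` array. Field names are from the Outputs section above;
// the formatting is an example, not a documented format.
interface Violation {
  category: string;
  severity: string;
  action?: string;
}

function auditLines(stepKey: string, errors: Violation[]): string[] {
  return errors.map(
    (e) =>
      `[guardrail:${stepKey}] ${e.category} severity=${e.severity}` +
      (e.action ? ` action=${e.action}` : "")
  );
}

console.log(
  auditLines("guardrail", [
    { category: "PromptShield", severity: "medium", action: "block" },
  ])
);
// → ["[guardrail:guardrail] PromptShield severity=medium action=block"]
```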

