Parlant

Build Chat Agents You Can Trust

Reliable behavior steering and compliance controls for LLM agents



– Loved by control freaks throughout the industry –

Infosys
Slice Bank
Indian Bank
Greenko Group
Tencent
Bytedance
Prime Communications
Honeywell


Govern every interaction.

Parlant's conversational AI server sits between your frontend and your LLM provider, managing the entire lifecycle of every interaction and keeping it compliant with your rules, on track, and auditable.

[Architecture diagram: your frontend and backend connect to the Parlant server, which manages sessions, messages, customers, guidelines, journeys, glossary, responses, tools, and retrievers (backed by knowledge bases, databases, and APIs), and orchestrates calls to LLM providers such as OpenAI.]


Fluid where it should be, predictable where it matters.

Get the engagement of modern LLMs with the reliability your use case demands.

Traditional Chatbots

  • Flows are rigid, scripted turn by turn
  • Unexpected input breaks everything
  • Business-compliant, but not engaging
  • 100% predictable, 0% flexible

Parlant

  • Rules are enforced, every decision is traced
  • Interactions are guided, not scripted
  • Strict output control where it's needed

Freeform LLM Agents

  • Behavior is inconsistent by design
  • Users can easily derail it
  • Engages users, but carries risks
  • 0% predictable, 100% flexible
Traditional chatbots are too robotic; freeform LLM agents are too unpredictable; Parlant sits between the two.


Designed for enterprise control.

Leverage robust control primitives for customer-facing AI that's consistent at scale.

Define behavioral rules that trigger based on conversational context. When multiple guidelines apply, Parlant merges them intelligently in-context—no rigid or manual routing required.

import parlant.sdk as p

# Assumes an agent created with the SDK (e.g., agent = await server.create_agent(...))
# and a registered tool named get_high_risk_product_disclosure.

# Detect when the agent is discussing high-risk products
discussing_high_risk_products = await agent.create_observation(
    "discussing options, crypto, or leveraged ETFs"
)

risk_disclosure = await agent.create_guideline(
    condition="customer expresses clear interest in buying high-risk products",
    action="provide risk disclosure and verify customer understands potential losses",
    tools=[get_high_risk_product_disclosure],
    criticality=p.Criticality.HIGH,  # Pay extra attention to this guideline
)

# Only consider the risk disclosure when the high-risk observation holds
await risk_disclosure.depend_on(discussing_high_risk_products)


Built for conversations that matter.

Enabling LLM interactions where control and compliance are essential

Financial Services


Tailor recommendations to each customer's risk tolerance and portfolio, enforce suitability disclosures, and keep full audit trails for regulatory review.

Healthcare


Enforce clinical protocols during triage, prevent unauthorized medical advice, and ensure proper handoffs to licensed professionals.

Customer Support


Guarantee policy-accurate responses on refunds and warranties, maintain brand voice, and follow escalation paths consistently.

Legal


Prevent unauthorized legal advice, enforce required disclaimers, and maintain documentation trails for every client interaction.



Open and growing.

"Parlant isn't just a framework. It's high-level software that solves the conversational modeling problem head-on. Thank you for building it."

Sarthak Dalabehera, Principal Engineer, Slice Bank

"We tested Parlant extensively. The failure patterns that exist in our production logs — capturing them in Parlant takes just a few minutes."

Vishal Ahuja, Senior Vice President, Applied AI, Chase

"We went live with a fully functional agent in one week. I'm particularly impressed by how consistent Parlant is with its responses."

Arpit Parashar, Deputy Manager, Greenko Group


Frequently asked questions

Do I need to define rules for everything the agent says?

The LLM still handles general conversation naturally. Guidelines and journeys define behavioral expectations for specific situations—everything else works as you'd expect from an LLM. If you don't need special handling, you don't need to define any rules.

Why not just put all of this in the system prompt?

When you write prompts, you're relying on the LLM to juggle all your instructions at once, and it often can't. The more rules you add, the less reliably each one is followed. Parlant takes a different approach: you declare guidelines, and Parlant orchestrates what the model sees at each turn. Only relevant rules are included in each request, keeping the model focused rather than overwhelmed. It's this orchestration that makes guidelines far more consistent than prompting alone.
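The orchestration idea can be sketched in plain Python. This is a toy illustration only, not Parlant's implementation: the keyword matching stands in for Parlant's semantic condition evaluation, and all names here are invented.

```python
# Toy sketch: declare many guidelines, but include only the ones whose
# condition matches the current turn's context in the model's prompt.
def relevant_guidelines(guidelines, context):
    """Select guidelines whose condition keywords appear in the context."""
    return [
        g for g in guidelines
        if any(kw in context.lower() for kw in g["condition_keywords"])
    ]

guidelines = [
    {"name": "risk_disclosure", "condition_keywords": ["options", "crypto"]},
    {"name": "refund_policy", "condition_keywords": ["refund", "return"]},
    {"name": "escalation", "condition_keywords": ["complaint", "manager"]},
]

# Only the refund guideline reaches the model on this turn; the other
# rules stay out of the prompt, keeping the model focused.
active = relevant_guidelines(guidelines, "Hi, can I get a refund on my order?")
print([g["name"] for g in active])  # → ['refund_policy']
```

In Parlant the matching is semantic (conditions are evaluated against the conversation by a model), but the effect is the same: prompt size stays bounded no matter how many guidelines you declare.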

Won't controlling the agent's responses make it rigid?

The real rigidity of traditional chatbots comes from tree-based flows that force conversations through predefined branches, not from controlled wording. Even in the optional strict canned-responses mode—which lets you control exact wording when needed—the agent still chooses when to use each response based on the flow of the interaction, just as call-center reps do. The conversation stays flexible, but you get precise wording where it matters.

How does Parlant prevent hallucinations?

Use canned responses with strict composition mode for control over wording. Beyond that, the key lies in Parlant's response-selection mechanism: each response can reference fields (coming from tool results, retrievers, or guidelines), and if a required field isn't present in the current context, Parlant automatically disqualifies that response from selection. With proper agent design, the agent can't claim something happened when it hasn't, or vice versa. These guardrails are structural and deterministic, not just prompt-based, which keeps behavior reliable even at scale.
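The disqualification mechanism can be illustrated with a small plain-Python sketch (invented names; not Parlant's actual code): a response template that references a field is only eligible when that field actually exists in the current context.

```python
import re

def eligible_responses(templates, context_fields):
    """Keep only templates whose {field} references all exist in the context."""
    eligible = []
    for template in templates:
        required = set(re.findall(r"\{(\w+)\}", template))
        if required <= set(context_fields):
            eligible.append(template)
    return eligible

templates = [
    "Your refund of {refund_amount} has been issued.",
    "I can look into that refund for you.",
]

# No tool has produced refund_amount yet, so the agent structurally cannot
# claim the refund was issued -- only the second response survives.
print(eligible_responses(templates, context_fields={"customer_name"}))
```

Because eligibility is a set-membership check rather than a model judgment, this kind of guardrail behaves identically on every request.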

Which LLM providers can I use?

Parlant is LLM-agnostic, so you can use any provider and model—and many teams do. That said, when consistency and reliability matter, some models perform better than others. The officially recommended providers are Emcie, OpenAI (either directly or via a cloud provider like Azure), and Anthropic (also via AWS Bedrock or others).



Get started

pip install parlant
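From here, a minimal agent can be sketched as follows, based on Parlant's Python SDK (the agent name and description are placeholders, and this assumes the SDK's Server and create_agent API as shown in the framework's examples):

```python
import asyncio
import parlant.sdk as p

async def main():
    # Starts the Parlant server locally and manages its lifecycle
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto",  # placeholder name
            description="Helps customers with their orders",
        )
        # Attach guidelines, journeys, and tools to the agent here

if __name__ == "__main__":
    asyncio.run(main())
```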