Descriptions

An Overview of Large Language Models (LLM) Intent Classification and Its Implementation in Parloa


Last updated 5 months ago


Parloa’s LLM Intent Classification enhances your bot’s ability to understand user intents. This feature works alongside the traditional Natural Language Understanding (NLU) approach, which depends on a trained speech model. Unlike traditional methods, the LLM method uses natural language descriptions. This simplifies the process of creating intent systems and eliminates the need for separate intent utterances.

This feature is available for Textchat 2 and Phone 2 platforms.
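Conceptually, description-based classification replaces lists of example utterances with plain-language intent descriptions that an LLM reads at classification time. The sketch below is purely illustrative (it is not Parloa's implementation, and the intent names and helper function are invented for this example); it shows how such a classifier might assemble a prompt from intent descriptions:

```python
# Illustrative sketch, NOT Parloa's implementation: a description-based
# classifier gives the LLM each intent's natural-language description and
# asks it to name the best match, with a fallback when nothing fits.

INTENT_DESCRIPTIONS = {
    "cancel_contract": "The user wants to cancel or terminate their contract.",
    "billing_question": "The user has a question about an invoice or a charge.",
}

def build_classification_prompt(user_message: str) -> str:
    """Compose a prompt asking the model to pick the best-matching intent."""
    lines = ["Classify the user's message into one of these intents:"]
    for name, description in INTENT_DESCRIPTIONS.items():
        # Each intent contributes only its description -- no example utterances.
        lines.append(f"- {name}: {description}")
    lines.append(f'User message: "{user_message}"')
    lines.append("Answer with the intent name only, or 'fallback' if none match.")
    return "\n".join(lines)

print(build_classification_prompt("I want to end my subscription"))
```

Because the model reasons over descriptions rather than matching against trained examples, adding an intent is a matter of writing one clear description instead of curating many utterances.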

Activation Process

To use LLM Intent Classification, you must activate it, as it is not enabled by default. Follow these steps:

1

Contact Support

  • Email our support team at support@parloa.com, or

  • Reach out to your Customer Success Manager.

2

Apply to Your Workflow

After activation, you can add this feature to Start Conversation and State blocks in your bot’s workflow.

Using LLM Intent Classification involves additional costs. For pricing details or contract modifications, consult your Customer Success Manager.

LLM-based systems may introduce slight delays in intent recognition.

Use Case Benefits

  • For new bots: Start with LLM classification to avoid creating multiple example utterances.

  • For existing bots: Use LLM classification to reduce dependency on speech models for intent recognition.

Enabling LLM Intent Classification on a Block

1

Select the Intent Detection Method

  1. Click the block you want to update (Start Conversation or State).

  2. Go to the Intents tab and open the Detect Intent by dropdown menu.

    Note: If LLM is not enabled, the menu will default to Utterances, and the Description option will not be selectable.

  3. To enable LLM, select Description from the dropdown menu.

  4. An icon will appear on the block, confirming your selection.

2

Add Intent Descriptions

You can provide intent descriptions using one of two methods:

Option 1: Via the Intents Tab

Click the edit icon next to the intent. An edit panel opens where you can enter your description.

Option 2: Via Speech Assets

  1. Navigate to Speech Assets -> Intents.

  2. Add intent descriptions directly in the intent list.

Note: Intent descriptions are mandatory. If missing, an error icon will appear, indicating the intent is non-functional.

Crafting Effective Intent Descriptions

  • Write in simple, clear language.

  • Limit descriptions to 500 characters.

  • Aim for a length of 70 to 125 words to ensure accuracy and clarity.
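As an illustration of these guidelines, a description for a hypothetical contract-cancellation intent (invented for this example and condensed for brevity) might read:

```
Intent: CancelContract
Description: The user wants to cancel, terminate, or end their contract or
subscription. This includes requests to stop a service at the end of the
current billing period, questions about how to cancel, and statements that
they no longer want the product. It does not cover requests to pause or
downgrade a plan.
```

Note how the description states what the intent covers in plain language and explicitly excludes adjacent requests, which helps the model draw boundaries between similar intents.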

Frequently Asked Questions (FAQs)

Can I continue using the Utterances method if I enable LLM Intent Classification?

Absolutely. You can continue to use the Utterances method even after signing up for LLM Intent Classification (Descriptions).

What happens if I enable the LLM descriptions but don't add any descriptions to the intents?

In such cases, your bot will trigger the fallback intent.

Am I still able to train my speech model with LLM Intent Classification enabled?

Yes, you can continue to train your speech model as usual. Utterances will remain visible and editable.

Can I turn off the LLM feature or deselect utterances entirely if I choose to?

Yes, you have the flexibility to disable the LLM feature or deselect utterances in certain blocks at any time.

How can I identify which of my blocks are powered by NLU or LLM?

You can distinguish them by the icon displayed on the Start Conversation or State block.

Is it possible to enable LLM Intent Classification only for certain State blocks?

Yes, you can selectively apply LLM Intent Classification to specific blocks.

Will the caller's experience change if I use LLM for intent classification?

No, the caller experience remains exactly the same when using LLM for intent classification.

Got more questions or want to share your feedback?

Please reach out to us at support@parloa.com.
