Stay Updated with our Release Notes
Your gateway to the latest enhancements, bug resolutions, and innovative features. Each release is systematically cataloged by month and year for easy reference.
At Parloa, we specialize in utilizing advanced AI technologies to enhance customer service experiences within contact centers. Our platform integrates generative AI, natural language understanding, and other AI components to create realistic, human-like interactions across various communication channels such as phone, chat, and messenger services.
The advantages of our approach include:
Automating contact centers: By reducing wait times and motivating agents, we deliver superior customer service experiences.
Augmenting agent capabilities with AI: We provide real-time translations and fast responses, which streamline communication and efficiency.
Integrating with existing business infrastructure: Our platform is designed to work seamlessly with your company's existing systems, making it versatile and adaptable to different business needs.
Ensuring enterprise readiness: Our platform is robust enough to handle the complexities of large enterprise environments, ensuring reliability and scalability.
Our documentation is divided into the following product offerings:
Fix the issue where project page pagination incorrectly reflects the displayed page.
A new custom STT model "Hints" is added!
Ability to remove a user from a tenant via the Settings tab with Edit Team permissions.
The SIP Header name validations in the Call Control block are improved.
The Registration Interval input is removed from release modals.
Fix the issue where the Release status notification does not sync with the instance state.
Fix the issue where creating an intent and adding it to a group from a block creates duplicated views.
LLM Slot extraction now runs in parallel with other slots!
Fix the issue where Dialogflow-based releases are not communicating with API Key.
Fix the issue where the text snippets order settings were not working properly.
Protect the production release version
A project version assigned to a production release is no longer editable without Production Release Access permission.
Masking sensitive data/secrets in analytics transaction data
Added jumping functionality in the debugger
Added new TTS voices
Ada Multilingual (en-GB)
Arabella Multilingual (es-ES)
Alloy Turbo (en-US)
Echo Turbo (en-US)
Fable Turbo (en-US)
Onyx Turbo (en-US)
Shimmer Turbo (en-US)
Add support to enable recordings for releases (internal via admin portal)
Dialog loops are now visible in the debugger
We improved the UI/UX of the debugger
Fix the issue where multiple enter key presses create duplicate versions.
The Full Feature Base Template (FFBT) is a comprehensive framework designed to streamline the development of your Parloa RBA (Rule-Based Automation) bot. By leveraging the FFBT, you can accelerate your implementation process, as it includes a wide array of pre-built, frequently used functions and features in a ready-to-use bot flow.
Stay tuned for updates, as we continue to enhance the FFBT with numerous improvements and new features from our development backlog.
Fix the issue where the MLB modal is not rendered correctly.
Fix the issue where the invitation buttons are rendered incorrectly.
Fix the issue where the cursor jumps to the end in the normalized list slot values input.
Fix the issue where the close button in the entity modal closes the entire application when opened in a new tab.
Audit logs iteration 2:
Project action logs
Release action logs
Reset options for strategy selection in text snippets.
Opening and closing release settings without making any changes removes the last modified column value.
Importing a project with a duplicated name gives an unclear error message.
Fix that a modification of custom-STT settings caused the NLU training to fail.
Fixes the issue where Input fields of variables in a GenAI block carry the values to the next input until the sidebar is refreshed.
Fixes a problem where in edge cases intermediate responses were played unexpectedly.
New Language and Azure Voices:
Slovenian
Bosnian
Fixed the issue where importing or creating a project from templates does not display intent groups.
Fixed minor UI inconsistencies.
Fixes an issue where DTMF Intent shows an Error Message when LLM Intent Classification is activated.
Fixes an issue where service block branches and variables are not displayed if the service description is too long.
Fixes an issue where the Date Filter in the Training Screen does not filter properly.
Updated the design of speech assets Intents scene for an improved user experience.
Slots and storage variables are now iterable in JavaScript snippets.
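For illustration, a minimal sketch of what iterating over slot values could look like inside a JavaScript snippet. The `slots` object below is a hypothetical stand-in for the runtime data, not the actual Parloa API:

```javascript
// Hypothetical stand-in for the slot values available in a snippet.
const slots = { city: "Berlin", date: "2024-05-01", persons: "2" };

// With iterable slots, a snippet can collect every filled value generically
// instead of referencing each slot by name.
const summary = Object.entries(slots)
  .map(([name, value]) => `${name}=${value}`)
  .join(", ");
// summary → "city=Berlin, date=2024-05-01, persons=2"
```

Iterability makes generic patterns like logging or validating all filled slots possible without hardcoding each slot name.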
OpenAI Text-to-Speech via Azure is now available to be used in projects! For more details, see: OpenAI via Azure (Preview)
Fixes an issue that results in invalid training data, caused by providing multi-line utterances.
Fix in sub-dialog scene where sub-dialogs show an incorrect connected state.
Fixes an issue where cached values for external service calls were returned, even after the customer disabled the caching feature for a service.
Adjusts prompt structure to improve the experience with multilingual voices by preventing accent switches.
Fixed UI inconsistencies in font style, row color, and button visibility for the selected intent state.
Fix the issue where the Analytics Test Request Button wasn't working for existing releases.
Bring consistency to the Add buttons notation.
Fixes unexpected changes in analytics web-hook password after editing a release.
PII data is pseudonymized: it is replaced with realistic fake data.
The following PII data is anonymized:
Person
Addresses
ID (of any kind, including phone numbers, credit cards, license plates, IP addresses, internal IDs, ID card numbers, booking numbers, etc.)
Dates (of any kind, including date of birth)
Email address
Utterance and slot extractions will be anonymized.
Viewing Context in the training scene requires certain permissions.
Fix the issue where pasting blocks in a series pops up the "Outdated information" popup.
Fix the issue where deleting a block immediately after deleting another block performs an undo action.
Fix the issue where importing a subdialog in some cases did not display all of the imported intents in speech assets.
Fixed minor UI inconsistencies
A new Edit and Save method is implemented for service authentication to make it more error-proof.
The Tooltip of Local logic in blocks is updated to reflect that the no-input branch is only for phone 2 releases.
Fix the issue where calling a bot with a long service connection twice in a row makes the bot jump to the start in the middle of the second call.
Fix the issue where the NLU State pop-up during training does not show the whole message.
Removed the SOAP format flag on the
Enabled the to return a 'position' field containing latitude and longitude
Improved the by fixing certain edge cases
Improved the by allowing to use a * character as the department setting
We now record audit logs for dialog events like adding, removing, and editing blocks.
A new permission to view audit logs.
A new scene to view audit logs.
FFBT templates are added to the default template list.
New design for the projects scene!
The customer switch function is now integrated into the project's scene.
Profile settings have been separated from tenant settings.
Tenant settings have been relocated to the top bar.
Services are editable from a separate modal on the graph scene.
New azure voices are added!
Fixes the bug where a hangup while the Welcome event is being processed leads to events with different conversationIds.
Fixes the bug where the storage variable (session) is set to the previous value in the following transaction.
Fixes the bug where the customer name in the top bar wasn't changing after switching customers.
Fixes the bug where incoming and outgoing links from the service blocks disappear after adding a branch.
Fix the issue where the "Last Modified" column order can no longer be changed.
Fix the issue where whitespace was allowed in SIP headers in the call control block.
Fix the issue where the import of nested subdialog in the same project referred to the original nested subdialog.
Fix the issue where removing the 2nd SIP credentials from the Phone2 release did not actually remove them.
Third Iteration of Audit Logs:
Intent actions
Slot actions
Fix the issue where, in rare situations, a conversation has multiple call IDs.
Enforced User Acceptance Policy.
The available platform list for prebuilt slots has been updated.
Fix the issue where editing a variable name does not update it in the storage block, which creates inconsistency.
Fix the issue where renaming a variable makes it impossible to add in the intent assignments scene.
Fix the issue where the child elements of storage blocks are greyed out.
Fix the issue where renamed variables do not work when triggered.
Fix the issue where editing the Refer-To input in the call control block does not produce audit logs.
Resolved an issue where some intermediate responses caused the final response to be played twice.
Fixed an issue related to the caching of TTS results
New TTS Voice fr-FR-VivienneMultilingualNeural (Female) added
Note: If you're using the fr-FR-VivienneNeural voice for your bot, the voice will automatically switch to the fr-FR-VivienneMultilingualNeural voice
Resolved an issue where intermediate responses arriving after the final response caused a call to fail.
2 new (de-DE) TTS voices added
de-DE-FlorianMultilingualNeural (Male)
de-DE-SeraphinaMultilingualNeural (Female)
Updated Note: If you're using the de-DE-Seraphina voice for your bot, the voice will automatically switch to the de-DE-SeraphinaMultilingualNeural.
5 new (en-US) TTS voices added
en-US-AvaNeural (Female)
en-US-AvaMultilingualNeural (Female): bright, engaging female voice with a beautiful tone
en-US-AndrewMultilingualNeural (Male): warm, engaging male voice that sounds like someone you want to know
en-US-EmmaMultilingualNeural (Female): friendly, light-hearted, and pleasant female voice that works well for education and explanations.
en-US-BrianMultilingualNeural (Male): youthful, cheerful, and versatile voice well-suited to a wide variety of contexts.
Added warning message that providing relative dB volumes in prosody SSML tag is not available for Phone 2 releases
Error messages shown during project export now contain more information
Subdialogue import does not work on Windows computers
SSO/OIDC integration now utilizes the auth_info endpoint to request missing user information from the id-token.
Fixed a UI bug where the headers in the Intents Platform settings appeared too small.
Resolved an issue where the autocomplete pop-up did not fully display names of long text snippets, ensuring complete visibility.
Custom session variable lifetime is now live!
Fixed a UI bug where deleting a text from a text-pool in the text-snippet scene, results in an inconsistent UI state. It looked like the wrong text had been deleted.
Corrected a bug where Slot Regex validation accepted improperly escaped regex statements.
Fixed an issue where the JavaScript info windows in storage blocks placed under the Variables cards were not functioning correctly.
We are upgrading to GPT-4 Turbo for improved performance with reduced latencies.
In Parloa UI, passwords, tokens, and API keys will now be hidden by default for improved security.
Resolved an issue where the status modal in deployments was not scrollable.
is live!
Resolved an issue where the utterance collector queue did not properly escape regular expressions, resulting in job failures. This fix ensures that all special characters in regular expressions are correctly escaped.
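As a generic illustration of the kind of escaping this fix ensures (this is not Parloa's internal code), the standard approach is to escape every regex metacharacter before embedding user text in a pattern:

```javascript
// Escape regex metacharacters so arbitrary text can be safely embedded
// in a RegExp (the widely used pattern documented on MDN).
function escapeRegExp(text) {
  return text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Without escaping, "2+2" or "(roughly)?" would be treated as regex syntax.
const utterance = "How much is 2+2 (roughly)?";
const pattern = new RegExp(escapeRegExp(utterance));
// pattern.test("How much is 2+2 (roughly)?") → true
```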
Fixed an issue that caused the Large Language Model (LLM) intent recognition to return intents which were not part of the speech model.
Resolved the issue where, following the saving of a project as a new version, assigned services were becoming disconnected from their corresponding service blocks.
Fixed the issue where selecting a variable for a service block was reshaping the input incorrectly.
Addressed the problem where Autosave would spam error messages if a JavaScript statement in a text snippet condition contained syntax errors.
Resolved an issue where attempting to create a new version in large projects with many existing versions would result in a 401 error, despite the version being created successfully.
Fixed the bug where, after ignoring or adding all the data on the last page of a training scene, the previous page did not automatically load.
Corrected the issue where the information (i) icon in the profile settings would show nothing if there were no descriptions in a role.
User-ID Based Rate Limiting – Introduced user-ID based rate limiting to provide improved access for individuals on shared networks, optimizing the user experience across diverse usage scenarios.
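As an illustration of the general technique (Parloa's actual limits and algorithm are not documented here), rate limiting by user ID rather than by IP can be sketched as a fixed-window counter keyed by user ID; `WINDOW_MS` and `LIMIT` are made-up values:

```javascript
// Illustrative fixed-window rate limiter keyed by user ID, so users behind
// one shared network address no longer throttle each other.
const WINDOW_MS = 60_000; // hypothetical window length
const LIMIT = 100;        // hypothetical requests per window

const windows = new Map(); // userId -> { start, count }

function allowRequest(userId, now = Date.now()) {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    // First request in a fresh window: reset the counter.
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= LIMIT;
}
```

Because the counter is per user ID, exhausting one user's quota leaves other users on the same network unaffected.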
Fixed issue where the Export function was not working for large projects.
Fixed an issue where RegEx error occurred in NLU response due to 'bad character range' while parsing text.
Fixed an issue where some regex patterns gave an 'Unterminated group' error and did not save the RegEx.
Fixed an issue where the Conversation History API returned Bot messages which were not sent to the customer.
Fixed an issue where EndConversation and HangUp events were missing in conversation data.
: Added support for encrypting environment variables.
: By default, intermediate responses are now always played to the caller. Bot builders have the option to enable skipping these intermediate responses if the final response is available.
(formerly known as LLM FAQ) is now configurable through GenAI Block!
Fixed a bug where the PARLOA.EndConversation intent incorrectly appeared in the conversation context following an SSML error.
Addressed an issue in Text Snippets where condition IDs were incorrectly returned as null, preventing their separate selection.
Added a new column in the Releases scene to display the last editor of a release.
Updated texts across various sections for improved consistency.
Fixed the issue where changing the tenants was not clearing template caches.
Fixed the issue where the example in the Number tab of the storage block was showing the same example as the Javascript example.
Resolves an issue in Phone2 releases by ensuring Language Model (LLM) intent classifications take precedence over Natural Language Understanding (NLU) classifications.
Fixed an issue where using the '&' sign in any context incorrectly converted it to 'und'. It will now correctly translate or convert according to your dialog's language.
Enhance your bot's understanding of user intents with Parloa's new LLM Intent Classification feature! See the documentation for more information.
Parloa provides the following authentication methods:
Overview of Parloa and its capabilities
Begin your journey in Parloa by starting a new project. Each project is a gateway to building sophisticated Dialogs.
After signing into Parloa, you will see a list of your projects, displayed based on your account permissions. Each project encapsulates all the elements of a Dialog, constructed from Blocks, which are the fundamental components of a conversation.
Key Features:
Project List: Keep track of your projects with details like names, last modification, and creation dates.
Search Bar: Quickly find projects using relevant keywords.
Import Project: Seamlessly incorporate external projects into Parloa.
Add New Project: Effortlessly initiate new projects within the platform.
Pagination: Navigate through multiple projects with ease.
Each project in Parloa contains Dialogs, which are constructed from Blocks. Blocks are the building blocks of any conversation, enabling you to add functionality.
This is where your creative process begins. The top menu bar displays the following components:
The Environments tab enables you to test your Dialogs in various settings, mimicking real-world conditions. This step is key to refining your Dialog and ensuring its readiness for users.
: Visualize and manage your dialog flows.
: Use predefined text for different scenarios.
: Enhance the Natural Language Understanding (NLU) Model with intents, utterances, slots, and more.
: Store data like user preferences within your dialogs.
: Handle complex interactions within your main dialog.
: Connect your dialog to external systems and databases.
Releases: The Releases tab in the window is your go-to for publishing your project. Here, you can add, search, or update a release, choosing the platform that best fits your needs.
Training: Enhance the accuracy of your Dialog's speech recognition. The tab enables you to add misidentified cases to your training dataset, improving the model's performance in future iterations.
Debugger improvements now allow the loading of older transactions in the debugger.
Resolved alphabetical ordering issue in text snippets.
Addressed connectivity display errors in certain subdialogs.
Introduced support for login using external Identity Providers via OpenID Connect.
Fixed issues with sorting conditions in text snippets and creating new conditions.
Enhanced login options with limited support for external Identity Providers via OpenID Connect.
Upgraded debugger performance and increased transaction display to last 50.
Updated UI: Changed deactivate user button color, prevented multiple clicks on export button, and improved intent and response generation with GPT-4.0.
Modified session reset in debugger to retain training context.
Set refresh token lifetime for users to 7 days.
Added 7 new Azure TTS voice options:
de-DE-SeraphinaNeural (Female)
es-ES-XimenaNeural (Female)
fr-CA-ThierryNeural (Male)
fr-FR-VivienneNeural (Female)
it-IT-GiuseppeNeural (Male)
ko-KR-HyunsuNeural (Male)
pt-BR-ThalitaNeural (Female)
Addressed various UI issues including scrollability in version dropdown, extended view in debugger, and focus issues in large lists.
Improved stability and performance in editing and deleting intents.
Launched Global Voice Settings for SSML prosody adjustments.
Addressed deployment issues with Dialogflow debugger and project-specific updates.
Added new Azure voices for English (US):
Andrew
Brian
Emma
Updated account security with Captcha and maintained training context during session resets.
Resolved the issue where the 'Last Modified' column did not accurately display the latest modification time on the project page.
Fixed the problem with the TextArea in the note sidebar not expanding as expected.
Corrected the non-responsive right side of block items in the interface.
Addressed the partial triggering issue in dialog deployments, ensuring full functionality.
Resolved an issue preventing the creation of new versions.
New Branding is live!
Upgraded utterance generation to GPT 3.5 Turbo.
Updated Azure TTS voice list for enhanced voice options.
Fixed deletion issues in bulk edit panels for intents and dictionaries.
Addressed an API error where conversation history included unsent messages.
Corrected the swapping of external services blocks after cloning a version.
Enhanced JSON validation for Textchat 2 JSON-Response Elements.
Fixed issues with large training data in edit modal.
Resolved JSON payload validation to accept only objects.
Multilingual bots are live!
Discontinued Google Assistant support.
Included intermediate responses in the conversation history API.
Updated UI: Enabled cross-platform use of Phoneme SSML and improved modal warning prompts.
Enhanced conversation analytics with additional data points.
Applied the first version of branding with updated logos.
Expanded language support and adjusted context limits for TextchatV2.
Updated default templates with new DTMF Intent and slot.
Addressed various issues in text snippet scene and JSON Payload modal.
Improved UI elements and fixed bugs related to conditional entries.
Resolved an internal server error when saving new versions containing text snippets.
Enabled copying and pasting of subdialogs within the same version.
Updated Parloa support email for better customer service.
Increased the import project limit to 25 MB for enhanced project handling.
Modified Dispatch PARLOA.EndConversation to trigger only for pre-existing intents.
Fixed debugger window closure issue on "show block" action.
Resolved error pop-ups being obscured by the status modal.
Addressed SSML Emphasis function returning an empty list.
Corrected exporting inconsistencies in project versions.
Fixed UI issues with condition block editing and project version display.
Introduced AI-driven generation of utterances and alternative responses/prompts.
Enhanced analytics to display intermediate responses.
Added "," as a separator in slot synonyms for more flexibility.
Improved search functionality across Parloa tables.
Upgraded acoustic echo cancellation with customizable filters per release.
Fixed issues with utterance duplication in intents.
Resolved sidebar closure problems when deleting services.
Corrected automatic deployment issues in changed environments.
Addressed response playback errors after long intermediate responses.
Fixed call disconnection issues and expanded REFER number target capabilities.
Introduced the Intermediate Response block for pre-service call prompts.
Enabled custom Azure STT integration in bots.
Fully supported the Chinese language.
Implemented a hangup event for precise call termination notifications and actions.
Enhanced the Continue Listening feature for improved user interaction.
Corrected number storage variables returning as strings.
Addressed carryover issues in Text Snippet conditional entries.
Added Call Forward feature for Phone 2 releases, including caller ID override.
Enabled dialog version access in Data Analytics.
Introduced support for Japanese in TextchatV2 and PhoneV2.
Fixed the issue of empty stt.candidates returns.
Addressed the missing error message display for defined utterances.
Resolved the issue of the debugger not displaying information on selected steps.
Implemented a new native “Number” type in storage blocks.
Expanded language support to include Thai in TextchatV2 and PhoneV2.
Enhanced Call Control blocks to strip whitespaces in numbers and headers.
Improved UI to indicate duplicate values in slots (bulk) editor.
Addressed slow rendering of long utterance lists.
Corrected the assignment of the debugger environment to a release.
Improved service block connection functionality in the services tab.
Enhanced error handling for failed SIP connections.
Fixed long project names affecting header tab visibility.
Resolved saving issues in Bulk Edit modal for slot synonyms.
Corrected issues with rapid changes not being saved properly.
Enabled the selection of specialized STT models for specific states, initially available for German spelling.
Introduced the ability to access multiple slot matches through slotsAll.[entity] and multiple STT results via stt.candidates.
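A sketch of how multiple slot matches and STT candidates might be combined in a JavaScript snippet — the object shapes below are illustrative assumptions, not the documented API:

```javascript
// Hypothetical shapes of the new objects, for illustration only.
const slotsAll = { city: ["Berlin", "Bern"] };             // all matches per entity
const stt = { candidates: ["berlin", "bern", "burling"] }; // alternative transcriptions

// Pick the first slot match that also appears among the STT candidates,
// e.g. to disambiguate similar-sounding city names.
const confirmed = slotsAll.city.find(
  (value) => stt.candidates.includes(value.toLowerCase())
);
// confirmed → "Berlin"
```

Having every match and every transcription candidate available lets a snippet apply its own disambiguation logic instead of relying on the single top result.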
Enhanced training data augmentation to effectively handle inputs with diverse capitalizations.
Fixed the issue where typing a period in text snippets caused the cursor to jump to the end of the text.
Fixed the problem where refreshing the Subdialogs scene incorrectly redirected users to the Graph scene.
Corrected the saving of intent confidence thresholds, ensuring values are accurately set.
Addressed issues in the Bulk Edit of utterances, ensuring that canceling properly discards changes with duplicated utterances.
Resolved the issue where the mouseover on Slot variables in autocomplete incorrectly displayed a “Slot not found” message.
Implemented additional minor UI improvements and internal component optimizations for a smoother user experience.
Resolved freezing issues in the training scene during text interactions.
Made call connect ringing optional.
Streamlined project deletion process with warnings and confirmations.
Improved the handling of analytics data sending.
Enabled the use of the apostrophe in slots for French phrases.
Fixed regex slot issues and scrolling problems in text snippets.
Addressed analytics and training view issues.
Enhanced display of full entity names in the graph UI, along with additional descriptions.
Updated list of supported languages in the UI.
Enabled responsive text editing features.
Modified to show the 'Saved!' message only for relevant actions.
Implemented 'echo cancellation' using ffmpeg.
Streamlined process by removing separate messages upon deleting a DTMF.
Added the ability to filter STT-supported languages and voice styles based on user input, and improved sorting of voices for the selected language.
Introduced an option to make ringing optional during call connection.
Resolved an issue where storage and variables starting with '_' were not callable via keyboard but only via the plus symbol.
Fixed a bug where adding list values after an empty normalized value would not save them.
Addressed a login issue for new customer invitations received via email, where the 'add this customer' modal was not displayed.
Corrected the TTS functionality to fall back to the default speech subscription if the custom one fails.
Display of DTMF values in the conversation sidebar.
Updated text for the Routing Quality slider.
Inclusion of an OutOfScope Intent in each template, along with template updates.
UI fixes when switching tenants.
Resolution of empty slots in intent.rawPlatformRequestBody for fallbackIntent.
Increased width of conversation bubbles in the training sidebar.
Updated icon for TextchatV2 to prevent incorrect response selection.
Specified Call Forwarding as valid only for Phone 1.
Assigning unique IDs to each call with a masked number.
Fixed issue with the sidebar getting stuck.
Adjusted the width of the 'Accept invitation' screen.
Resolved the reuse of registered phoneV2 credentials.
Addressed issues with NLU training.
Correction of cropped descriptions in the release table.
Synchronization of phone state in Release and Debugger scenes.
Extended and unified instance states for Release UI.
Inline slot examples no longer added as normalized slot values.
Added frontend validation for phone 2 SIP URI.
Resolution of issues with saving more than 200 values from bulk edit in list slots.
Updated runtime engine timeouts to support longer-running services, including ChatV2.
Fixed an error when editing slots that caused an error message due to duplicated slots.
Enabled parallel conversations from the same number.
Updated Duckling to v1.10.0.
Fixed display of analytics endpoint setting from a deleted release.
Addressed issues in the training sidebar with ChatV2 responses.
Resolved dropdown misalignment in the training view.
Fixed status issues with migrated phone gateway pipelines.
Standardized the SipConfiguration id field.
Addressed a display issue with text in speech bubbles in the training scene.
Improvements to Textchat2 to accept context information for events.
Fixed issues with deleted intent utterances not being reusable.
Resolved SSML break tag issues and direct auto closing.
Reset Alexa session in Debugger to create a new debugger skill.
Resolved an issue where a closed/changed response reopened after saving.
Fixed a renaming issue with intents via the graph.
Addressed crashing issues in the speech asset scene while opening a slot list.
Resolved user inability to remove the error branch from a service.
Corrected rendering issues for bot responses in the conversation sidebar of the training page.
Fixed saving issues with text responses in state block.
Resolved editing issues for services before providing a URL.
Addressed visibility issues of removed entrances of subdialogs in the graph scene.
Corrected cursor issues in the response autocomplete.
Fixed display issues of child elements and blocks not showing ports initially.
Changed the order of the fallback intent.
Added debugger functionality for Chat.
Swapped 'Back' & 'Cancel' buttons in the Release modal creation.
Made bottom buttons on modal visible on smaller screens.
Kept modal open when clicking outside.
Added a confirm button next to the input field.
Extended training data to reduce the effect of capitalization.
Fixed an error "Synonyms must be an array" when uploading Dialogflow models.
Resolved issues with SSML Prosody Rate not providing sensible defaults.
Addressed validation error display issues when downloading speech assets.
Fixed issues with 'New Condition' not opening the sidebar.
Resolved display issues with Textchat JSON Payload stating "The defined response is empty."
Fixed validation issues with utterances.
Addressed an error "Synonyms must be an array" when uploading Dialogflow models and list slot page crashes.
Fixed error messages when adding an intent in the training view.
Resolved an error when starting training "Aliases must be an array."
Fixed an issue where editing one slot inadvertently changed another slot with the same normalized value.
Resolved an issue where Kamailio manager client reloaded the dispatcher table instead of removing an RPC command.
Fixed a problem with RPC commands running in non-blocking mode, leading to undetectable promise rejection.
Fixed intent ranking import.
Changed the default value for END_SILENCE_TIMEOUT.
Prohibited adding the same release credentials multiple times.
Introduced a generic Chat API via Rasa: "Textchat 2".
Added new German bot voices.
Fixed an issue where the service branch ERROR was incorrectly flagged.
Resolved discrepancies in bulk edit slots showing different values than defined.
Corrected uWSGI encoding set to ASCII.
Fixed a bulk edit functionality issue in list slots.
Added a public endpoint to test the NLU.
Enhanced the debugger UI design.
Introduced an intent confidence variable.
Addressed cursor jumps in the regex value edit field.
Resolved issues with an empty Monaco editor.
Fixed a problem where duplicated utterances could be added to other intents through bulk editing.
Corrected an NLU issue causing an 8-second delay on the first interaction.
Fixed an issue with handling PARLOA.DTMF.
Resolved a type error that caused Parloa to crash.
Added GPU usage for the Training Model.
Set storage allocation when an intent is triggered.
Introduced intent groups in the graph.
Added debugger for routing dialog set storage.
Launched Azure Marketplace: Transactional Offer.
Resolved an issue with the intent group dropdown not functioning properly.
Addressed a problem preventing Operator from updating Kamailio dispatcher when scaling PGs.
Added NLU optimizations.
Fixed an issue with exporting all versions of the project at once.
Addressed an issue to not wait until the archive is completely streamed.
Fixed an attempt to prevent project export from blocking the event loop.
Resolved an issue where Mizu sent BYE before REFER was accepted.
Fixed an issue where removed pipelines were still connected with the bot.
Corrected the DID validator function having an initializer.
Added an option to exclude UUID from DID validation.
Updated the EN Template.
Added multiple text bubbles to the training scene for multiple text response elements.
Fixed an issue where editing the state block created a 'ghost response'.
Resolved an issue where adding '<>' in front of SSML text erased it.
Corrected the context button giving the wrong error.
Fixed Regex slot being returned as a number.
Resolved an issue with WhatsApp releases not setting session timeout manually.
Added text query as autocomplete placeholder.
Training now shows confidence for all intents, without applying the fallback threshold or relevant-intents filter.
The debugger now shows a single spinner during training.
Added timing in the debugger.
Ignored "Parloa shows connection lost" notification.
Added password validation requiring a minimum of 12 characters.
Fixed pagination dropdown to show more than 10 projects per page.
Addressed the issue of the graph switching the zoom level after refreshing.
Resolved issues with the customer list sometimes being empty.
Ensured that prosody always contains a field.
Fixed an error when adding a service input value to the service block and refreshing the browser.
Resolved slot values being deleted in utterances after switching tabs.
Fixed a 401 error on status call on the login page.
Changed the shutdown window for running pipelines to 0:00-6:30 on both weekdays and weekends.
Resolved an issue with outdated information error messages until the page is refreshed.
Introduced a "night mode" for the cluster which scales down all DEV resources at night.
Fixed the Kamailio-manager crash due to missing packages.
Added local error branch via UI.
Decreased zooming speed using the mouse wheel + cmd.
Aligned all tables to have the same height.
Improved speed of adding notes in the note block.
Resolved an issue where training data was still visible after 30 days.
Addressed long description release names not fitting into the table layout.
Removed the Phone 1, Text chat, and WhatsApp environments from the debugger, where they should not have been available.
Resolved a 502 Bad Gateway Error message while creating a release.
Corrected a problem where a cached badge was shown while the 'Cache Service Response' toggle was off.
Fixed an issue where the user couldn't create a phone 2 release with a peering pipeline.
Fixed an issue where the user couldn't set a fixed value in the service block.
Ensured that a service assigns a result to storage only if the storage exists.
Centralized delay, debounce, and setTimeout durations in a single source instead of hard-coding different numbers.
Provided an optional way to connect to MongoDB via TLS.
Improved communication when service results were cached.
Restricted SIP username to "-" "_" and alphanumeric characters.
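The SIP username restriction above can be sketched as a simple validation (a hypothetical illustration; the function name is an assumption, not Parloa's code):

```python
import re

# Allowed characters per the release note: alphanumerics, "-" and "_".
SIP_USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_sip_username(name: str) -> bool:
    """Return True if the username uses only the allowed characters."""
    return bool(SIP_USERNAME_RE.match(name))
```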
Published Duckling 1.6.0.
Monitored one VoiceOn bot with E2E Monitoring.
Created Infrastructure pipeline.
Fixed service timeout issues.
Resolved an issue where the user couldn't delete more than one intent at once.
Addressed errors returned in instance status calls not being rendered to users.
Added an autoscale to Peering Phone Gateways.
Fixed an issue where subdialogs were not opening from the list.
Migrated to react-router-dom v6.
Upgraded FE package.json from exact versioning to caret versioning.
Addressed an issue where utterance search queries were stored across projects and customers.
Fixed a problem where sub-dialog exit with leading emoji was not showing the name.
Resolved missing information on deployment failure.
Fixed an issue where the search for utterance was missing.
Handled exceptions in CompletableFutures on incoming calls.
Fixed issues with service timeout and retries that were not executed.
Resolved problems with enhanced Alexa instances not returning the correct production instance.
Increased replicas in Duckling HPA.
Fixed duplicated utterances in the training scene.
Using SIP REFER now sets endTalk to true.
Fixed broken sign-up functionality.
Addressed missing transactionId in the analytics hook request.
Merged all NLUs into one NLU service/deployment.
Defined Intent Groups in Speech Assets.
Removed the possibility of configuration via service variables.
Added graceful termination of phone gateways.
Introduced configurable service timeout per service definition.
Displayed in the debugger when cached service results were used.
Updated the UI screen to show only when necessary.
Canceled transitions on sidebar opening.
Fixed an issue where, after migration, the assigned role of the initial user was switched to editor.
Resolved a problem where the user was able to destroy a dialog with a set attributes block.
Addressed an issue with flickering scrollbars on Windows and HD Screens.
Added NLU backups.
Increased NLU Kubernetes health check timeout.
Fixed NLU restart alerts.
Implemented Redundant NLU.
Fixed an issue where the NLU model watcher threw an error when trying to load a new model.
Extended logging for phone-gateway and worker containers.
Resolved an issue with the sign-in rate limit not being reset.
Fixed an issue where the operator did not update stateful set pods.
Resolved a missing property "method" in service API calls.
Added generic-worker improvements.
Removed the feature to select an entire Utterance line.
Allowed the environment to be reused in multiple phone gateways.
NLU model updates now cause only short interruptions.
Enabled configuring service timeout and implemented retries.
Disabled DialogFlow interaction logging by default.
Restored lost trained NLU models from external storage like Azure Blob storage.
Refactored frontend API files.
Added persistence to the alert manager.
Implemented a script to synchronize the latest model between NLU replicas.
Fixed an issue where phone bots rang for a while before a call was picked up.
Resolved a problem with password policy not being enforced by the backend.
DTMF is now sent in multiple event requests when '#' is entered without any preceding digits.
Limited the barge-in of DTMF to a window starting after the prompt plays.
Ensured listening to DTMF even if the bot is speaking.
Set the finish of DTMF input with the '#' sign.
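The DTMF input behavior described in the entries above, digits collected until '#' terminates the input, can be sketched as follows (a hypothetical illustration, not Parloa's implementation):

```python
def collect_dtmf(digits):
    """Accumulate DTMF digits until '#' finishes the input.

    Returns (collected_digits, finished). Per the release notes, '#'
    terminates the input even when no digits were entered before it.
    """
    buffer = []
    for d in digits:
        if d == "#":
            return "".join(buffer), True
        buffer.append(d)
    # No terminator seen yet; input is still in progress.
    return "".join(buffer), False
```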
Addressed an issue with Alexa: Instance Switching not being executed.
Extended logging of the instance switch.
Re-enabled TwoLvl Cache and removed the settings from global configuration due to a different cache implementation of API and engine.
Updated red lines every x seconds for bulk edit utterances.
Updated Sidebar Inconsistencies.
Fixed New Intent Type Selection Padding.
Fixed block tag font issues.
Added extra permissions for Production Releases.
Improved weak password policy for 'TÜV Süd'.
Cached service responses.
Set the default upper confidence value back to 100% in the training scene.
Added UI for Service caching.
Upgraded to Kubernetes 1.20.
Changed instance type.
Extended CallMetaInformation and the dialog hook endpoint by a production flag.
Implemented retry of TTS requests if the endpoint is unavailable/exception thrown.
Operator handles switching from gRPC to REST.
Deployed Elastic APM to our cluster.
Fixed an issue with importing projects using Windows OS.
Resolved no reaction to user input + no re-prompt from the bot during conversation.
Addressed concurrent HTTP requests to NLU throwing an exception when calling timer.end.
Fixed not updating the pipeline if only the version in the instance changed.
Resolved Alexa publish-instance endpoint returning no instances.
Fixed TTS error not shown in the debugger.
Corrected 'heplify' not capturing packets.
Fixed the welcome event starting to listen to the caller before the response was handled.
Fixed the SSML break tag not working as expected.
Resolved crashed UI when clicking on a certain "response by phoneV2" debug element.
Fixed deactivated pipeline issue.
Resolved an issue where the UI failed to show the status of phone pipelines.
Set necessary resources for the peering pipeline.
Fixed the calculation of the service response cache key.
Included input variable names in the key calculation.
Ensured every cache key has the prefix service-response-.
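The cache-key fixes above can be illustrated with a small sketch: input variable names (not just values) contribute to the key, and every key carries the `service-response-` prefix. The function and hashing scheme here are assumptions for illustration, not Parloa's actual implementation:

```python
import hashlib
import json

def service_response_cache_key(service_name, inputs):
    """Build a cache key from the service name and its input variables.

    Sorting the items makes the key stable regardless of argument order,
    and including the variable *names* means {"city": x} and {"town": x}
    produce different keys.
    """
    payload = json.dumps(sorted(inputs.items()), separators=(",", ":"))
    digest = hashlib.sha256(f"{service_name}:{payload}".encode()).hexdigest()
    return f"service-response-{digest}"
```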
Prevented type error during check deployment status.
Prevented non-relevant pipeline attribute changes from triggering a pipeline change.
Added missing cache dependency in the Kamailio manager.
Rolled back HTTP route changes that allowed reaching the engine health-checks from the outside world.
Restored accidentally removed alertmanager's azure-production overlay.
Fixed an issue with the gRPC connection.
Set default confidence in the training scene to 0 - 90%.
Activated search function of intent-dropdown in the training scene.
Saved pagination settings of the training scene.
Changed popover area and value of storage variables in the debugger.
When creating a new Phone2 release, values of platform settings are now saved during the creation process.
General UI fixes (typos, paddings, highlighting).
Fixed an issue where parentheses were filtered out when adding utterances via the training scene.
Fixed a bug where the autocomplete popover didn't open when hovering over a variable or tag.
Fixed an issue with the call control block sidebar.
Merged the dialogs-edit permission and the projects-delete permission into a new projects-edit permission.
Changed the behavior of adding the fallback intent to the intent ranking.
Fixed a bug that showed an old response in a new state block.
Fixed a bug where the reprompt of an Alexa response was overwritten by the prompt.
Fixed the clear button of the description inputs.
Fixed the analytics hook switch in the instance modals.
Fixed a bug in a continued listening block where the prompt was played instead of listening to the user.
Fixed an issue in the training scene where utterances were collected multiple times due to case sensitivity.
Fixed a bug where a release could not be edited after importing a new project.
Fixed an issue with the speech recognizer.
Added call forwarding action for Phone 2.
Autocomplete suggestions in bulk edit are now accepted only with "Tab".
A subdialog can now be selected only once in the subdialog block.
TCP is now the default protocol for new pipelines.
Releases can now be marked as production.
Added missing Duckling slots to autocomplete.
Fixed bug with find block button.
Fixed misleading messages.
Fixed closing behavior of sidebars.
Smaller UI fixes.
Improved message for deactivated users.
Removed Drop Phone Calls Modal for platforms other than Phone2.
Fixed wrong password validation messages.
Improved information on deployment failure.
Updated behavior of the end-to-end monitoring tool.
Updated empty templates.
Added possibility to add descriptions to entities.
Standardized utterances (bulk edit/list) to be case-sensitive.
Fixed a bug where the user got stuck in a continued listening block.
General fixes of permissions.
General UI/UX fixes.
Fixed a bug that prevented the registration of phone pipelines.
UI Improvements.
Refined validation of utterances.
Added a button to jump to the start conversation node in the graph.
Updated DE + EN Empty Template.
Added another backup strategy.
Updated E2E Monitoring strategy.
Fixed several UI issues.
Fixed issue with disappearing SIP pipelines.
Fixed issues with autocomplete navigation.
Fixed issue with menu navigation.
Fixed crashing bulk edit modal.
Fixed missing resource requirements in NLU training jobs.
Changes in SIP registration process.
Fixed issue with instance creation.
Fixed issues with activation of instances.
Fixed issues with the training job.
Fixed release edit analytics hook issues.
Fixed overflowing note block.
Fixed broken speech model validation.
Fixed errors in the migration script.
Added the possibility to create multiple pipelines of a release.
Intent confidence is no longer normalized. The change takes effect when the model is trained next.
Added possibility to download Phone 2 Rasa speech models.
Allowed choosing Belgium as a location for Dialogflow-based platform bots.
Split status of a deployment into its components (NLU/Phone-Gateway).
Automatic failover if STT service is down.
Preselected slot type can now be cleared.
Renamed the error in call control to "unsupported platform".
Various UX/UI Improvements.
Updated Kamailio DB on release creation and update.
Fixed issues with continue listening.
Fixed issues with STT hints.
Fixed a visual issue cutting labels of child nodes in the graph too early.
Fixed issues with regex slot value lists.
Fixed an issue with Automatic upload to DF when using an ML slot in the speech assets.
Fixed a UI bug with entrance and exit blocks in sub-dialogs.
Fixed a bug where the input of a continued listening block was used twice.
Increased JSON body limit to 4MB in Parloa API.
Added the environment variable JSON_BODY_LIMIT to override the default JSON body limit of 4MB.
Enabled Prometheus alerts and notifications.
Fixed frontend default port being set to 80.
Applied fallback on all relevant intents separately.
Corrected names for relevant and all intents placeholders.
Ensured fallback does not remove all intents from ranking.
Fixed problem with import of large projects.
Added relevant intents and all intents ranking placeholders.
Fixed open connection in Redis.
Implemented Azure migration-related changes.
Fixed STT hints not being correctly set during a call.
Fixed NLU model loading issue with Rasa.
Addressed an error with Redis.
Table UI improvements.
Only selected built-in intents will be considered by the NLU.
Updated entity creation flows and headers.
Several improvements to the dateTime built-in slot.
Large transactions will be truncated before saving.
Fixed errors not shown if Alexa invocation name validation fails.
Fixed bug in built-in slot selection.
Fixed issue where "Continue listening" does not capture input after continuing to listen (English).
Fixed the Phone Gateway not retrying speech synthesis with stripped SSML tags if the provided SSML was invalid.
Added fields to pipeline settings.
Added user guidance to improve the speech model.
Added the possibility to enter a move-only mode in the graph when the user presses the space bar.
Added the possibility to deactivate a Phone 2 pipeline.
Improved the validation of Alexa invocation names.
Set default variable lifetime to session on creation.
Fixed a bug preventing a Dialogflow deployment when using an ML slot in the speech assets.
Fixed an issue where the pipeline is shown as offline even when it's online.
Fixed an issue in speech model generation if synonyms were used in a list slot.
Fixed an issue with the deletion of variables in a storage block.
Stopped excessive animations in the release scene.
Fixed audio speeding issues in Phone 2 releases.
Rolled back to Mizu Version 8.6.21068.
Fixed looping pipeline reconciliation.
Converted audio to 16kHz before sending to Azure STT.
Allowed setting endpointIds of Azure Speech Recognition models.
Updated Duckling for better slot extraction.
Added a warning if a slot value is in multiple slots.
Shortened timeout for continue listening.
Set the default intent threshold to 30% instead of 0%.
Renamed "To Prev. State" to "To Previous State."
Changed "Save as Version" to a secondary CTA.
Fixed visibility of utterances in the training scene.
Updated Level-2 Navigation headers.
Updated SSML Attributes of Break Tag (either time or predefined value).
Unified behavior of modals.
Warning added when changes to the release might lead to dropped calls.
Fixed issues with the autocomplete input.
Fixed an issue causing a white screen of death (WSOD) when logging into Amazon in Alexa release after changing user.
Fixed an error where a user could not change their own password.
Set re-prompt duration to 10 seconds.
Fixed Mizu None Bug.
Added a fallback strategy if a TTS service fails.
Added the possibility to change the TTS voice of Phone 2 releases.
SSML errors will be handled gracefully in the engine.
Updated some colors of headers.
Updated the stencil sidebar.
Added a border radius to the extended sidebars.
Reduced some paddings in the navigation.
Added a title in the deployment navigation header.
Updated the color usage.
Changed wording in change e-mail form.
Added information why a conversation in training cannot be shown.
Added a warning if the session length is configured to be greater than 20 minutes.
Saved the routing level per project.
Provided better visual feedback if a slot type is not supported by a platform.
Added default attributes to SSML-Voice tags.
Changed the confidence level in training to a slider.
Fixed a bug which returned an Alexa production release twice.
Fixed issues where the menu does not expand back to default when there is enough space.
Fixed an issue that trapped the user's focus in a Monaco editor.
Fixed a bug leading to a crashing frontend when a choice of a service isn't a string.
Improved validation of customerIds in admin panel.
Fixed a problem where the database was OOM killed due to heavy queries.
Set the re-prompt duration for all phone-v2 releases to 10 seconds.
Set the EndSilenceTimeoutMs for all phone-v2 releases to 0.
Fixed the issue that Alexa.Presentation.APL.UserEvent events weren't being responded with re/prompts and cards.
Ensured retry of MongoDB read/write requests on failure.
Faster page load for large utterance lists.
Fixed issue with "to continue listening" block resulting in wrong concatenated text after a NoMatch was returned.
DTMF Intent is hard-wired to DTMF Slot now; creating/deleting will create/delete the slot/intent as well.
Fixed issues with copying of nodes.
Fixed buggy telephone number input.
Fixed rendering issue in release creation modal.
Fixed issue in Phone 2 which made the bot hang up early at the end of a conversation.
Fixed issue when setting service branch names.
For Phone 1, the Dialogflow Service Key for Tenios now requires the permission actions.agent.get.
Added possibility of grouping sessions from different instances.
Changed Rasa configuration to "balanced batch strategy."
Used the last intermediate result if a NoMatch is sent.
A NoMatch with an empty intermediate result will not reset the timer for the reprompt anymore.
Lengthened the time before the Reprompt is played by 3 seconds.
Fixed an issue where analytics were sent to an old URL when using Alexa live releases.
Fixed issue with environment sidebar.
Fixed focusing issues with speech asset sidebar.
Removed DTMF slot from utterances autocomplete.
Fixed role selector in profile scene.
Fixed an issue causing neither further events nor a Reprompt to be triggered after a continued listening block.
Updated transaction query to prevent a crashing API.
Fixed a falsy validation of slot names.
Fixed missing context object in session records.
Added API replicas.
Reestablished the gRPC connection on termination.
Improved sorting of utterances in the training scene from newest to oldest.
Refined validation of speech asset names.
Added environment to all autocomplete fields.
Opened the sub-dialog selector by default if no sub-dialog is selected in a sub-dialog block.
Automatically removed unfinished autocomplete entries.
Made variables in debugger reducible and foldable.
Added duckling slot types to Parloa.
Fixed unset pipeline resource limits.
Fixed missing slot names in JS IntelliSense.
Fixed glitches in PhoneV2.
Fixed saving issues in text snippets.
Fixed overflowing dialog names.
Fixed filters in the training scene.
Fixed a bug with the autocomplete input that prevented getting the focus.
Made filtered component in training scene wrap.
Fixed wrongly rendered warning message in the release scene.
Fixed a bug with unescaped transactions.
Integrated Homer/heplify to debug SIP messages.
Created a backup of the cluster before deploying as part of deployment scripts.
Fixed crashing API.
Addressed the issue of new pipelines not being created due to "Spaces inside the customerId."
Fixed the small dictionary sidebar.
Ensured Parloa does not unescape transactions in the transactions forwarder.
Fixed Dialogflow speech model containing slot name instead of role name.
Fixed sidebar widths.
Fixed deployment of new Parloa version scene.
Fixed setting container resource requirements.
Fixed non-scrolling sidebar.
Added the possibility to use dynamic entities via response merges for services.
PhoneV2 releases can now be edited.
Switched the phone (V1 - V2) platform icons.
Introduced surprises for the first release.
Moved slots to global.
Implemented new smart sidebars.
Aligned UI with guidelines.
Improved keyboard navigation in slot.
Made release updates forceable or scheduled.
Fixed incorrect state in releases.
Addressed issue with overflowing text snippet labels.
Added missing support of intent threshold for phone v2.
Fixed broken undo of responses.
Fixed issues with unreachable releases in debugger.
Addressed an issue with TLS.
Fixed saving of SSML attributes.
Fixed the deployment screen.
Introduced specific responses for the phoneV2 platform.
Automated filling of example slot values in training.
Implemented a uniqueness check for service names.
Fixed an issue where a re-prompt is fired while the user is speaking.
Addressed a wrong display message if the search for services does not deliver results.
Fixed issues with the call control block.
Removed unnecessary L2 - More menu item.
Fixed rendering of placeholders in SSML - audio tag URL field.
Secured MongoDB access.
This major version introduces the new Parloa PhoneV2-Pipeline.
Added Phone V2 Releases.
Enabled the addition of example values to slots.
Standard releases are now used for phone debugging.
Templates are stored in the DB and can be added via UI.
Added new element (refer) to the Call Control Block.
Included timestamp to conversations in the training scene.
Removed icons from the submenu navigation.
Fixed the issue of removing intents.
Fixed an issue where importing subdialogs did not reflect response names properly.
Fixed minor UI inconsistencies.
Fixed an issue where edits to releases were not being saved.
Releases: Passwords, authentication tokens, and API keys are now hidden by default, with an option to change their visibility.
Added an exact matching strategy to the NLU to ensure that utterances trained for a specific intent are always assigned to that intent.
Fixed issue where users were signed out frequently.
Log in using a unique username and password. Parloa employs traditional authentication methods to ensure the security of your data by encrypting your credentials.
To ensure optimal security, your password must meet the following criteria:
Be a minimum of 12 characters long.
Include numbers, letters, and at least one uppercase and one lowercase letter.
Incorporate at least one special character.
After ten failed login attempts, your account locks temporarily to prevent unauthorized access.
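The password criteria listed above can be expressed as a small validation function (an illustrative sketch; the function name and the exact definition of "special character" are assumptions):

```python
import re

def meets_password_policy(pw: str) -> bool:
    """Check the stated criteria: at least 12 characters, with a digit,
    a lowercase letter, an uppercase letter, and a special character
    (treated here as any non-alphanumeric character)."""
    return (
        len(pw) >= 12
        and re.search(r"\d", pw) is not None
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"[^A-Za-z0-9]", pw) is not None
    )
```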
To access your account settings:
From the dropdown menu, click Settings and modify as necessary.
We are excited to announce the first release aimed at refreshing the look and feel of Parloa!
As Parloa and its brand are evolving, so is our interface. We have updated it with a new look and feel. While all functionality remains the same, we have made updates to key colors, typography, and shapes to better align with our brand. This will enhance the overall experience of using Parloa, making it more delightful.
This is an ongoing effort, and we plan to continuously improve and fine-tune the interface. We are keen to hear your feedback and thoughts on how we can further enhance your experience.
The following displays the before and after of the various updated elements:
We hope you enjoy the new design!
The Team tab enables admins to invite new users to their tenant.
A user receives an invitation link by email, and will be prompted to enter their name and choose a password. After creating their login, they will appear in the list with their designated user role.
In the Users tab, click on the user you want to deactivate. A panel on the right side of the page will open, displaying the user's information.
In the Users tab, click on the user you want to edit. A panel on the right side of the page will open, displaying the user's information, allowing you to edit their profile.
The previous role-based system had a single user management permission that encompassed all possible user management activities. With the new access management, we aim to ensure that admin users can grant access rights to their users as needed.
This documentation provides a detailed overview of the changes. Note that the default roles in the system will function the same as before:
Admin role: Will still have access to the new user management permissions.
Editor role: Will have view profile permission by default but not have access to the new user management permissions.
Viewer role: Will have view profile permission by default but not have access to the new user management permissions.
Admin users must ensure that their users have the required access.
The current "Manage User Permissions and Access" toggle in the Roles tab will be replaced with these five individual permissions:
Edit Profile
View Team
Edit Team
Full Access to Roles
Full Access to Invitations
Previously, users with roles that had the "Manage User Permissions and Access" permission disabled still had access to the Team and Roles tabs in the Settings.
After the introduction of the new permissions, these users will only see the Profile tab in the settings. This means they can view their own profile but cannot view other users in the system or have permissions to edit users/add/edit roles, and so on.
In Parloa, different user roles have varying permissions that define their access and actions within the application. Understanding these roles and permissions is essential for efficient management and collaboration.
The following is an overview of the available default roles and their corresponding permissions:
View Projects (enabled by default, cannot be toggled off)
Project Management
Dialog Management
Release Management
Environment Management
Release Debugging
Training Data Management
Production Release Access
User Management
In the Roles tab, click on the role you want to modify under the Name section.
Click the toggle buttons on the right to modify the role's permissions.
Type the new role's name, and press Enter to confirm.
Toggle on the role's permissions as needed.
New role successfully created!
Microsoft SSO enables you to access Parloa using your Microsoft account credentials, eliminating the need for a separate Parloa account.
To utilize Microsoft SSO with Parloa, ensure you have:
An active Microsoft account within Azure Active Directory.
Confirm with your organization’s Azure administrator if SSO for Parloa is enabled as per your organization's Azure settings.
Select Microsoft SSO during the initial sign-up.
Create a password for your Parloa account, providing an alternative login method.
Parloa enables disabling password-based logins, a recommended step for administrator roles to enhance account security.
To maximize your account's security:
Regularly update passwords and monitor account activity.
Microsoft accounts support Multi-Factor Authentication (MFA), also known as Two-Factor Authentication (2FA). Implementing MFA in conjunction with SSO provides an additional security layer.
IDP (Identity Provider) authentication in Parloa uses OpenID Connect, ensuring secure access for authorized personnel.
Enter your email and click Sign In.
After successfully signing in through your IdP, you will be redirected to the Parloa App.
To enable IDP authentication using OpenID Connect (OIDC) with Parloa, please follow these steps:
Name your versions with descriptive, unique titles that include dates and distinguishing characteristics. This practice enhances the visibility of your project's evolution and simplifies tracking advancements and modifications over time.
Leverage the Copy to Draft option to revert to prior configurations safely. This functionality clones your chosen version into the Drafts, while automatically preserving your active draft for later renaming. It's a seamless process that supports flexible version management and uninterrupted progress.
The home to all your dialogs
The Projects window is the home to all your dialogs. It is the primary page that displays all of your projects. A project serves as a container for your dialog and its associated definitions.
To open a project, simply click on it or hover over it to access options such as editing, exporting, or archiving. You can also sort projects by their attributes: Name, Last Modified, or Created, by clicking the corresponding column title.
You can return to the main Projects page at any time by clicking the Parloa icon located in the top-left corner.
The following describes how to manage your projects in Parloa:
Having at least one project language selected is a mandatory requirement.
The previous project permissions are now split further based on the activities users can perform within the system. This documentation provides an overview of these changes to project permissions.
The previous "Project Edit" permission is now split into five distinct permissions: "Project Management", "Dialog Management", "Release Management", "Release Debugging", and "Environment Management".
The previous "Edit Release Settings" permission is now associated with "Production Release Access".
The previous "Debug (Phone 2)" permission is now associated with "Production Release Access".
The Training permission is now "Training Data Management". For accessing the Context in the training screen, ensure that the user has "Production Release Access".
The Export/Import Subdialog functionality, which was previously available to all roles, now requires the "Dialog Management" permission.
Note that the default roles in the system will function the same as before:
Admin role: Will still have access to all the new permissions.
Editor role: Will have access to all the new permissions except for user management.
Viewer role: Will not have access to any of the project permissions and user management permissions (they will be able to view projects by default).
For all custom roles, admins must verify that their users have the required access.
Below are screenshots of the old and new permissions:
Project Management. Grants full control over project creation, modification, export, import, and deletion.
Dialog Management. Full control over dialog management, including manipulation of text snippets, intents, intent assignments, slots, dictionary entries, variables, services, graph editing, language addition, and version management.
Release Management. Permission to manage and configure non-production releases, including creation, updates, activation/deactivation, and NLU training.
Production Release Access. Permission to access, configure, and debug production releases. This should be used in conjunction with Release Management and Release Debugging permissions. Also includes the ability to view context in the training scene.
Environment Management. Permission to manage environments and their variables, including creation, editing, and deletion.
Training Data Management. Permission to add or ignore collected training data.
Release Debugging. Permission to debug any platform, excluding production releases.
The permissions assigned to a role can now be restricted based on the specific activities the user performs within the system. For example,
Consider a custom role called 'project manager'. If you enable the 'Project Management' permission for this role, the user can add, edit, import, export, and delete projects, and nothing else. While this user can still view the projects they are invited to, they cannot perform any write operations on the project or its associated elements. Note that importing a subdialog into a project requires the 'Dialog Management' permission.
Similarly, for a custom role like 'debugger', providing 'Production Release Access' will allow these users to perform troubleshooting tasks on production releases without granting them permissions to edit dialogs, or any other project elements.
The Profile tab enables you to edit your personal account information:
Custom SIP headers are now supported in the Call Control block for both call forwarding and SIP REFER!
Fixes the bug where the Debugger jumps to another position when navigating to a block.
The transaction cards will be highlighted in the debugger if they have any errors in them.
Fixes the bug where adding a customer to a user with an invitation link was not working.
New phone number input for Phone V2 and Phone V1 releases!
New Releases Scene design!
Fixes the bug where customers were frequently forced to log in again.
Fixes the bug where text snippets with the same name could be created.
Fixes the bug where copying & pasting blocks in newly created subdialogs gives the dialog not found error.
Click the arrow next to your name.
Under the Users tab, you'll find a list of all users in your tenant, along with their roles and status.
The Invitations tab displays a list of pending user invitations, including the invitee's designated role.
Click the button.
Click the button. The following displays:
Enter the new user's email address, and select their Role from the dropdown menu:
Click the button.
Click the button.
In the Roles tab, click the button. The following displays:
Follow best practices as outlined in .
On Parloa's sign-in page, select Single Sign-On (SSO). A window prompts you to enter your email address.
Once you have gathered the required information and configured your OIDC account, email the details to Parloa at or reach out to your Customer Success Manager (CSM) for further assistance in setting up IDP authentication with your Parloa account.
Each Dialog version in Parloa is an iteration of your project with its own unique configuration, dataset, and performance. As you craft and refine your dialogue, you work in 'Draft' mode. To safeguard your progress or solidify a stable iteration, click the button. This allows you to work on different versions of the same project simultaneously without affecting each other.
You can define your Version for your under Deployments when creating a new release or when editing an existing release by clicking on .
Click the button, displayed at the top-right of your screen. The following displays:
Note: The offers a streamlined approach to bot development by providing a ready-to-use architecture.
Click or press Enter.
Click the button that is displayed.
Click the button, shown below:
Drag and drop an exported zip file of the project into the pop-up window:
Click the button.
A newly imported project always opens in its set starting language. To view the rest of the associated languages, click the language dropdown menu. For example:
Click the button that is displayed.
To view your projects, go to the homepage by clicking on the icon, displayed on the top-left of your screen. Your projects list is displayed under the Projects field.
Search for a project by typing in the search bar, displayed on the top-right of your screen.
Hover over a project row, and click the button. A pop-up window displays.
Type the project's name to confirm you want to delete the project:
Click .
For more insight, please check the !
Now Available: for Project Edits.
Introducing the for PII Data Anonymization on the Training Screen.
Now Available: for User Management.
A type of cryptographic security feature that uses a secret key and a hash function to ensure data integrity and authenticity (such as ).
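The idea can be illustrated with Python's standard `hmac` module; HMAC-SHA256 is an illustrative choice of hash function here, not necessarily the one Parloa uses:

```python
import hashlib
import hmac

secret_key = b"shared-secret"        # known only to sender and receiver
message = b"payload to protect"

# Sender computes a tag over the message with the secret key.
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, received_tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

assert verify(secret_key, message, tag)          # authentic message
assert not verify(secret_key, b"tampered", tag)  # tampering is detected
```

Because only holders of the secret key can produce a valid tag, a matching tag proves both integrity and authenticity of the message.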
NLU refers to a model that deduces semantic meaning from texts by applying a set of rules it has 'learned.' First, we employ Natural Language Processing (NLP) techniques to dissect and prepare the text. These processed texts are then utilized to train a model; this model, the NLU, is subsequently tasked with interpreting entirely new texts and determining their underlying significance.
In Parloa, managing user input is crucial for creating an engaging and responsive conversational AI experience. This process involves interpreting the caller's input and determining the appropriate response. Initially, the system matches the user's input with predefined intents, guiding the conversation.
However, callers might provide unexpected or unaccounted input, such as off-topic questions or unclear statements. In these cases, the behavior is represented as the 'else' case in the conversational flowchart.
Handling these instances gracefully maintains a smooth dialogue and assists the caller. Parloa offers mechanisms for addressing unexpected user input both locally, within a specific conversation context, and globally, across the entire project. This approach ensures conversational AI can navigate unforeseen inputs, keeping interactions on track and users engaged.
When a project lacks an assigned language, you'll be prompted to choose a starting language upon project launch.
To set a project's language:
Open an existing project.
Click on the Starting Language dropdown menu.
Select your desired language.
Click Add Language button.
You can add an additional language to your project by following the steps below:
Click on the language dropdown menu:
Choose the language you want to add to your project (for instance, English (en-US)).
Click the Add button to incorporate the selected language into your project.
The project's graph structure remains consistent across languages, indicating that the overarching structure does not vary with different languages. However, it is crucial to understand the consequences of elements that are not translated:
Untranslated Response – If a response is not translated, it will result in no output during a conversation.
Untranslated Text Snippet – If a text snippet is not translated, it can lead to a broken output, potentially disrupting user interactions.
Untranslated Intent/Utterances – If intents or utterances are not translated, they compromise the speech model, leading to inefficient or erroneous recognition.
However, the Response content must be translated:
Changing your bot's language impacts the Intents and Slots sections.
When you change the language of the dialog, the Name field remains consistent across all languages.
The Utterances field, on the other hand, will be empty and will require you to fill it with translated content.
When you change the language of the dialog, the Name field remains consistent across all languages.
The Values field, on the other hand, will be empty and will require you to fill it with translated content.
Parloa: "Hello, welcome to our room booking system. Would you like to book Batman, Hellboy, or Wolverine?"
User: "Superman."
Parloa: "Sorry, we only have actual superhero rooms available. Batman, Hellboy, or Wolverine?"
If the user's input doesn't match any options (Batman, Hellboy, Wolverine), the else case activates, triggering the unhandledRoom response locally. This approach allows for specific responses unique to the current state.
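The local else behavior in this example can be sketched as follows. The matching logic and function name are illustrative; only the room names and the unhandledRoom response come from the example:

```python
# Minimal sketch of local 'else' handling for the room-booking state.
ROOMS = {"batman", "hellboy", "wolverine"}

def handle_room_choice(user_input: str) -> str:
    choice = user_input.strip().lower()
    if choice in ROOMS:
        return f"Great, booking the {choice.title()} room."
    # 'else' case: the unhandledRoom response plays locally, in context.
    return ("Sorry, we only have actual superhero rooms available. "
            "Batman, Hellboy, or Wolverine?")

assert handle_room_choice("Hellboy").startswith("Great")
assert handle_room_choice("Superman").startswith("Sorry")
```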
Parloa supports real-time caller interruptions, enhancing interaction fluidity:
DTMF (Dual-Tone Multi-Frequency) Inputs: Users can interrupt the bot using DTMF inputs, essential for correcting misunderstandings or navigating menus.
Voice-Based Interruptions: Configurable to allow interruptions during the bot's speech within a set timeframe. By default, this is disabled (0 seconds), requiring activation for use.
These features ensure enhanced control over conversations, offering a smoother user experience.
Parloa: Executes a prompt based on the previously active state, using a context variable - "Sorry, I did not understand you."
To customize the handling of else cases and global intents in specific states, activate the Settings switch in the Intents tab. This action prioritizes local handling over global settings:
If local else cases are defined without activating the toggle, the dialogue reverts to the Start Conversation block's entry point.
Activating this feature allows the dialogue to continue to the next node, regardless of user input, bypassing global else cases.
When callers respond with silence, Parloa allows for a tailored approach to prompt them again, ensuring the conversation continues smoothly. This silent response can be managed effectively by configuring a reprompt strategy.
A reprompt is designed to gently nudge the user back into the conversation without repeating the entire previous message. Instead, it succinctly restates the essential question or information needed to proceed. For instance:
Parloa: "Hello, welcome to our room booking system. Would you like to book Batman, Hellboy, or Wolverine?" User: silence Parloa: "Would you like to book Batman, Hellboy, or Wolverine?"
Timing: If there is no user input, Parloa waits for a default period of 10 seconds before triggering a reprompt. This duration ensures users have ample time to respond but also keeps the conversation flowing.
Repetition: By default, if continuous silence persists, the reprompt will be repeated up to 10 times before the system decides to end the call. This repetition strategy is designed to balance persistence with user experience, preventing frustration from either party.
If a specific reprompt is not defined, Parloa will automatically reuse the last prompt. This default behavior ensures the conversation attempts to continue even in the absence of custom settings. However, customizing the reprompt experience is highly recommended to tailor the interaction to the specific context of your conversational AI application.
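The timing and repetition rules above can be sketched as a simple loop. The 10-reprompt limit follows the documented default; the function and its transcript format are illustrative, not a Parloa API:

```python
def reprompt_loop(responses, reprompt, max_reprompts=10):
    """Return the bot's turns given caller turns (None = silence).

    After each silence (~10 s by default), the bot reprompts, up to
    max_reprompts times; further silence ends the call.
    """
    transcript = []
    silences = 0
    for caller_turn in responses:
        if caller_turn is None:
            silences += 1
            if silences > max_reprompts:
                transcript.append("<end call>")
                break
            transcript.append(reprompt)
        else:
            transcript.append(f"Handling: {caller_turn}")
            break
    return transcript

# Two silences, then an answer: two reprompts, then normal handling.
t = reprompt_loop([None, None, "Batman"], "Batman, Hellboy, or Wolverine?")
assert t == ["Batman, Hellboy, or Wolverine?",
             "Batman, Hellboy, or Wolverine?",
             "Handling: Batman"]

# Eleven consecutive silences exhaust the limit and the call ends.
t = reprompt_loop([None] * 11, "Batman, Hellboy, or Wolverine?")
assert t[-1] == "<end call>"
```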
Information on the range of languages supported by Parloa and how to implement multi-lingual support in your projects.
The following outlines the language support available on the following platforms:
Phone V2
Textchat V2
Parloa provides different levels of language support, tailored to each platform:
Enhanced Support – This level represents Parloa's optimization for specific languages, ensuring greater accuracy and enhanced performance.
Baseline Support – This is the foundational language support that relies on the Azure Speech service. It offers basic language recognition capabilities but may not include all the optimizations present in the Enhanced Support.
Baseline Support – The Textchat platform utilizes the foundational Baseline Support for dependable language recognition.
All blocks within the Conversation Logic category share a common component – Responses. The following documentation details the Responses tab, applicable to all blocks in the Conversation Logic category.
The currently selected response. It contains the messages that will be communicated to the caller.
The currently selected response element, for example SSML, which makes the chatbot sound more human. The available response elements include:
SSML (Speech Synthesis Markup Language): Enhances the chatbot's vocal responses for a more natural sound.
Text: Allows the chatbot to send text responses.
Cross-Platform:
SSML & Text: Enables the chatbot to respond with both voice and text.
JSON: Transforms the cross-platform response into a JSON element. Note that this conversion cannot be undone.
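As an illustration of the SSML response element, a prompt might look like the following; the text and prosody values are hypothetical:

```xml
<speak>
  Hello, and welcome to BookEase!
  <break time="300ms"/>
  <prosody rate="95%">How may I help you today?</prosody>
</speak>
```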
The + Add prompt button enables you to insert additional SSML prompts. The chatbot will select one at random if multiple prompts are available.
The Generate More Prompts button enables you to create multiple AI-generated prompt suggestions.
Define what the chatbot should say to the caller if there is no response within a designated timeout period (typically 5 seconds).
This is a predefined response that the chatbot uses when it cannot understand the caller’s input.
The + button enables you to add more responses.
Start Conversation block – When a conversation begins, typically through an incoming call, the LaunchRequest intent is triggered to deliver your welcome message and set the stage for the interaction.
State block – The Intents tab in the State block is used to create and manage intents that determine how a conversation progresses. Each intent can lead to different paths or outcomes in a conversation.
Choose your preferred method for recognizing intents:
Utterances: Leverage predefined phrases to identify user intentions using Natural Language Understanding (NLU).
Descriptions: Employ a Large Language Model (LLM) for enhanced intent detection that interprets descriptive text along with NLU.
Speech to Text (STT): This field allows you to select a Speech-to-Text model tailored for specific input types, such as alphanumeric input.
The available specialized models are:
The Local no-match logic determines how your chatbot handles unrecognized intents, either by referring to a local else intent or deferring to a global fallback.
The else intent serves as the fallback, activated when the caller's input doesn't match any intent in the LaunchRequest intent's list.
Profile
View and edit your account details in your Parloa profile.
Click on the to reveal additional language selection options. You will see a list of available languages:
In the and block and all blocks incorporating the Response element structure—including and —the Response and Intent elements remain the same.
Certain blocks, such as and speech, are not designed for translation. For these cases, you can use a instead.
The Name field remains the same, but the Text Pool field can be translated:
Parloa's , , and blocks include an else case by default to manage unexpected inputs. Consider the example below:
For a flexible response to unexpected inputs across all states, combine a block with a block. This method enables a generic, yet context-sensitive response:
Responses enable you to define what the communicates to the caller.
The prompt plays the text defined in the Authenticated, followed by the static text.
The and blocks within the Conversation Logic category share a common component – Intents. The following documentation details the Intents tab.
Intents are user expressions or phrases that prompt specific responses from your . They can be as straightforward as "Order a pizza" to initiate a pizza ordering dialog. Intents bridge the gap between user requests and your chatbot's ability to understand and respond appropriately.
The specialized model Given Name (Hints) should only be used in conjunction with functionality for selected conversational turns. It is not recommended to use this model throughout the entire conversation.
English (United States) – en-US
German (Germany) – de-DE
Afrikaans (South Africa) – af-ZA
Arabic (UAE) – ar-AE
Arabic (Bahrain) – ar-BH
Arabic (Algeria) – ar-DZ
Arabic (Egypt) – ar-EG
Arabic (Iraq) – ar-IQ
Arabic (Jordan) – ar-JO
Arabic (Kuwait) – ar-KW
Arabic (Lebanon) – ar-LB
Arabic (Libya) – ar-LY
Arabic (Morocco) – ar-MA
Arabic (Oman) – ar-OM
Arabic (Qatar) – ar-QA
Arabic (Saudi Arabia) – ar-SA
Arabic (Syria) – ar-SY
Arabic (Tunisia) – ar-TN
Arabic (Yemen) – ar-YE
Bulgarian (Bulgaria) – bg-BG
Bengali (India) – bn-IN
Catalan (Spain) – ca-ES
Chinese Cantonese (Hong Kong) – zh-HK
Chinese Mandarin (China) – zh-CN
Chinese Mandarin (Taiwan) – zh-TW
Czech (Czechia) – cs-CZ
Danish (Denmark) – da-DK
German (Austria) – de-AT
German (Switzerland) – de-CH
Greek (Greece) – el-GR
English (Australia) – en-AU
English (Canada) – en-CA
English (United Kingdom) – en-GB
English (Hong Kong) – en-HK
English (Ireland) – en-IE
English (India) – en-IN
English (Kenya) – en-KE
English (Nigeria) – en-NG
English (New Zealand) – en-NZ
English (Philippines) – en-PH
English (Singapore) – en-SG
English (Tanzania) – en-TZ
English (South Africa) – en-ZA
Spanish (Argentina) – es-AR
Spanish (Bolivia) – es-BO
Spanish (Chile) – es-CL
Spanish (Colombia) – es-CO
Spanish (Costa Rica) – es-CR
Spanish (Cuba) – es-CU
Spanish (Dominican Republic) – es-DO
Spanish (Ecuador) – es-EC
Spanish (Spain) – es-ES
Spanish (Equatorial Guinea) – es-GQ
Spanish (Guatemala) – es-GT
Spanish (Honduras) – es-HN
Spanish (Mexico) – es-MX
Spanish (Nicaragua) – es-NI
Spanish (Panama) – es-PA
Spanish (Peru) – es-PE
Spanish (Puerto Rico) – es-PR
Spanish (Paraguay) – es-PY
Spanish (El Salvador) – es-SV
Spanish (United States) – es-US
Spanish (Uruguay) – es-UY
Spanish (Venezuela) – es-VE
Estonian (Estonia) – et-EE
Persian (Iran) – fa-IR
Finnish (Finland) – fi-FI
French (Belgium) – fr-BE
French (Canada) – fr-CA
French (Switzerland) – fr-CH
French (France) – fr-FR
Irish (Ireland) – ga-IE
Hebrew (Israel) – he-IL
Hindi (India) – hi-IN
Croatian (Croatia) – hr-HR
Hungarian (Hungary) – hu-HU
Indonesian (Indonesia) – id-ID
Icelandic (Iceland) – is-IS
Italian (Italy) – it-IT
Japanese (Japan) – ja-JP
Georgian (Georgia) – ka-GE
Khmer (Cambodia) – km-KH
Korean (Korea) – ko-KR
Malayalam (India) – ml-IN
Mongolian (Mongolia) – mn-MN
Burmese (Myanmar) – my-MM
Norwegian-Bokmål (Norway) – nb-NO
Nepali (Nepal) – ne-NP
Dutch (Belgium) – nl-BE
Dutch (Netherlands) – nl-NL
Polish (Poland) – pl-PL
Portuguese (Brazil) – pt-BR
Portuguese (Portugal) – pt-PT
Romanian (Romania) – ro-RO
Russian (Russia) – ru-RU
Slovak (Slovakia) – sk-SK
Swedish (Sweden) – sv-SE
Swahili (Kenya) – sw-KE
Swahili (Tanzania) – sw-TZ
Tamil (India) – ta-IN
Telugu (India) – te-IN
Thai (Thailand) – th-TH
Turkish (Turkey) – tr-TR
Vietnamese (Vietnam) – vi-VN
Afrikaans (South Africa) – af-ZA
Arabic (Algeria) – ar-DZ
Arabic (Bahrain) – ar-BH
Arabic (Egypt) – ar-EG
Arabic (Iraq) – ar-IQ
Arabic (Israel) – ar-IL
Arabic (Jordan) – ar-JO
Arabic (Kuwait) – ar-KW
Arabic (Lebanon) – ar-LB
Arabic (Libya) – ar-LY
Arabic (Morocco) – ar-MA
Arabic (Oman) – ar-OM
Arabic (Palestinian Territories) – ar-PS
Arabic (Qatar) – ar-QA
Arabic (Saudi Arabia) – ar-SA
Arabic (Syria) – ar-SY
Arabic (Tunisia) – ar-TN
Arabic (UAE) – ar-AE
Arabic (Yemen) – ar-YE
Bengali (India) – bn-IN
Bulgarian (Bulgaria) – bg-BG
Burmese (Myanmar) – my-MM
Catalan (Spain) – ca-ES
Chinese Simplified (China) – zh-CN
Chinese Traditional (Hong Kong) – zh-HK
Chinese Traditional (Taiwan) – zh-TW
Croatian (Croatia) – hr-HR
Czech (Czechia) – cs-CZ
Danish (Denmark) – da-DK
Dutch (Belgium) – nl-BE
Dutch (Netherlands) – nl-NL
English (Australia) – en-AU
English (Canada) – en-CA
English (Ghana) – en-GH
English (Hong Kong) – en-HK
English (India) – en-IN
English (Ireland) – en-IE
English (Kenya) – en-KE
English (New Zealand) – en-NZ
English (Nigeria) – en-NG
English (Philippines) – en-PH
English (Singapore) – en-SG
English (South Africa) – en-ZA
English (Tanzania) – en-TZ
English (United Kingdom) – en-GB
Estonian (Estonia) – et-EE
Finnish (Finland) – fi-FI
French (Belgium) – fr-BE
French (Canada) – fr-CA
French (France) – fr-FR
French (Switzerland) – fr-CH
Georgian (Georgia) – ka-GE
German (Austria) – de-AT
German (Switzerland) – de-CH
Greek (Greece) – el-GR
Hebrew (Israel) – he-IL
Hindi (India) – hi-IN
Hungarian (Hungary) – hu-HU
Icelandic (Iceland) – is-IS
Indonesian (Indonesia) – id-ID
Irish (Ireland) – ga-IE
Italian (Italy) – it-IT
Italian (Switzerland) – it-CH
Japanese (Japan) – ja-JP
Khmer (Cambodia) – km-KH
Korean (Korea) – ko-KR
Malayalam (India) – ml-IN
Mongolian (Mongolia) – mn-MN
Nepali (Nepal) – ne-NP
Norwegian-Bokmål (Norway) – nb-NO
Persian (Iran) – fa-IR
Polish (Poland) – pl-PL
Portuguese (Brazil) – pt-BR
Portuguese (Portugal) – pt-PT
Romanian (Romania) – ro-RO
Russian (Russia) – ru-RU
Slovak (Slovakia) – sk-SK
Spanish (Argentina) – es-AR
Spanish (Bolivia) – es-BO
Spanish (Chile) – es-CL
Spanish (Colombia) – es-CO
Spanish (Costa Rica) – es-CR
Spanish (Cuba) – es-CU
Spanish (Dominican Republic) – es-DO
Spanish (Ecuador) – es-EC
Spanish (El Salvador) – es-SV
Spanish (Equatorial Guinea) – es-GQ
Spanish (Guatemala) – es-GT
Spanish (Honduras) – es-HN
Spanish (Mexico) – es-MX
Spanish (Nicaragua) – es-NI
Spanish (Panama) – es-PA
Spanish (Paraguay) – es-PY
Spanish (Peru) – es-PE
Spanish (Puerto Rico) – es-PR
Spanish (Spain) – es-ES
Spanish (United States) – es-US
Spanish (Uruguay) – es-UY
Spanish (Venezuela) – es-VE
Swahili (Kenya) – sw-KE
Swahili (Tanzania) – sw-TZ
Swedish (Sweden) – sv-SE
Tamil (India) – ta-IN
Telugu (India) – te-IN
Thai (Thailand) – th-TH
Turkish (Turkey) – tr-TR
Vietnamese (Vietnam) – vi-VN
Specialized model | English (en-US) | German (de-DE)
Alphanumeric | ✅ | ✅
Date | ✅ | ❌
License Plate | ✅ | ✅
Number | ✅ | ✅
Ordinal | ✅ | ❌
Spelling | ✅ | ✅
Given name (hints) | ✅ | ❌
Navigating Unexpected Inputs to Maintain Conversation Continuity
The To Previous State block enables you to manage unexpected user inputs effectively. By allowing the conversation to revert to the last interaction, it offers users a chance to rephrase or provide new input without disrupting the flow of the dialog.
State Restoration: When invoked, this block reverts the conversation to the previous state. This is particularly useful when the caller has provided an input that the system cannot understand or categorize within the expected responses.
User-Centric Approach: The block is equipped to prompt the caller to rephrase or provide new input, maintaining a user-centric conversation without causing frustration or confusion.
The block contains a list of response pathways, such as "Repeat Prompt", "Bad Mood", and "Wait", which can be configured to handle various scenarios that necessitate reverting to a previous state.
Configure the specific responses tailored to different scenarios that might require the conversation to roll back to a previous state.
Integrate this block within your conversational flow where there is potential for unclear or unexpected responses.
Ensure that transitions into and out of the "To Previous State" block are smooth and logical within the conversation flow.
Your Dialog's Global Behavior
on error – Activated when an error is detected in the conversation flow, guiding you towards a resolution.
else – Serves as a catch-all for any utterances that do not match other specified intents, ensuring that you or the caller remain engaged with meaningful responses.
The global 'else' intent provides a general response, whereas a local 'else' offers context-specific interaction. Below is an example of a local 'else' intent within a State block:
Your Dialog's Exit Point(s)
The End Conversation block serves as the definitive point for the conclusion of a dialog. Its purpose is to facilitate a natural and logical closure to user interactions, ensuring a pleasant and professional end to the session. The capability to have multiple "End Conversation" blocks within a single dialog allows for various exit points, accommodating different conversational paths.
Provides a structured and natural endpoint to the dialog.
Enables multiple exit points to handle various conversation outcomes.
Ensures a consistent and professional end-user experience.
This represents the currently selected response. It contains the closing messages that will be communicated to the caller.
Determine how the chatbot will communicate with you –
SSML (Speech Synthesis Markup Language) – Enhances the chatbot's voice to sound more natural when speaking to you.
Text – Enables the chatbot to reply to you with text.
Cross Platform – Allows the chatbot to respond to you in both voice and text, suitable for various platforms.
JSON (JavaScript Object Notation) – Transforms the cross-platform response into JSON format, efficient for data representation. Please note that once converted to JSON, this cannot be undone.
You can specify how the chatbot should react if it does not receive a response from you within a 10-second window.
Your Dialog's Entry Point
The Start Conversation block marks the start of your dialog. This default block is automatically included in each new dialog as a fixed entry point for interactions and, importantly, cannot be copied or deleted.
The Start Conversation block is structured into two main sections:
Responses: Contains predefined responses that the system can use to greet callers or handle initial queries.
Intents: Outlines the various user intents that can be recognized at the start of the conversation, determining the direction the dialog will take.
Within the 'Start Conversation' block, the system is prepared to handle:
Standard Greetings: Responses like "Welcome" are set to initiate the conversation with a friendly opening message.
DTMF Responses: For touch-tone navigation, the system can recognize DTMF tones and provide the appropriate response.
Error Handling: Responses like "Excuse me?" are used when the caller's intent is not clearly understood.
Affirmative and Negative Responses: The system can recognize and differentiate between affirmative responses like "Yes" or negative ones like "No".
Each response is configurable with elements such as:
JSON Payload: For more complex interactions, you can define responses with JSON structures to control the conversation flow dynamically.
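For illustration, a JSON payload attached to a response might look like the following; all field names and values here are hypothetical, not a documented Parloa schema:

```json
{
  "type": "greeting",
  "channel": "phone",
  "data": {
    "suppressBargeIn": false,
    "nextState": "mainMenu"
  }
}
```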
The system uses the provided utterances to detect caller intents. You have the ability to:
Add New Intents: By selecting the '+' sign, you can define new intents and associate them with specific responses.
Edit Existing Intents: Click on an intent to modify its utterances or linked actions.
Local Logic and Global Intents: The system can be configured to handle 'else' intents either locally or globally. Local handling offers context-specific interaction, while global handling provides a standard response across various contexts.
A standard 'Welcome' response is pre-configured in the Start Conversation block, linked to the LaunchRequest intent. For example:
Parloa: "Hello, and welcome to BookEase! How may I help you?"
If an imported subdialog includes a service whose name already exists in your project, the imported service is automatically renamed to distinguish it from the existing one, based on the name you provided.
Imported services include branches, service input, and output but exclude technical configuration. For example:
If you select languages that are not currently present in the target project, they will be automatically added to your project.
Introductory information for new users to understand the foundational elements of Parloa.
Crafting Responses and Setting Up Intent-Driven User Interactions
The State block is a fundamental unit of dialog within Parloa. It is designed to handle specific topics or user queries, such as asking for an address. State blocks interpret user input, define responses, and manage the flow of conversation based on the user's intent.
Response Definition: Define the chatbot’s verbal and text responses to users.
Intent Recognition: Configure specific user intents that the chatbot should recognize at different points in the conversation.
Use the Responses tab to input the chatbot's verbal and textual communication. Start by crafting a primary prompt that is clear and direct, effectively asking for the user's address.
Single and Multiple Prompts: If only one prompt is placed above the dividing line and no prompts are placed below it, this prompt will be used by default in interactions. However, if multiple prompts are placed below the line, they are activated sequentially in the same call (or in different calls from the same caller) when triggered multiple times, ensuring a dynamic conversation flow.
SSML Integration: Incorporate Speech Synthesis Markup Language (SSML) to make the bot sound more natural and engaging. This can significantly improve the user's experience by providing a voice that is more relatable and easier to understand.
Handling Varied Scenarios: Add additional prompts to manage different scenarios, such as lack of user input or misunderstandings. This ensures the chatbot can handle unexpected user responses gracefully.
Response Priority: When both cross-platform and platform-specific responses are used, the platform-specific response takes precedence, allowing for a tailored user experience depending on the platform used.
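The single/multiple prompt behavior described above can be sketched as a small selection routine. The class and its cycling policy are illustrative assumptions, not Parloa's internal logic:

```python
# Sketch of prompt selection: the prompt above the dividing line plays on the
# first trigger; prompts below the line then play in sequence on repeated
# triggers within the same call (cycling when exhausted).
class PromptPool:
    def __init__(self, primary, followups=()):
        self.primary = primary
        self.followups = list(followups)
        self.triggers = 0

    def next_prompt(self):
        self.triggers += 1
        if self.triggers == 1 or not self.followups:
            return self.primary
        index = (self.triggers - 2) % len(self.followups)
        return self.followups[index]

pool = PromptPool("What is your address?",
                  ["Please tell me your street and house number.",
                   "For example: Main Street 12."])
assert pool.next_prompt() == "What is your address?"
assert pool.next_prompt() == "Please tell me your street and house number."
assert pool.next_prompt() == "For example: Main Street 12."
```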
In the Intents tab, define the intents the chatbot should recognize. For example, in an "Ask for address" State block, you can include intents such as tellAddress, provideAddress, and fallback intents like else or misunderstood.
Each intent should account for a variety of possible user utterances to ensure comprehensive recognition. For example:
"yes" intent: Accept variations like "yeah," "yep," and "yes."
"no" intent: Accept responses such as "no" and "nope."
The Graph tab in the dialog window provides a graphical representation of a dialog structure, composed of blocks and the connections between them. It is a flowchart that outlines a customer service dialog, and is created by dragging and dropping blocks onto the graph, configuring them, and connecting them in a way that reflects the logic of the conversation.
Parloa currently provides the following types of blocks. Each of these blocks has an input and an output, which can be connected to one or more other blocks based on the logical design of your dialog:
One of the key features is the previousPrompt variable, which lets the AI repeat the last prompt, allowing the caller to understand the context and respond appropriately.
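The mechanism can be sketched as a context variable that is updated on every bot turn; the function names here are illustrative, only the previousPrompt variable comes from the documentation:

```python
# Illustrative sketch: each prompt the bot speaks is stored in the
# conversation context so a 'repeat' intent can replay it later.
context = {"previousPrompt": None}

def speak(prompt: str) -> str:
    context["previousPrompt"] = prompt  # remember the last spoken prompt
    return prompt

def handle_repeat() -> str:
    return context["previousPrompt"] or "Sorry, there is nothing to repeat."

speak("Would you like to book Batman, Hellboy, or Wolverine?")
assert handle_repeat() == "Would you like to book Batman, Hellboy, or Wolverine?"
```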
The Global block handles inputs from callers that are applicable at any point in the conversation, irrespective of the current state of the dialog. It is composed of several predefined that can be triggered by . Each intent corresponds to a specific action or response by the conversational AI. For example:
When the same intent is defined both locally (in a specific block) and globally, the local definition is prioritized. This prioritization ensures that responses are contextually appropriate, enhancing interaction relevance.
The prompt begins with a greet
, followed by company-specific information embedded within static spoken text. This provides additional context or information to the user.
Complex Requests: For more nuanced requests, such as "ChangeAddress" or "ReportDamage", the system can discern and route the caller to the appropriate service path (which are ).
SSML & Text: Customize the spoken response using Speech Synthesis Markup Language () for natural expression.
The bot then awaits your response to continue the conversation, either transitioning to a or concluding the interaction.
The Start Conversation block functions identically to a State block in all other respects. For further details, refer to the block documentation.
Subdialogs can encapsulate a broad range of elements, including , , , and more. By exporting and importing subdialogs, you can easily reuse established dialog structures, saving time and ensuring consistency across projects.
Autocomplete Features: Utilize our features within the SSML section to streamline your response creation process. These features can help generate high-quality responses efficiently.
The no_input intent can be enabled or disabled for specific State and blocks, providing flexibility in how silent scenarios are handled across the dialog.
Click the icon, displayed to the right of the block name.
Enhancing Conversational Continuity
The Continue Listening (CL) block is designed to maintain the natural rhythm of your conversation. Its primary function is to determine when a caller has finished speaking and to remain attentive for any additional speech. This ensures that the conversation can continue seamlessly if the caller pauses mid-sentence or decides to add more to their statement.
End-of-Speech Detection: The CL block utilizes an advanced detection system that identifies when the caller has stopped talking.
Response to Additional Speech: If the caller resumes speaking, the system acknowledges this continuation and maintains the conversation without any disruption.
The following example diagram illustrates the architecture of the CL block's connection to the rest of the graph:
A pane on the right will display, enabling you to select the slot type you wish to add.
Define a Regex expression by clicking the + button.
Create the input condition using Regex syntax.
The regular expression (\d\w)+ is designed to capture character sequences comprising a digit followed by a word character. The syntax of this regex is as follows:
\d matches any digit (0-9).
\w matches any word character, including letters, digits, and underscores.
+ asserts that the preceding element should be matched one or more times.
Implement a condition using the regex, for instance, !slots.tellSlot || slots.tellSlot.length < 11, to check if the user input matches certain criteria.
Position a 'Continue Listening' block after the condition block to ensure the system stays in listening mode for any additional user speech.
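The regex and the length condition above can be checked as follows. The original condition is written in JavaScript-like syntax; this Python mirror (and the slot name tellSlot taken from the example) is illustrative only:

```python
import re

# The regex from the example: one or more digit+word-character pairs.
PATTERN = re.compile(r"(\d\w)+")

def needs_more_input(slots: dict) -> bool:
    """Python mirror of: !slots.tellSlot || slots.tellSlot.length < 11"""
    value = slots.get("tellSlot")
    return not value or len(value) < 11

assert PATTERN.fullmatch("1a2b3c") is not None   # digit+word pairs match
assert PATTERN.fullmatch("abc") is None          # no leading digit: no match
assert needs_more_input({})                      # slot missing: keep listening
assert needs_more_input({"tellSlot": "1a2b3c"})  # too short (< 11 chars)
assert not needs_more_input({"tellSlot": "1a2b3c4d5e6f"})  # long enough
```

When needs_more_input is true, the flow routes into the 'Continue Listening' block so the caller can finish dictating the sequence.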
Use the 'Continue Listening' block judiciously to avoid unnecessarily extending the conversation, which could potentially lead to caller frustration.
Enhancing Caller Engagement during Processing Time
The Intermediate Response block is designed to enhance caller engagement during processing delays, particularly when interfacing with third-party APIs. It fills silent gaps during call processing, ensuring callers are consistently informed about the status of their requests.
This block acts as a placeholder in the dialog flow, providing brief, situational messages to inform the caller that their request is being processed. By default, Intermediate Responses are always played to ensure continuous communication and compliance with requirements like GDPR data privacy notices. Intermediate responses can only be used after the LaunchRequest has been processed successfully.
Responses are organized within a system queue and will play in the order they are stacked in the bot. The first response ready is the first played. If the conditional Play logic toggle is on, intermediate responses may be skipped if the final response is available. However, if the final response is not available, the intermediate responses will still be played.
There are two main types of responses:
Intermediate Response (IR): Non-final messages that are queued to play during the call's active processing time. These are always played by default to maintain engagement and meet compliance standards.
Final Response (FR): The definitive answer or solution provided to the caller, marking the end of the interaction.
When incorporating Intermediate Response blocks into dialog flows, it's essential to consider their new default behavior and the Conditional Play option:
Typical Scenario: By default, all Intermediate Responses are played before the Final Response, regardless of whether the Final Response is ready.
Conditional Play: If the Conditional Play toggle is enabled on an Intermediate Response, this response may be skipped if the Final Response is ready, thus speeding up the interaction while potentially bypassing non-critical messages.
Default Play: Intermediate Responses are configured to always play by default to avoid inadvertently skipping important information, such as compliance-related notices.
Conditional Playback: The Conditional Play toggle provides flexibility, allowing these responses to be skipped when the Final Response is ready. This should be used judiciously to balance efficiency with the need to communicate essential information.
Expectation Management: These responses play a critical role in managing caller expectations during wait times and are integral to maintaining a positive caller experience.
For bot builders:
Ensure that any use of the Conditional Play toggle is in line with the importance of the messages being potentially skipped. For example, critical notices required for legal compliance should not have the toggle enabled to guarantee their delivery.
For call recipients:
Expect that all necessary information will be conveyed without gaps, regardless of the processing speed of subsequent service blocks, unless the Conditional Play has been specifically activated.
Integrating with External Services for Data Exchange and Enhanced Functionality
A Service Block enables you to integrate external services or APIs into your dialog flow. This integration lets you retrieve data, perform actions, or interact with external systems during a conversation, enriching the user experience and expanding Parloa's ability to handle complex inputs.
The following describes the two tabs displayed above:
Each Service block must contain at least one branch to dictate the flow based on the response from the external service. In the provided example, there are two branches:
success: This branch is followed if the external service returns a successful response.
else: This branch is followed if the external service does not return a successful response.
The Variables tab is crucial for defining and managing the data exchanged with the external service or API. It consists of two parts:
Service Input: Maps local variables to the input expected by the service.
Service Output: Maps the service's response to local variables.
In the provided example:
The utterance variable captures the input for spelling processing.
The language variable is preset to 'en', designating English for the spelling service.
The letters and numbers variables delineate the data types – alphabetic or numeric – the spelling service should expect.
The Service Output section shows how the results (stored in the result variable) are mapped back to the utterance variable within the dialog flow, enabling the use of processed data in subsequent interactions.
Directing Dialog Flow Based on User Inputs
The Condition block enables you to craft dynamic, context-aware, and adaptive dialog flows by incorporating specific conditions or rules. The conversation's direction is then dictated by evaluating conditions based on user input, data from external systems, or other criteria.
By incorporating the Condition block into your dialog, you can personalize the conversation for individual users or specific scenarios, resulting in a more engaging and tailored experience. Furthermore, the Condition block equips you with the ability to examine its contents and trigger other actions in the call flow based on the evaluated conditions.
The Condition block is a pivotal block in crafting dialog logic, often leading to branching conversations. In the example below, two conditions, namely "coffee" and "soft drinks," are employed, but you can add more conditions as needed.
Depending on the user's response to the question, "Do you need coffee or soft drinks?" different actions are initiated. Parloa evaluates which defined options are true before advancing to the next node in the dialog, similar to how text blocks use conditions. In this scenario, the dialog proceeds to various Service blocks.
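A minimal JavaScript sketch of this branching logic, assuming a hypothetical drinkChoice slot holds the caller's answer (the slot and block names are illustrative, not Parloa's actual identifiers):

```javascript
// Routes the dialog based on the evaluated condition, mirroring the
// "coffee" / "soft drinks" branches described above.
function routeDrinkRequest(slots) {
  if (slots.drinkChoice === "coffee") return "coffeeServiceBlock";
  if (slots.drinkChoice === "soft drinks") return "softDrinkServiceBlock";
  return "else"; // no defined condition evaluated to true
}

console.log(routeDrinkRequest({ drinkChoice: "coffee" })); // "coffeeServiceBlock"
console.log(routeDrinkRequest({ drinkChoice: "tea" }));    // "else"
```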
Seamless Call Transition to Human Agents with Forwarding and SIP REFER
The Call Control block enables you to manage the direction and flow of your calls with precision. It allows you to steer the call to different destinations, whether it's transferring to another phone number or a SIP trunk. By utilizing the Call Control block, you can ensure calls are routed efficiently and handled according to your conversation design.
The Call Control block is designed exclusively for use on the Phone2 platform. Access from other platforms is not supported.
Upon clicking the Call Control block, you'll be presented with several configuration options:
Action Sequence: Create a sequence of actions, such as dialing numbers or initiating a SIP REFER, which the system will execute in the specified order.
When setting up your initial Action Sequence, choose from the following action types:
SIP Refer: Select this for transferring calls via SIP URIs or URIs from Storage or Environment variables, after which the connection is released.
To maintain clarity and functionality, each Call Control block must contain only one type of action sequence.
Refining Your Conversation Flow
The Subdialog block enables you to create and manage reusable sections of a conversation. Subdialogs are like building blocks that you can insert anywhere in your chat flow, perfect for repeated tasks such as updating an address.
Here's an example of a Subdialog block named Change Address:
Establish the subdialog block, identifying the specific task it will handle—such as "Change Address"—and set up the entrance and exit points accordingly.
Position your subdialog within your chatbot's conversational flow where an address change might be necessary, ensuring it's accessible from relevant points in the dialogue.
Configure successful (✅) and unsuccessful (❌) exits from the subdialog to dictate subsequent actions based on user interaction outcomes.
In the Subdialogs tab, you'll find a list of all your created Subdialogs. This list shows which Subdialogs are currently in use ("Connected") and which are not yet part of the dialog flow ("Not connected"), under the Status tab. Here you can add a new Subdialog to your chatbot, as follows:
Click the + Add Subdialog button.
In the pop-up window, enter a name in the New Subdialog Name field:
Click Create.
Continue to the next page to learn how to repurpose your subdialogs across various projects.
Adding, editing and deleting utterances
Utterances in Parloa fall into two main categories:
Generic Utterances: These are common expressions applicable across a variety of scenarios, which provide the chatbot with foundational comprehension of widespread caller requests.
Specific Utterances: These are detailed phrases specifically aligned with particular intents. They allow the bot to interpret and fulfill highly targeted caller queries with greater precision.
Go to Dialog → Speech Assets → Intents. This displays a list of intents and their linked utterances.
Click on an intent. A pane on the right displays:
Click the Utterances tab, and add at least one utterance, to enable the AI utterance generation (in the Description tab).
Scroll down and click the Generate More Utterances button:
In the bulk editor, input a description and click Generate Utterances to directly insert new utterances into the list.
Speech assets are essential collections of phrases or sentences anticipated during interactions with Parloa. These assets are categorized into two key components: intents and slots, which together facilitate a seamless dialog between you and the bot.
Given the dynamic nature of language and expressions, it is imperative to regularly update your speech assets. This maintenance process entails continuous evaluation and adjustments, drawing from actual interactions and evolving language patterns. Keeping your speech assets refined ensures Parloa's sustained proficiency in understanding and fulfilling your needs.
Keep Track of Information with Storage
The Storage block is designed to store and manage data dynamically throughout a conversation. This block allows you to set variables that can be utilized across different stages of the interaction, enabling a personalized and context-aware experience.
The following describes the Set and To fields:
The Set field represents the variable name to which you want to assign a value. The variable name should be descriptive of the data it holds for easy reference.
The To field enables you to define the value assigned to the variable. The value can be of different types such as Text, Number, or a JavaScript expression.
Storing a Caller's Input – If you want to save the reason for a caller's inquiry, you might set a variable named InquiryReason to the input collected from the caller.
Dynamic Data Capture – During an interaction where a technical error is identified, you can set a variable named EscalationReason to "A technical error occurred." This information can be used later to provide a summary to an agent or to guide the conversation flow.
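The two Set/To examples above can be sketched as plain JavaScript assignments; the storage object here is an illustrative stand-in for Parloa's Storage, and the inquiry value is a made-up example:

```javascript
// Stand-in for Parloa's Storage: each Set/To pair becomes a named property.
const storage = {};

// Storing a caller's input (value is an illustrative example)
storage.InquiryReason = "billing question";

// Dynamic data capture when a technical error is identified
storage.EscalationReason = "A technical error occurred.";

// Later blocks can reference these variables, e.g. for an agent summary.
console.log(storage.EscalationReason); // "A technical error occurred."
```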
Add Comments to Your Dialog
The Note block serves as a valuable tool for enhancing your conversation design by incorporating annotations, comments, or contextual information. By integrating Note blocks into your dialog flow, you can provide essential context, instructions, or explanations to your colleagues working on the project.
Note blocks play a crucial role in ensuring that your colleagues comprehend your design choices, intentions, or any pertinent details they should be aware of during the project's development.
Timer Initiation: Upon detecting the end of speech, the CL block starts a timer, remaining alert for any further caller input.
Link a User Intent, such as tellSlot, to the Start Conversation or a specific State block.
Add a new slot for capturing user input by going to -> -> Add New Slot.
When you click on the Service block, a pane opens on the right, displaying two tabs: and .
Parloa services typically do not distinguish between different types of service errors. An exception is the Parloa , which is documented separately.
To learn how to add a Service, refer to our documentation.
For more detailed information on crafting conditions within the Condition block, you can refer to our page.
Moreover, the Call Control block is a versatile tool, acting as an alternative to the block. You can use it to seamlessly transition callers to a live agent, providing a more personalized experience compared to ending the call directly.
Speech Before Action: Improve caller experience by playing a message (using ) before the call is redirected, informing callers about the upcoming transfer.
Forward Call: Use this option to dial phone numbers directly or use numbers from Storage or Environment variables, keeping the original call connection active.
In a SIP Refer Action Sequence, you can specify multiple SIP Invite URIs or reference multiple values from Storage and Environment variables. This flexibility allows Parloa to direct a call to various destinations or dynamically select a forwarding target based on information stored in Storage or Environment variables. Adding additional data to the SIP Header offers enhanced control over the call-forwarding process.
Within a Forward Call Action Sequence, you have the option to list multiple phone numbers or reference values from Storage or Environment variables. This feature supports dynamic call forwarding, ensuring a seamless caller experience by adapting to stored information or predefined variables.
Utilize the "Change Address" subdialog across different scenarios within your dialog flow where address updating is needed, and continuously refine it for better performance.
Utterances are the varied phrases or words of user input, reflecting intentions or commands. Within Parloa, these utterances are mapped to corresponding intents to trigger the appropriate responses from the bot. The depth of the bot's understanding is directly tied to the diversity and richness of its utterance database – the more utterances the system recognizes, the better it understands callers.
The Intents tab, shown above, lists all potential user goals, labeled as Intents. Each intent corresponds to a distinct action or request, like signaling a problem or seeking support.
The Slots tab is where you can specify the information required from the user's input, such as personal identifiers, temporal data, or geographic locations.
The Dictionary tab houses a vocabulary pertinent to your bot's area of service, enhancing the recognition of user intents.
For further details on managing and optimizing your speech assets, you can explore the Intents, Slots, and Dictionary tabs.
Understanding User Intentions
Within the Speech Assets, the Intents window is your central hub for managing user intentions.
The Intents tab lists all potential user goals, labeled as Intents. Each intent corresponds to a distinct action or request, like signaling a problem or seeking support.
Here are common actions you can perform with your intents:
Parloa processes potential intents through a hierarchical matching system:
The initial attempt is to match against locally defined intents.
If unsuccessful, Parloa searches the Global State for a possible match.
Should that fail, it refers to the local 'else' pathway.
Absent a local 'else', it employs the Global State 'else' pathway.
For advanced scenarios, use intent.rankingAll to evaluate recognized intents against custom thresholds.
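As a hedged sketch of the advanced scenario above, evaluating intent.rankingAll against a custom threshold might look like the following; the array-of-objects shape and the intent names are assumptions for illustration only:

```javascript
// Assumed shape of intent.rankingAll: recognized intents with confidences.
const intent = {
  rankingAll: [
    { name: "orderCoffee", confidence: 0.62 },
    { name: "cancelOrder", confidence: 0.41 },
  ],
};

// Returns the highest-ranked intent meeting a custom threshold, or null
// so the dialog can fall back to an 'else' pathway.
function bestIntentAbove(ranking, threshold) {
  const matches = ranking.filter((i) => i.confidence >= threshold);
  return matches.length > 0 ? matches[0].name : null;
}

console.log(bestIntentAbove(intent.rankingAll, 0.5)); // "orderCoffee"
console.log(bestIntentAbove(intent.rankingAll, 0.9)); // null - fall back
```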
When managing intents for multiple platforms, you can customize them to fit each platform's specific requirements without needing to create new intents from scratch.
Intent name.
Current selected platform tab – customize settings for each platform here.
Group assignment – Add the intent to a group by clicking the Not Grouped button.
Platform availability toggle – If off, the intent is excluded from the respective platform's speech model.
Confidence threshold adjustment – Default is set to 30%.
Note: This threshold does not apply to description-based intent classification, since that approach relies on natural language descriptions rather than confidence scores.
Fallback Intent trigger – This feature is crucial for managing situations where the confidence threshold for an intent recognition is not met.
Toggle Off: If the toggle button is turned off for an intent, it will be excluded from the respective platform's speech model, rendering it inactive and unable to trigger under any conditions.
No Fallback Selected: In cases where no Fallback Intent is actively toggled on any intent, Parloa defaults to using its built-in PARLOA.Fallback Intent. This ensures that there's always a safety net for unrecognized utterances, maintaining the flow of conversation without abrupt interruptions.
Toggle On: Activating the toggle on a specific intent designates it as the new Fallback Intent. It's important to note that only one Fallback Intent can be active at any given time across all intents. However, even when set as the Fallback Intent, it can still be triggered by its specific utterances like any regular intent and will follow its defined routing paths, whether it's triggered as a fallback or through its direct utterances.
You can further tailor your intents by:
Specifying the platforms each intent is available on for targeted interactions.
Modifying confidence thresholds to gain more precise control over intent recognition.
Redefining the Fallback Intent for improved error handling. It's important to note that each project is equipped with a dedicated Fallback Intent to streamline your setup. When activating the Fallback option for multiple intents, be aware that selecting a new Fallback Intent will automatically deselect the previous one, ensuring a singular Fallback Intent is in effect at any given time.
Intent groups enable you to categorize related intents. They offer a structured approach to handling your conversational model, simplifying modifications, and enhancing system maintenance.
Select the + Add Group button.
The following displays:
Enter the name of the new group.
Click on Add Intent to Group to display a list of available intents.
Choose the intents you want to include and select the Create button.
Once your first group is created, any intents not assigned to a group will be listed under Not Grouped automatically.
You can add an intent to a group using one of the following methods:
Hover over the group to which you want to add the intent.
Click the + Add Intent button.
The following displays:
Type in and add your intent to the group.
Confirm by clicking Done.
Click and hold the intent you wish to group, drag it to the desired group, and release to drop it there.
Click on the intent you want to group.
In the dropdown menu, choose the group for your intent.
Hover over an intent group and click the Edit Name button:
Enter the new name:
Click Edit to confirm the change.
Click the Delete button to confirm.
Overview and example of a Machine Learning Slot
Let's consider an example where a user wants to add an undefined item—strawberries—to their shopping list. The Machine Learning Slot can recognize this and process it effectively.
Hi, please can you add strawberries to my shopping list
In this example, the Machine Learning Slot would process the word "strawberries" as a product entity. It's particularly useful for capturing items that are not defined in a predetermined list.
Here is an example of the kind of data a Machine Learning Slot can produce:
This response indicates that:
The entity recognized is "product".
The word "strawberries" starts at the 40th and ends at the 50th character in the user's sentence.
The confidence level for recognizing this entity is approximately 99.7%.
The value captured is "strawberries".
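Putting the fields above together, the slot's output might resemble the following JavaScript object; the exact field names are assumptions based on the description, and the values are taken from the example:

```javascript
// Illustrative shape only - the exact field names returned by a
// Machine Learning Slot are assumptions based on the description above.
const mlSlotResult = {
  entity: "product",     // the recognized entity type
  start: 40,             // character offsets in the user's sentence
  end: 50,
  confidence: 0.997,     // roughly 99.7%
  value: "strawberries", // the captured value
};

console.log(mlSlotResult.value); // "strawberries"
```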
Parloa plans to launch a new service for validating if recognized items, like "strawberries," are among your offerings, adding an extra layer of functionality.
Despite the potential of ML Slots, the current best practice in Parloa is using Regular Expressions (RegEx) for accuracy and functionality. This approach ensures immediate, reliable results while we prepare for future enhancements.
Simplifying User Input Capture
We have two Slot types: custom slots and prebuilt slots. Prebuilt slots offer a convenient shortcut, matching defined data types such as dates, numbers, or names, straight out of the box. These slots are pre-configured for immediate use, saving time and effort in setup.
Once created, you can add a new slot that can be used in any intent.
Parloa provides a selection of prebuilt slots tailored to specific platforms, enhancing time efficiency and ensuring smooth compatibility across different interfaces.
Click on a slot’s Platforms tab.
Select the most suitable prebuilt slot type from the dropdown menu.
Parloa actively informs you about any deprecated slot types and the discontinuation of features, such as the "Roles" feature in Phone 2, guiding you towards viable alternatives to maintain your bot's functionality.
Intents represent the goals or purposes behind a user's interaction with Parloa. They are composed of utterances, which are specific expressions or phrases users may use when engaging with the bot. An intent could be as straightforward as a greeting or as complex as a request for detailed information.
Click the button.
Enter the name of your intent and click or press Enter.
Click on icon in your list of intents.
Click on the icon.
Click the button, shown below:
Hover over an intent group and click the button. A confirmation window displays:
Machine Learning Slots are an advanced feature available exclusively for Phone 2 in the Parloa platform. Unlike list slots that rely on pre-defined values and synonyms, Machine Learning Slots are designed to handle and make sense of poorly formatted or undefined data. They offer greater flexibility in capturing and interpreting a wider array of user inputs.
Slots simplify the capture of user inputs, serving as the bridge between user utterances and actionable data. By accurately capturing user inputs, slots enable your chatbot to deliver personalized and contextually relevant responses. A slot can be seamlessly integrated into any part of the conversation, regardless of the platform.
In Speech Assets, click on the Slots tab.
Hover over the slot, and click on the icon.
A list slot connects Normalized Values to Synonyms, linking words to meaning. The following is an example of someone ordering coffee:
I would like to order a flat white please.
In the example, the Normalized Value would be Coffee, and one of the Synonyms is flat white.
To use this Slot in Parloa, you must first create a list slot in your Speech Assets. After that, you can add Normalized Values and Synonyms.
The following format is expected for bulk import/edit of custom slot types:
It is a comma-separated format in which the first entry of each line represents a specific slot value. The following values are synonyms for the first:
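As a sketch, the bulk format can be parsed line by line, taking the first entry as the slot value and the rest as synonyms; the sample data reuses the Coffee/flat white example, with extra synonyms as illustrative assumptions:

```javascript
// Parses the comma-separated bulk format described above: the first entry
// of each line is a slot value, the following entries are its synonyms.
function parseSlotValues(text) {
  return text
    .trim()
    .split("\n")
    .map((line) => {
      const [value, ...synonyms] = line.split(",").map((s) => s.trim());
      return { value, synonyms };
    });
}

const parsed = parseSlotValues("Coffee,flat white,latte\nTea,earl grey");
console.log(parsed[0].value);    // "Coffee"
console.log(parsed[0].synonyms); // ["flat white", "latte"]
```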
An Overview of Large Language Models (LLM) Intent Classification and Its Implementation in Parloa
Parloa’s LLM Intent Classification enhances your bot’s ability to understand user intents. This feature works alongside the traditional Natural Language Understanding (NLU) approach, which depends on a trained speech model. Unlike traditional methods, the LLM method uses natural language descriptions. This simplifies the process of creating intent systems and eliminates the need for separate intent utterances.
To use LLM Intent Classification, you must activate it, as it is not enabled by default. Follow these steps:
Using LLM Intent Classification involves additional costs. For pricing details or contract modifications, consult your Customer Success Manager.
For new bots: Start with LLM classification to avoid creating multiple example utterances.
For existing bots: Use LLM classification to reduce dependency on speech models for intent recognition.
Write in simple, clear language.
Limit descriptions to 500 characters.
Aim for a length of 70 to 125 words to ensure accuracy and clarity.
To delete values within a slot type, click on the icon next to the synonyms field. To remove synonyms, click on the icon in the synonym token.
It is also possible to import/edit or remove slot types via the bulk edit feature. After clicking the button, a modal containing your existing slot type values will open.
Email our support team at , or
After activation, you can add this feature to Start Conversation and State blocks in your bot's workflow.
Click the block you want to update (Start Conversation or State).
Go to the Intents tab and open the Detect Intent by dropdown menu.
To enable LLM, select Description from the dropdown menu.
An icon will appear on the block, confirming your selection.
Click the edit icon next to the intent. The following displays, enabling you to enter your description:
Add intent descriptions directly:
Intent descriptions are mandatory. If missing, an error icon will appear, indicating the intent is non-functional:
Utterances –
Description –
Yes, you can selectively apply LLM Intent Classification to specific blocks.
Please reach out to us at .
Description and how to use the Regex slots
For instance, if a customer calls and says, "Hi, my customer number is KV U 3 3452363, is this correct?", a Regex Slot could be used to validate the format of the provided customer number.
Add to Speech Assets: First, you'll need to add a Regex Slot to your Speech Assets in Parloa.
Define the Regex Pattern: In the Regex Slot, specify the pattern that you want to validate.
For instance, a simplified Regex pattern to match the customer number could look something like this:
Hi, my customer number is KV U 3 3452363, is this correct?
Here is an example of a simple expression:
Let's break down the pattern:
The following table provides some essential regex syntax that can be useful for you:
. – Matches any single character except a newline.
\d – Matches any digit (0-9).
\D – Matches any non-digit character.
\w – Matches any word character (alphanumeric or underscore).
\W – Matches any non-word character.
\s – Matches any whitespace character.
\S – Matches any non-whitespace character.
[abc] – Matches any character inside the brackets.
[^abc] – Matches any character not in the specified set (not a, b, or c).
* – Matches zero or more occurrences of the preceding element.
+ – Matches one or more occurrences of the preceding element.
? – Matches zero or one occurrence of the preceding element.
{n} – Matches exactly n occurrences of the preceding element.
{n,} – Matches n or more occurrences of the preceding element.
{n,m} – Matches between n and m occurrences of the preceding element.
^ – Matches the start of a string.
$ – Matches the end of a string.
() – Groups multiple elements together and captures the matched text.
| – Matches either the expression before or after it (alternation).
\ – Escapes a special character to be treated as a literal character.
Regex Slots offer a robust way to validate specific types of user input in Parloa's platform. Through the use of regular expressions, you can ensure that the data provided by the user meets certain criteria or follows a specific format.
This slot enables the validation of Parloa slot function values using regular expressions. For example, a client could call and use their customer number as follows:
Once added to your Speech Assets, you must define the RegEx pattern that the slot should identify.
A – Matches the literal character 'A'.
[ -]? – Specifies an optional space or hyphen character.
\d{7} – Matches exactly seven digits.
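Combining the three parts above gives the pattern A[ -]?\d{7}, which can be tested directly; the anchors are added here for a full-string check, and the sample numbers are illustrative:

```javascript
// Literal 'A', optional space or hyphen, then exactly seven digits.
const pattern = /^A[ -]?\d{7}$/; // anchors added for a full-string check

console.log(pattern.test("A 1234567")); // true  - space separator
console.log(pattern.test("A-1234567")); // true  - hyphen separator
console.log(pattern.test("A1234567"));  // true  - separator is optional
console.log(pattern.test("A123456"));   // false - only six digits
```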
If you've already configured a DTMF (Dual Tone Multi-Frequency) intent in Parloa, the system automatically generates a corresponding DTMF slot. This feature enables callers to input specific information, such as customer numbers, using their phone's keypad.
Imagine you need to limit the customer number to a certain range of digits, specifically between 6 and 9. To achieve this, configure the DTMF slot with the following rule:
This rule ensures that only customer numbers falling within the 6 to 9 digit range are accepted.
In scenarios requiring customer numbers to be precisely 8 digits long, use the rule below:
If you haven't done so already, establish a DTMF intent within the Parloa Speech Assets.
After the DTMF slot is created, integrate it into your dialog flow. This allows you to capture and validate caller input based on the predefined rules.
The addition of a DTMF slot introduces a layer of flexibility to your chatbot, facilitating interactions in a manner that's both comfortable and convenient for callers.
LLM capabilities must be enabled for your tenant. Contact your Customer Success Manager for activation.
Use pre-built slots for structured data such as dates, email addresses, or monetary amounts. These slots are optimized for standard data types.
Use LLM slots to extract unstructured or dynamic information that goes beyond the capabilities of pre-built slots.
Pre-built slot use case: "What is your email address?" → Use the pre-built email slot.
LLM slot use case: "What items do you want to buy?" → Use an LLM slot with a description such as "Extract a list of items mentioned in the user's input".
If the LLM slot is enabled in the block, the block will no longer extract other slot types. Ensure this configuration aligns with your requirements.
Parloa's Dictionary feature acts as a critical resource for handling variations in user language, especially regarding product names or specialized terminology not naturally recognized by standard speech-to-text (STT) or natural language understanding (NLU) systems.
Consider a scenario where callers might describe their contact information in various ways, such as:
User: "My telephone number is ..."
User: "My cell phone is ..."
User: "My mobile is ..."
User: "My phone is ..."
Instead of overloading your model with each variant as separate utterances, you can create a dictionary entry to list various phone variants:
To add a new dictionary entry:
Navigate to the Dictionary tab.
Enter the name for your entry and add the corresponding values:
After establishing a dictionary entry, integrate it into your utterances using the following notation:
Be aware that expanding your dictionary increases the number of utterances and the overall size of your speech model. This expansion might challenge platform limits and could inadvertently impact the model's stability, potentially leading to incorrect intent activations due to ASR misinterpretations.
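As a rough sketch of how a single dictionary entry stands in for many utterance variants, consider the following; the entry name phone and the {phone} placeholder notation are assumptions for illustration, not Parloa's actual syntax:

```javascript
// Hypothetical dictionary entry listing the phone variants above.
const dictionary = {
  phone: ["telephone number", "cell phone", "mobile", "phone"],
};

// Expands an utterance template into one utterance per dictionary value.
// The {phone} placeholder is an assumed notation for this sketch.
function expand(template, dict) {
  return dict.phone.map((v) => template.replace("{phone}", v));
}

const variants = expand("My {phone} is ...", dictionary);
console.log(variants.length); // 4 - one utterance per dictionary value
console.log(variants[2]);     // "My mobile is ..."
```

This also illustrates the caution above: each dictionary value multiplies the utterance count, so large entries grow the speech model quickly.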
For bulk modifications:
Use the following format for bulk importing or editing:
Example of Bulk Edit Format:
An LLM slot uses a large language model (LLM) to scan user input in a specified Start Conversation or State block and extract information based on a custom description. This functionality provides greater flexibility than predefined or machine learning-based slots, making it ideal for extracting complex or dynamic data.
Click the button.
Open a Start Conversation or State block.
When the global toggle is enabled, you can control individual slots using their corresponding toggles.
Click on .
To remove a dictionary entry, hover over the entry and click on the icon.
Click the button to display a modal with your existing entries.
Understanding Parloa's Duckling-based Pre-Built Slots
Pre-built slots enable you to process common user inputs consistently and efficiently. These slots, based on Duckling, simplify the extraction of key information from sentences. Let's explore how these pre-built slots work and how you can leverage them effectively.
Imagine a user wants to repay a certain amount of money they owe. For instance:
Hi, I owe $60, can I pay this amount back please?
In this case, the extracted entity is the amount $60. This is where the amountOfMoney pre-built slot comes into play. It specializes in handling various currency types.
If you're using platforms such as Phone V2 or Textchat, it's important to define the pre-built slot types within each slot's respective Platforms tab. This ensures consistent behavior and understanding across different interfaces.
The following table lists all the pre-built slot types:

| Slot type | Description |
| --- | --- |
| amountOfMoney | Currency amounts, such as 10 EUR, 2€, 1.34 Euro, or 100 $. |
| distance | Distances, such as 10 km or 1 cm. |
| duration | Time durations, such as 1 second or 3 hours. |
| email | Email addresses, such as hello@world.com. |
| number | Numbers, such as 123 (no spaces between digits). |
| ordinal | Ordinal numbers, such as 1st through 45th. |
| phoneNumber | Phone numbers, including international formats such as +49302873737. |
| dateTime | Moments in time, whether yesterday, today, or a specific time. |
| volume | Volume measurements, such as 3 liters or 20 ml. |
| url | Web page addresses, such as www.parloa.com. |
| distanceInterval | An interval between two distances: 2 to 3 km, from three to five kilometers. |
| dateTimeInterval | A period of time: last week, from yesterday to today, 1 p.m. to 2 p.m. |
| volumeInterval | A volume range: 3-5 liters, 20-50 ml. |
When you select a pre-built slot, the first tab, Info, displays all relevant information, including the Available Data in JavaScript. The following example shows the data available for the amountOfMoney slot:
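Since the referenced screenshot of the Info tab is not reproduced here, the following is a minimal sketch of working with such slot data in a JavaScript field. The exact field contents shown are assumptions for illustration, not the platform's guaranteed shape:

```javascript
// Hypothetical values for an amountOfMoney slot; the normalized and
// original values below are assumptions for illustration only.
const slots = {
  amountOfMoney: "60 $",         // normalized value (assumed)
  amountOfMoney_original: "$60", // raw text as understood from the user
};

// In a JavaScript field you could derive the numeric amount:
const amount = parseFloat(String(slots.amountOfMoney).replace(/[^0-9.]/g, ""));
// amount === 60
```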
The following table lists the pre-built slots, categorized by language and slot type.
| Language | amountOfMoney | creditCardNumber | distance | duration | email | numeral | ordinal | phoneNumber | quantity | temperature | time | timeGrain | url | volume |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Afrikaans | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Arabic | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bengali | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Bulgarian | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Burmese | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Catalan | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chinese | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Croatian | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Czech | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Danish | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Dutch | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| English | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Estonian | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Finnish | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |
| French | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Georgian | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| German | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Greek | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Hebrew | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Hindi | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ |
| Hungarian | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Icelandic | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Indonesian | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Irish | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Italian | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Japanese | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ |
| Kannada | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Khmer | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ |
| Korean | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Lao | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Malayalam | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Mongolian | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
| Nepali | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Norwegian-Bokmål | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Persian | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Polish | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Portuguese | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Romanian | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Russian | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Slovak | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Spanish | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Swahili | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Swedish | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Tamil | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Telugu | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Thai | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Turkish | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Ukrainian | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Vietnamese | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
How to create and manage text snippets
Text snippets are versatile blocks of reusable text that can be seamlessly integrated into different parts of your conversation. They serve as valuable components of the chatbot's responses (prompts) to user inputs and can be referenced in various sections of your conversation flow. This enables you to efficiently reuse the same text in multiple ways throughout your conversation, promoting consistency and ease of management.
The following is an example of a text snippet window:
Click on each tab for a description of the elements displayed above:
The name of the text snippet.
Note: Text snippet names cannot start with a number and cannot be changed once created; to rename a snippet, delete it and create a new one with the desired name.
This indicates the default text entry or entries assigned to a text snippet, selected without any conditions.
Here, you can add multiple text pool entries for use in your conversations. The app automatically selects an entry according to the selection and rotation rules defined in the Settings tab for this snippet.
Optionally, you can utilize text snippet variables within text snippets.
In the depicted example, the entry includes the company snippet. The company snippet is defined on the main text snippet screen.
The outcome of the text snippet results in the chatbot telling the caller: "Hello and welcome to Parloa!"
If the text snippet company has more than one option in the Text Pool, the chatbot selects one based on the rule defined in Settings.
Note: There is no limit to the number of text snippet variables you can employ within another text snippet.
When multiple text snippet entries are defined, the bot will utilize a different one each time the snippet is used in the conversation flow. The sequence in which the entries are presented can be specified in the Settings tab.
The following table describes each of the strategies you can select:
Cyclical
Iterates through the list one entry at a time until the last entry is reached, then starts over with the first entry.
If no strategy is specified, the default strategy will be used.
Ordered
Iterates through the list of possible entries, selecting one at a time until the last item on the list is reached. After that, the last entry continues to be returned.
Random
Chooses one item at random from a list of possible options.
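As a rough sketch, the three strategies behave like the selector below. This is an illustration of the rules in the table, not Parloa's actual implementation:

```javascript
// Illustrative selector for the three text-pool strategies.
function makeSelector(strategy, entries) {
  let i = 0;
  return function next() {
    switch (strategy) {
      case "cyclical":
        return entries[i++ % entries.length]; // wraps around to the start
      case "ordered":
        return entries[Math.min(i++, entries.length - 1)]; // sticks on the last entry
      case "random":
        return entries[Math.floor(Math.random() * entries.length)];
    }
  };
}

const next = makeSelector("cyclical", ["Hello!", "Hi there!"]);
next(); // "Hello!"
next(); // "Hi there!"
next(); // "Hello!" again
```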
You can customize your experience in Parloa by using conditional text snippets that take into account specific factors such as the time of day.
The following is an example of morning and evening conditions that determine whether the chatbot will say the default entry, morning entry, or evening entry:
Conditions are evaluated from the top of the list, and the first true condition encountered is applied. If none of the specified conditions are met, the default value is displayed. It's important to note that the order of the conditional entries can easily be adjusted to suit your needs through a click-and-drag action. JavaScript is utilized to create these logical conditions. In the example below, Moment.js and JavaScript are used to check the current time and provide relevant greetings.
Parloa also seamlessly integrates with libraries like Moment and Lodash. When using Moment to check the current time, be sure to set the expected timezone with moment().tz("Europe/Berlin") to ensure accurate results.
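As an illustration of such conditional entries, the evaluation logic looks roughly like the sketch below. The snippet texts and hour ranges are assumptions; inside Parloa you would read the hour with Moment, e.g. moment().tz("Europe/Berlin").hour():

```javascript
// Conditions are evaluated top-down; the first true condition wins,
// otherwise the default entry is used. Texts and ranges are illustrative.
function pickGreeting(hour) {
  const conditions = [
    { test: (h) => h >= 5 && h < 12, text: "Good morning and welcome to Parloa!" },
    { test: (h) => h >= 18 && h < 23, text: "Good evening and welcome to Parloa!" },
  ];
  for (const c of conditions) {
    if (c.test(hour)) return c.text; // first matching condition
  }
  return "Hello and welcome to Parloa!"; // default entry
}
```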
The following describes how to perform common actions with your text snippets:
Optimizing Data Handling with Storage Variables
Storage variables are integral to the functionality of conversational AI, serving as repositories for storing and retrieving data throughout interactions. These variables can encapsulate a wide array of information, ranging from slot values obtained from user input to complex data derived through system processing or user utterances. Utilizing JavaScript, storage variables can manage simple text, platform information, or any other form of derived values.
Due to the transient nature of slot or intent variables, which can be readily overwritten during a conversation, it's critical to secure this information by saving it into uniquely named storage variables. For example, if a bot asks a question and receives a 'Yes' response within the first State block, intent.name at that moment is 'Yes'. Yet a subsequent question may prompt a 'No' response, overwriting intent.name to 'No'. Without saving these responses in storage, the initial affirmative response would be irretrievably lost.
Best Practice
The most effective method to preserve the integrity of user responses is to save them into clearly named storage variables, such as responseToQuestion1 or responseToQuestion2. By assigning distinctive names, these variables can be referenced independently, eliminating any ambiguity in subsequent processing.
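The overwrite problem can be sketched in plain JavaScript. The variable names below are the illustrative ones from above, and the Storage-block mechanics are simplified:

```javascript
// intent.name only ever holds the LAST received intent, so each answer
// is copied into its own storage variable before it can be overwritten.
const storage = {};

function saveResponse(intentName, targetVariable) {
  // In Parloa, this assignment would live in a Storage block.
  storage[targetVariable] = intentName;
}

saveResponse("Yes", "responseToQuestion1"); // answer in the first State block
saveResponse("No", "responseToQuestion2"); // answer in a later State block
// Both answers now remain retrievable independently.
```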
Examples of Storage Variable Use:
User Information: Retaining the caller's telephone number.
Direct User Input: Storing the exact text of a user's utterance, like 'I want to speak to an agent', or a recognized slot value (e.g., {{slots.HouseNumber=26}}).
Processed User Input: Transforming a date like 'May 1st' into a standardized format within a storage variable, such as 'May-01'.
Platform Settings: Differentiating between platform variables like 'ChatV2' versus 'Phone2'.
These variables allow the bot to complete tasks more efficiently and guide users through tailored call flows.
To modify the longevity of a storage variable, navigate to the Variables tab and select the appropriate variable. You will then have the option to set the variable's lifetime to suit the needs of your interaction.
If you select the Sessions option, the value of the variable will be maintained throughout the current conversation but will be reset when a new conversation starts.
If you select the Custom option, the duration for which the value is stored can be indefinite or set for a duration of X number of hours, days, or weeks.
Intent variables provide access to various details of the last received intent. Parloa currently supports the following intent variables:
Here is an example of using intent variables to inform the user of the last received intent when an error occurs:
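The referenced example is not reproduced here; a hypothetical error prompt built from these intent variables might look like the following (the wording and values are assumptions):

```javascript
// Hypothetical values; at runtime Parloa fills the intent object.
const intent = { name: "Order", text: "I'd like to make a purchase" };

// In a prompt field you would write the {{...}} expressions directly;
// here the same interpolation is shown as a template literal.
const prompt =
  `Sorry, something went wrong while handling "${intent.text}" ` +
  `(recognized intent: ${intent.name}). Please try again.`;
```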
Utilizing Variables to Tailor Conversations Across Diverse Environments
Variables serve as labeled containers that hold data vital to the flow and logic of bot interactions. They dynamically capture and utilize information throughout a conversation, enabling personalized and contextually relevant dialogues.
Data Sources: Variables can store a wide range of data, including user inputs, platform-specific information, or system-generated values, facilitating a rich and personalized interaction.
Types of Variables:
Storage Variables: Capture user input-related data, such as a customer's name (customerName).
Platform Variables: Hold platform-related data, like the version of the platform in use (PhoneV2).
System Variables: Contain system-retrieved data, such as the caller's number (callerNumber).
User Data Retention: Variables marked as 'User' may retain data indefinitely. It's vital to consider data retention policies and privacy regulations.
Personal Information (PI): Avoid storing personal information in variables to comply with data protection laws and safeguard user privacy.
The Definitions tab is where users can define and manage storage variables. Storage variables are used to retain and reference user input or system-generated data throughout the lifespan of a dialog session.
The Name field serves as a descriptor of the variable's content. It's important to choose names that are intuitive, in order to quickly understand what data is being stored.
The Lifetime field specifies the duration that the data will be saved within the system. The available options are:
Session – Data stored with the 'Session' lifetime remains active only until the current interaction concludes. Once the session ends, the data is deleted.
Custom – Here, you define the duration for which the variable data is stored. The default value is indefinite, which means the value is retained permanently. If you do not wish to save the value permanently:
Click on the Custom dropdown menu.
Select either in hours, in days, or in weeks.
For example, if you select in days, entering the value 30 will command the system to delete the data after 30 days:
Reflects how the variable is identified when interfaced through an API call. This may be the same as the variable name but can also differ if an API-specific naming convention is required.
Consistent Naming – Ensure variable names are intuitive and reflect their purpose.
API Naming – If the variable will be used in API interactions, ensure the 'Name in API' adheres to any external system's naming requirements.
Caution – Variables with 'session' lifetime are volatile and will be cleared after the session ends. Ensure that any data needed beyond the session is appropriately transferred or stored elsewhere.
The Intent Assignments tab enables the definition and assignment of storage variables based on detected user intent, which can inform routing decisions and categorization of user requests.
Triggering Intent – Lists the user's need as understood by the bot.
Storage Variable – Assigns values based on the triggered intent, guiding the bot's subsequent actions.
For example, linking an Order intent with phrases like 'I'd like to make a purchase' can automate and streamline order processing.
Intent-Specific Variables – Assign variables to intents in a way that the names and values are relevant to the intent's purpose.
Clear Assignment – Ensure that each intent has a clearly defined storage variable to avoid confusion and potential errors in the dialog flow.
Incorrect assignment of variables to intents can lead to inappropriate responses or actions from the bot. It's essential to verify that the intent and variable match correctly to ensure accurate data handling and user interaction.
Variables are organized into groups, each with a specific prefix to indicate its source and type:
The chatbot's statement to the caller, incorporating the selected text snippet. Similar to SSML fields in Blocks, you can add SSML tags (<>) for text formatting, as well as variables (+), into the texts. Read our documentation to learn more about what else the JavaScript fields have to offer.
Click the button, displayed at the top-right of your screen.
Rename your text snippet in the field.
Click the button, or press Enter.
Search for a text snippet by typing in the search bar, displayed at the top-right of your screen.
Please note that fewer intent variables are available in the autocomplete for non-JavaScript fields, as some variables are specifically intended for use in JavaScript fields.
Variables are referenced with double curly brackets using the expression syntax to differentiate and access their values within the conversation flow.
You can access the different types of variables using the autocomplete features. For more information, read our documentation.
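The double-curly-bracket syntax can be sketched as a simple lookup over the variable groups. This is a toy renderer for illustration, not Parloa's implementation, and the variable names are invented:

```javascript
// Toy renderer: resolves {{group.path}} expressions against an object
// mirroring the variable groups (storage, platform, etc.).
const vars = {
  storage: { customerName: "Alex" },
  platform: { name: "phoneV2" },
};

function render(template, vars) {
  return template.replace(/\{\{([\w.]+)\}\}/g, (_, path) =>
    path.split(".").reduce((obj, key) => obj && obj[key], vars)
  );
}

render("Hello {{storage.customerName}}, you are on {{platform.name}}.", vars);
// → "Hello Alex, you are on phoneV2."
```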
| Variable | Type | Description |
| --- | --- | --- |
| intent.confidence | string | The confidence score of the NLU in recognizing the last received intent, expressed as a number ranging from 0.0 (complete uncertainty) to 1.0 (absolute certainty). |
| intent.group | string | Identifies the group or category to which the last received intent belongs. |
| intent.name | string | Name of the last received intent. |
| intent.text | string | Contains the user's exact query or command that triggered the last intent. |
| intent.rawPlatformRequestBody | object | Provides access to the last received HTTP request body. Note: the structure of this object depends on the target platform. |
| intent.rawRequest | object | Provides access to the entire last received request. Note: the structure of this object depends on the target platform. |
| intent.requesterId | string | The requester ID sent in the platform request. |
| intent.rankingRelevant | object | Provides access to the list of all relevant intents in a state with regard to custom thresholds. |
| intent.rankingAll | object | Provides access to the complete NLU intent ranking based on custom thresholds, without considering relevant intents. |
| Group | Prefix | Structure | Type | Description |
| --- | --- | --- | --- | --- |
| Intent | intent. | | string, object | Provides access to information regarding the received intent. |
| Slots | slots. | slots.&lt;slotName&gt; | string | Provides access to recognized slot values. |
| Storage | storage. | storage.&lt;variableName&gt; | string | References previously set, dynamic values. |
| Text Snippet | text. | text.&lt;textsnippetName&gt; | string | References text snippets. |
| Environment | env. | env.&lt;variableName&gt; | string | References previously defined environment variables. |
| Platform | platform. | | string | Provides access to information regarding the platform the skill is used on. |
| Context | context. | | string, boolean, number, date | Provides access to a list of contextual conversation parameters. |
Utilizing Environment Variables for Tailored Configurations
Environment variables are a set of constants that hold key information across different releases within a specific environment. These variables are critical in separating the concerns of various environments, such as development, testing, or production, each potentially requiring different configurations. By using environment variables, companies can manage and switch between these distinct settings seamlessly, ensuring that each environment is isolated and configured with its appropriate resources.
For instance, a production environment might interact with a live customer database, whereas a development environment would use a test database to prevent any impact on real customer data. Environment variables can also store contextual details about the release, such as the language or geographical region, allowing for greater customization and localization of services.
Environment variables are referenced with the env. prefix, as env.&lt;variableName&gt; (type: string), and reference previously defined environment variables.
Environment variables are applied to securely configure and differentiate these services for various environments. By configuring these settings through environment variables, you not only maintain a clear separation between your environments but also enable easy adjustments and scaling of your services. When you deploy or update an environment, these variables automatically apply the correct settings without the need for manual intervention.
Moreover, employing environment variables is a best practice for securing sensitive information such as API keys or database credentials. By keeping these details out of the codebase, you ensure they are not exposed, particularly when your code is stored in version control systems.
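As a sketch of the idea (the variable names and URLs below are invented for illustration), the same dialog logic reads the same variable names and receives environment-specific values:

```javascript
// Invented example values: each environment defines the same variable
// names, so the dialog logic itself never changes between environments.
const envProduction = { apiBaseUrl: "https://api.example.com", dbName: "customers" };
const envDevelopment = { apiBaseUrl: "https://api-test.example.com", dbName: "customers_test" };

function serviceUrl(env, path) {
  return env.apiBaseUrl + path; // used, e.g., when configuring a service call
}

serviceUrl(envProduction, "/v1/orders");  // live endpoint
serviceUrl(envDevelopment, "/v1/orders"); // test endpoint, real data untouched
```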
Enhancing Conversational Dynamics with Reusable Text Snippets
Text snippet variables enable you to reuse predefined pieces of text across multiple responses or within other snippets. They offer flexibility and consistency, ensuring that specific phrases or instructions are consistently communicated in conversations.
Text snippet variables are referenced with the text. prefix, as text.&lt;textsnippetName&gt; (type: string), and reference text snippets.
The "Goodbye" response below incorporates a text snippet variable to reference the snippet named exit:
In this particular scenario, when the system reaches the end of a conversation, it utilizes the exit text snippet to deliver a parting message. This message could be something like, "Thank you for calling, have a great day", which is stored in the exit snippet for easy and consistent reference.
Consistency: Text snippets ensure that certain phrases or messages are delivered uniformly, maintaining a consistent tone and message.
Efficiency: Reusing text snippets saves time by eliminating the need to rewrite common responses.
Flexibility: Snippets can be easily updated in one place, and changes will automatically apply wherever the snippet is referenced.
Clarity: Snippets reduce the complexity of editing and maintaining responses, especially for larger conversational flows.
Capture and Utilize User Information with Slots
Slots are used to store important information during a conversation, allowing the system to understand user requests and provide suitable responses. They enable the system to recognize and remember specific details in a user's input that are crucial for fulfilling requests. Commonly used slots may include a user's name, location, or preferences—all of which are pertinent to the interaction and the AI system's ability to respond appropriately.
The following displays the structure of a slot variable:
Slot variables are referenced with the slots. prefix, as slots.&lt;slotName&gt; (type: string).
Whenever it's necessary to recall information provided by the user, we suggest saving the input value (for example, slots.numbers) in a Storage block, as seen here:
In the visual example provided, storage.persons and storage.duration are variables that reference the numbers slot, which is assigned to an intent.
You can access the initial, unprocessed value of a slot with the variable syntax {{slots.slotname_original}}. This "original value" refers to the exact text as it was understood from the user's input. For instance, the original value for a slot capturing numerical information would reflect the text 'fifteen' rather than the digit '15'. This nuance is crucial, as demonstrated in a transaction within the Debugger tool:
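Since the Debugger screenshot is not reproduced here, the distinction can be illustrated as follows (the values are assumed for illustration):

```javascript
// Normalized vs. original slot value for a number slot.
const slots = {
  numbers: "15",               // {{slots.numbers}}: normalized value
  numbers_original: "fifteen", // {{slots.numbers_original}}: raw user text
};
```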
Please note that slot values are designed to be temporary, relevant primarily to the active intent or the current state of the conversation. They will be replaced as soon as a new value is identified in subsequent user input. Thus, it's prudent to consider slots as ephemeral storage points, which retain information only for the immediate interaction.
Add services to your dialog
Services in Parloa Dialogs are designed to extend the functionality of your chatbots, allowing for more dynamic and interactive conversations. These services enable tasks like database access, email sending, and API integrations.
Using services in your Parloa Dialogs can greatly enhance the conversational experience. For example, services allow you to:
Access Data – Retrieve personalized information from databases.
Send Notifications – Automate email or notification sending based on user interaction.
Process Payments – Manage transaction processes within the conversation.
Integrate with APIs – Connect with external services for additional functionality.
Go to the Services tab.
Click the Trash icon next to the service you want to remove:
For more control over your services, you can host them on your infrastructure. Contact us for the necessary source code and support.
Leveraging Platform-Specific Data for Enhanced Interaction
Platform variables provide access to platform-specific information. Parloa currently supports the following platform variables:
| Variable | Type | Description |
| --- | --- | --- |
| platform.name | string | The platform currently being used. Possible values: phoneV2, phoneV1, gaction, whatsapp. |
| platform.phoneCallerNumber | string | The phone number of the person calling (Phone V1 only). Deprecated: use platform.callerId instead. |
| platform.callerId | string | If available, the caller's phone number. If the caller is anonymous, the caller ID is provided. (Phone V1 & V2 only) |
| platform.sipInviteHeaders | array | The key-value pairs provided in the SIP INVITE message (Phone V2 only). |
For example, if platform.name equals "phoneV1", the bot could execute a set of commands tailored for voice interactions over the phone. If platform.name indicates another platform like WhatsApp, the bot may choose a branch that handles text-based interactions differently, with quick replies or chat buttons.
A Condition block checks the value of platform.name.
Based on the platform identified, the conversational flow diverges onto the relevant branch, ensuring that the interaction is appropriate for the specific platform.
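A sketch of that Condition-block logic in plain JavaScript (the branch names are illustrative):

```javascript
// platform.name is provided by Parloa at runtime; here it is stubbed.
function chooseBranch(platform) {
  if (platform.name === "phoneV1" || platform.name === "phoneV2") {
    return "voice"; // phone-tailored prompts, DTMF handling, etc.
  }
  if (platform.name === "whatsapp") {
    return "text"; // quick replies, chat buttons
  }
  return "default";
}

chooseBranch({ name: "phoneV2" });  // "voice"
chooseBranch({ name: "whatsapp" }); // "text"
```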
Publicly available services provided by Parloa
Parloa offers a wide range of services that you can utilize to add extra functionality easily to each dialog.
This service extension provides more control over parsing dates given in an unusual format, which is especially useful in combination with Speech-to-Text (STT) inaccuracies. By leveraging it, dates are interpreted correctly regardless of format peculiarities, which matters wherever precise date recognition is critical, such as birthdate verification.
The service call returns one of three choices:
success: A valid result was provided.
ambiguous: The input had multiple interpretations, resulting in multiple options in resultList.
invalid: The input could not be identified as a valid date.

Input fields:
slot: The automatically extracted dateTime slot. Make sure it's activated in your speech assets.
locale: Locale of the input. Currently, only the German locale (de) is supported.
twoDigitThreshold: Controls how two-digit years are interpreted. For example, setting it to 20 makes the system interpret '20' as 2020 and '21' as 1921.
maxYear: Sets the upper limit for valid years. Useful for birthdate verification.
partialReturn: Enables the system to return just the day and month for dates exceeding maxYear.
mode: Choose between three modes – birthday, pastdate, and futuredate. The service defaults to birthday.

Output fields:
result: The first valid result.
resultList: The full list of valid options in case an ambiguous input is received.
year: Year of the main result. Empty if partialReturn is active and the input/recognized year is bigger than maxYear.
month: Month of the main result.
day: Day of the main result.
URL: https://parloaservices.azurewebsites.net/api/ParseBirthdate?clientId=<CLIENT_ID>
Header: x-functions-key: <AUTHCODE>
For authentication, replace <AUTHCODE> and <CLIENT_ID> with the values provided by your Parloa representative.
Add the service to your dialog as usual. Below is a sample set of inputs:
Define the maximum valid year according to your needs.
For birthdates, consider a value like 20; for future dates, a higher value like 99 is suitable.
This is the default setting for the mode
field and controls the general behavior of the Service. With this setting, the Service processes the input date as-is without expecting it to be from the past or the future.
Use this mode when you expect the date input to be a past date.
If the input date is from the past, the Service parses it and provides successful output.
Otherwise, the Service returns an invalid result. Incomplete input dates are automatically assumed to be past dates. For example, if the year is missing, the Service fills it in based on the pastdate logic.
Use this mode when you expect the date input to be a future date.
If the input date is from the future, the Service parses it and provides successful output.
Otherwise, the Service returns an invalid result. Incomplete input dates are automatically assumed to be future dates. For example, if the year is missing, the Service fills it in based on the futuredate logic.
Input
Two digit year that the automatic slot turns into a future date:
Output
The year is set to 1942 because the twoDigitThreshold is set at 19, which is less than 42.
Input
Input given either via DTMF or said without delimiters:
Output
Since the month and day are unclear, two date options are returned, 1990-11-01 and 1990-01-11.
An incomplete date is given, which is by default assumed to be in the current year (without partialReturn):
Output: Invalid, since 2021 > 2006.
The same with partialReturn active:
Output: A partial date is returned.
1. Month input utterance that the service converts into a future date:
Output: The service converts the incomplete date to 24-07-01.
2. The following input utterance is an incomplete date, which the service handles as shown:
Output: The service can detect from the input utterance:
Input is given as "mm yy" => converted to 24-11-01
Input is given as "dm yy" => converted to 24-01-01
Additional notes on the input fields:

utterance: The raw utterance of the user. You can retrieve it in a Storage block with intent.rawPlatformRequestBody.text (phoneV2 only) and use the storage variable subsequently in the service call. Add it to the service input by clicking the plus symbol, or by typing "+" and selecting the storage variable set in the previous Storage block.
mode: This input field is case-insensitive, as parsing is performed in the service.
maxYear: For birthdate checking, set it to the current year if any past date is valid, or to a year reflecting the assumed minimum age of users contacting your bot (e.g., today minus 16 or 18 years: moment().year() - 18).
twoDigitThreshold: The default slot extraction uses the closest date to today, which makes 60 become 2060. Setting a lower threshold, e.g. 20, results in 20 -> 2020 but 21 -> 1921.

Alternatively, the service can be called with the API key embedded in the URL: https://parloaservices.azurewebsites.net/api/ParseBirthdate?code=<AUTHCODE>&clientId=<CLIENTID> (you can get your own AUTHCODE and CLIENTID from your Parloa representative). In this form, the URL includes the API key to access the function, and no separate header is needed.

In futuredate mode, the service does not handle only complete dates (e.g. dd-mm-yyyy) as input; rather, it analyzes the user's input and tries to generate valid dates from it. For incomplete input dates, such as when the year is not specified, the service completes them as a day in the future. When the year is specified, the service does not alter the date to convert it into a future date. In ambiguous cases, such as when a number may be interpreted as either a month or a year, the service returns only those dates that lie in the future.

Pastdate mode examples (the today date is 10-Jan-23 for both examples):

1. Month input utterance that the service converts into a past date:

{"input": {"utterance": "02 Dezember", "twoDigitThreshold": "19", "maxYear": "2025", "mode": "pastDate"}}

Output: The year is set to 2022 because of the pastdate mode:

{"choice": "success", "output": {"result": "2022-12-02", "year": "2022", "month": "12", "day": "02"}}

2. The following input utterance is an incomplete date, which the service handles as shown:

{"input": {"utterance": "01 11", "slot": "", "twoDigitThreshold": "80", "maxYear": "2024", "mode": "pastDate"}}

Output: The service can detect two readings of the input utterance:

Input given as "dd mm" => converted to 2022-11-01
Input given as "mm yy" => converted to 2011-01-01

{"choice": "ambiguous", "output": {"result": "2022-11-01", "year": "2022", "month": "11", "day": "01", "resultList": "2022-11-01,2011-01-01"}}
Utilizing Context Variables for Dynamic Dialog Management
Context variables cover a broad range of conversational parameters beyond single interactions. Parloa currently supports the following context variables:
They enable dynamic responses based on the conversation history.
Context variables can be used to repeat or reference previous interactions.
Some context variables are specifically intended for use within JavaScript fields to allow for advanced scripting and customization of the dialog flow.
In the non-JavaScript fields of Parloa's platform, you might encounter a limited set of context variables available via the autocomplete feature. This is due to certain variables being designed exclusively for JavaScript contexts where more complex logic can be applied.
It's crucial not to confuse these context-specific variables with those used for service calls, which are designed for external interactions, such as API requests.
The example provided showcases the use of context.previousPrompt
and context.previousPromptText
within a dialogue block. These particular variables enable the system to repeat the last prompt heard by the user. This functionality is invaluable when the user has not understood or responded appropriately, requiring the system to reiterate the previous message.
The context.previousPrompt
variable will repeat the last SSML (Speech Synthesis Markup Language) content, which might include specific intonation or speech pacing instructions.
The context.previousPromptText
variable repeats the plain text version of the last message, which is useful for text-based platforms or debugging purposes.
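The difference between the two variables can be sketched in plain JavaScript. Note that the context object below is mocked purely for illustration; in Parloa it is supplied by the runtime:

```javascript
// Mocked context object for illustration; in Parloa it is provided by the runtime.
const context = {
  previousPrompt: '<speak>Please tell me your <emphasis>date of birth</emphasis>.</speak>',
  previousPromptText: 'Please tell me your date of birth.'
};

// Repeats the last prompt verbatim, including SSML intonation/pacing markup:
const ssmlRepeat = context.previousPrompt;

// Repeats the plain-text version, useful on text channels or for debugging:
const textRepeat = context.previousPromptText;

console.log(textRepeat);
```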
Developing Custom Services for Integration with Parloa
For seamless integration, Parloa mandates adherence to a specific request-response format within the dialog graph.
Parloa issues an HTTP POST
request to the service URL defined in your configuration, carrying a JSON payload along with any additional headers configured in the technical settings. The request body, structured as shown below, contains dialog-generated variables and meta information about the interaction:
The variables generated during the graph execution are provided in the input
property as string values. Further meta information about the dialog and the interaction is provided in the context
field.
A response with an HTTP status code of 200
is mandatory, with any other codes signaling a fault that redirects to the error
branch. For predictable errors, like invalid inputs, send a status code of 200
with an error message in the body. The expected response JSON format is:
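A minimal sketch of such a response (the branch name and output values below are illustrative, not prescribed by Parloa):

```json
{
  "choice": "success",
  "output": {
    "customerName": "Jane Doe"
  }
}
```

A predictable error would use the same shape with a different choice value, for example "invalidInput", routing the dialog to a branch of that name.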
Avoid Downtimes – For instance, during deployment. Invest in a zero-downtime deployment process, or use another approach to ensure that the API endpoints are always available. Otherwise, ongoing bot dialogs may fail.
Input Strings Only – Only string values can be sent out of Parloa. You can, however, JSON-stringify objects or arrays if needed, and process them accordingly in your service.
No Global State – Parloa cannot store information from the service requests beyond the current dialog execution.
The Address Search service returns a formatted address based on the unformatted address input by you. The formatted address is useful for matching with customer addresses that may be stored in third-party systems. The Address Search service can utilize one of three different third-party address processors: Azure Address Search, Google's Geocode Search, or HERE Geocode Search.
validAddress
: A valid result was found.
invalidAddress
: No valid result was found.
address
: The unformatted address to resolve.
apiKey
: API key for the Address API to be used. This field is optional; if left empty, a key owned by Parloa will be used for the service requests. If you want to use your own key, enter it as the value of this field.
service
: One of azure
, google
, or here
.
language
: Language of the input, for example, de
.
country
: Restrict results to this country. Google and Azure require a 2-letter country code (e.g., DE
), while HERE expects a 3-letter country code (for example, DEU
).
postalCode
: Optional value that restricts results to this postal code.
limit
: Optional value that limits the number of results. Min=1, Max=3. If left empty, the default value is 3.
skipUnset
: Set to a non-empty value to activate. If active, output parameters with empty values will not be set. Note: This will cause Parloa to retain old values instead of resetting them when calling the service multiple times.
rawOutput
: Optional value that returns the raw result from the Address API. Set to a non-empty value to activate.
formattedAddress
: The first result as a properly formatted address string.
The fields listed below will contain the values of the formatted address separately, according to the variable name:
postalCode
city
street
streetNumber
country
state
county
apartment
position
lat
lng
These parameters are repeated for the second and third results when multiple results are returned: formattedAddress2
, postalCode2
, city2
, street2
, streetNumber2
, country2
, state2
, county2
, formattedAddress3
, postalCode3
, city3
, street3
, streetNumber3
, country3
, state3
, county3
.
rawOutput
: Direct API result if activated.
URL: https://parloaservices.azurewebsites.net/api/AddressResolver?clientId=<CLIENT_ID>
x-functions-key: <AUTHCODE>
(You can obtain your AUTHCODE
and CLIENTID
from your Parloa representative).
Note: The URL provided does not include the API key, which is required in the header (x-functions-key
) to access the API.
Add the service to your dialog as usual. A sample input set is shown below:
Address search using Azure search with input fields:
Output fields showing the results, limited to the first address only.
The rest of the output fields will be set to empty.
Address search using Azure search with input fields:
Output fields showing the results, limited to the first address only. The raw output field at the bottom shows the raw result format for the Azure Search service; the other two search services may have differing raw formats:
The rest of the output fields will be set to empty.
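As a sketch, an input set and a matching output for the Azure variant could look like the following; the address, coordinates, and limit are illustrative values, and the output uses the parameter names listed above:

```json
{
  "input": {
    "address": "bergman str 10 berlin",
    "service": "azure",
    "language": "de",
    "country": "DE",
    "limit": "1"
  }
}
```

```json
{
  "choice": "validAddress",
  "output": {
    "formattedAddress": "Bergmannstraße 10, 10961 Berlin",
    "postalCode": "10961",
    "city": "Berlin",
    "street": "Bergmannstraße",
    "streetNumber": "10",
    "country": "Germany",
    "lat": "52.48876",
    "lng": "13.39434"
  }
}
```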
For a list of intent keys, see .
Provides access to a of the last received intent.
For a list of platform keys, see .
For a list of context keys, see .
Environment variables are prominently used within service configurations to manage technical details that vary between environments. The following demonstrates how you can set and use these variables within the screen:
Provides access to a of the last received .
Contact Parloa support at for a clientID
and APIKey
.
An exemplary use case of platform variables is within a block in Parloa. Here's how it's typically structured:
utterance
: The raw utterance of the user. You can retrieve this from a block with the variable intent.rawPlatformRequestBody.text
(applicable to phoneV2 only).
Use a block to capture raw input and set a variable to intent.rawPlatformRequestBody.text
.
Add the dateTime
slot, which must be activated in .
It is crucial that the choice
field is present in the response, as this determines which path to follow in the dialog graph. Information from the service, required for the subsequent dialog execution, must be provided in an output
object. These values can then be easily copied to a variable in Parloa. In this context, it is also possible to return non-string values, which requires further processing in blocks with JavaScript to extract relevant values for use in other contexts.
High Availability – Ensure that the APIs respond within the timeout, which is between 4 and 9 seconds depending on the platform the bot runs on. See for more information.
Authentication – Static authentication methods, such as basic authentication and API key headers, are supported out of the box. See for more information.
Parloa offers a suite of common services ready for use. Familiarize yourself with Parloa's , including the service which adapts existing APIs to Parloa's expected format.
Visit our GitHub repository for service examples which can serve as templates for your service development:
context.accessToken
string
The user's access token. It is only provided for requests made by users who have linked their account with your project.
context.accountIsLinked
boolean
Indicates whether the current user has successfully performed account linking in the past, but does not ensure that the user's account is still linked. If a user revokes the permission in their linked account, the placeholder will still resolve to true
because the system has no way of knowing that the permission was revoked in a third-party system.
context.isNewConversation
boolean
Indicates whether a new conversation is starting with the current request.
context.lastInteraction
date
The date of the last interaction between a user and your project.
Note: This value is null during the first interaction.
context.lastServiceCallError
string
In the event of a failed service call, the error message that is returned can be:
HTTP_ERROR
: The received response code is not in the range 200-300.
ECONNREFUSED
: No connection could be made because the target machine actively refused it.
ECONNRESET
: A connection was forcibly closed by a peer.
ECONNABORTED
: The request was aborted, for example due to a service timeout.
ENOTFOUND
: Indicates a DNS failure.
ETIMEDOUT
: A connect or send request failed because the connected party did not properly respond.
EPIPE
: A write on a pipe, socket, or FIFO for which there is no process to read the data.
UNKNOWN
: Anything else went wrong.
context.lastServiceCallStatus
number
The HTTP Status code received from a Service in the last service request performed. If the last service call returned an error, this variable will be undefined
.
context.locale
string
Contains the language code of the release in BCP 47 format, such as de-DE or en-GB.
context.platform
string
The platform currently being used. Possible values:
phoneV1
phoneV2
gaction
context.previousPrompt
string
The SSML prompt sent with the last response.
context.previousPromptText
string
The prompt displayText sent with the last response.
context.previousReprompt
string
The SSML reprompt sent with the last response.
context.previousRepromptText
string
The reprompt displayText sent with the last response.
context.reasonLastConversationEnded
string
The reason why the last conversation ended. Possible values are:
PARLOA_INITIATED
USER_INITIATED
NO_INPUT
ERROR
UNKNOWN
context.requestCount
number
Tracks the number of back-and-forth interactions a user has had per release. The count starts at 1 for the initial interaction and increases with each subsequent exchange. It resets with different releases, only measuring the turn count for the current release. In contrast, context.sessionCount counts the total calls a user has made to this release.
context.requestCountInSession
number
Tracks the number of conversation turns in the current session. A turn consists of a user input and a bot response. For example, a phone call initiated by the user and the bot's greeting count as one turn. The count starts at 1 and increments with each turn. It resets to 1 with each new phone call.
context.requestScopedRandomValue
number
A random float value in the range [0, 1]. It is constant during a request but changes on each new request.
context.sessionCount
number
The number of conversations a user has had with a specific release, starting at 1 for the first conversation and incrementing with each new conversation. Different versions within the same release do not reset this counter, but different releases will have separate counts, essentially tracking the user's phone call count per release.
context.textblockUsages
string, number
This object contains the usage count for each text snippet. It can be accessed as follows: context.textblockUsages.<textsnippetName>
Parloa’s License Plate Validation Service is designed to process and validate license plates from specified regions – Germany (de), Switzerland (ch), and North America (NA). It checks for common mis-transcriptions between numbers and letters, validates against region-specific license plate formats, and returns the processed input.
Input can be provided as a non-delimited string, like “B ER 1234,” or delimited with a “minus” – “B minus ER 1234.” The latter helps reduce ambiguity, especially beneficial when using phoneV2, as transcription may not reliably add spaces due to factors like talking speed.
For North American regions, separate the region identifier (e.g., state or province) and the license plate, then input both parameters into the service.
Hint: Retain spaces in states or provinces with multi-word names, such as “District of Columbia” or “Nova Scotia.”
single
: A valid single result was found.
multi
: Multiple candidate license plates were found due to ambiguous input or multiple results like “Frankfurt.”
fail
: No valid license plate was detected.
utterance
: The raw utterance of the caller. Retrieve it in a storage block with intent.rawPlatformRequestBody.text
(phoneV2 only), and use the storage variable subsequently in the service call.
region
: Select between “de” (default), “ch” for Swiss, or a North American region* for license plate recognition.
Note: For North American regions, your input to the region parameter should be the name of the state or province.
single
: The single valid result formatted without delimiter
ssml
: The result wrapped in <say-as interpret-as="characters">...</say-as>
SSML, with groups separated by short breaks. Returned as a JSON-stringified array.
list
: In case of ambiguous input or multiple results, a JSON-stringified array of candidates is provided.
countyList
: In case of ambiguous input or multiple results, a JSON-stringified array of county names is provided.
For ambiguous input, the index of array results is aligned between all output parameters (list, countyList, ssml).
URL: https://parloaservices.azurewebsites.net/api/ValidateLicensePlate?clientId=<CLIENT_ID>
Header: x-functions-key
: <AUTHCODE>
For authentication, replace <AUTHCODE>
and <CLIENTID>
with the values provided by your Parloa representative.
License plate from region de
:
License plate from region North America:
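As a sketch, a request for region de and a single-result response could look like the following (the utterance and the resulting plate are illustrative):

```json
{
  "input": {
    "utterance": "B minus ER 1234",
    "region": "de"
  }
}
```

```json
{
  "choice": "single",
  "output": {
    "single": "BER1234"
  }
}
```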
Branches are different types of cases handled by the service in question. For example, you can define a success
branch that handles the successful execution of a service, as shown below. You can give your branches custom names that will then appear in the graph, as well as an optional description. If you do not specify a particular name, the default API name branch_0
is used. Each additional branch increments this number by 1.
If a branch with the name error
exists in the service definition (added by switching on the "Error Branch" toggle in the service block), it is triggered if the service fails.
If the branch is not added, the on error
branch in the Global Intents is triggered in case of an error.
This service enhances the accuracy of transcriptions by correcting common misinterpretations of alphanumeric characters spelled out during phone calls.
For an illustrative guide on the Spelling Post-Processing service, watch the provided short video:
success
: Indicates successful processing of the input.
utterance
: Retrieve your raw utterance from a storage block using intent.rawPlatformRequestBody.text
(phoneV2 only). You can then use this storage variable in the service call.
language
: The language of your input. Currently, German (de) and English (en) are supported.
numbers
: (Optional) Specifies whether mis-transcribed numbers should be corrected. Pass any non-empty value to activate. Note: This parameter is enabled by default for backward compatibility.
letters
: (Optional) Specifies whether mis-transcribed letters should be corrected. Pass any non-empty value to activate. Note: This parameter is enabled by default for backward compatibility.
result
: Your processed input.
ssml
: Your result formatted in SSML, wrapped in <say-as interpret-as="characters">...</say-as>
. Spaces are used to indicate groups separated by a short pause.
URL: https://parloaservices.azurewebsites.net/api/SpellingProcessing?clientId=<CLIENT_ID>
Header: x-functions-key
: <AUTHCODE>
For authentication, replace <AUTHCODE>
and <CLIENTID>
with the values provided by your Parloa representative.
Variables required for service inputs may come from storage variables (for example, lastUtterance
) or be directly input as plain text (for example, the language code en
).
You can use the SSML output to instruct the chatbot on pronouncing each individual letter or number clearly in the dialogue. Alternatively, you may employ the "result" output for further processing and integration of the corrected utterance into the conversation flow.
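For example, an input/output pair for this service could look like the following; the utterance and the corrected result are illustrative, since actual corrections depend on the transcription:

```json
{
  "input": {
    "utterance": "a b e 1 2 3",
    "language": "de"
  }
}
```

```json
{
  "choice": "success",
  "output": {
    "result": "ABE123",
    "ssml": "<say-as interpret-as=\"characters\">ABE123</say-as>"
  }
}
```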
Configuring technical aspects for optimal analytics gathering
To process requests sent by Parloa, you'll need to define an HTTPS endpoint.
Protect your endpoint against unauthorized access by choosing between HTTP Basic Auth and Bearer token authentication. Configure these in the Configuration tab.
Optionally, employ a custom header, such as x-api-key
, for an additional layer of security.
Authorization token
Token
Basic authorization
Username, password
Custom headers that will accompany your request can be specified as needed.
By enabling the cache for a particular service, you cache responses for the current version. The cache is keyed based on the release, input parameters, and the service URL.
Parloa will attempt a retry if:
A timeout occurs.
The status code is within the range of 502 to 504 or equals 429.
Retries will persist until they are exhausted, after which standard error procedures take over.
Customize Parloa's timeout settings between 0ms and 20,000ms by toggling the Override Service Timeout switch on the Service screen.
The default timeout is 4000ms if none is specified.
A specified timeout will take precedence and replace the default setting.
In a timeout scenario, calculate the duration using the formula: service timeout + retries * (service timeout + 50ms)
, where 50ms
accounts for the retry delay.
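A quick worked example of that formula, using the default 4000ms timeout and an assumed two retries:

```javascript
// Worked example of the documented timeout formula.
// Assumptions: the default 4000 ms service timeout and 2 retries;
// 50 ms is the fixed retry delay from the formula.
const serviceTimeoutMs = 4000;
const retries = 2;
const retryDelayMs = 50;

const totalMs = serviceTimeoutMs + retries * (serviceTimeoutMs + retryDelayMs);
console.log(totalMs); // 12100
```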
Parloa sends requests in JSON format via HTTP POST, which includes context and input data tailored to the platform.
The request body sent to your endpoint contains a predefined set of properties, as illustrated below:
Key
Type
Description
context
object
Contains an object with context information.
context.releaseId
string
The Parloa ID of the release which initiated the request.
context.userContextId
string
An internal Parloa ID representing a conversation by mapping the user ID with the release ID.
context.platform
string
The platform interacting with the release. Possible values:
- phoneV1
- phoneV2
- alexa
- dialogflow
context.request
object
The raw request received by Parloa from the platform.
context.conversationId
string
input
object
Contains the key-value pairs representing the input parameters defined in the corresponding Service
block.
The structure of the request body differs based on the platform. Here are examples for phoneV2 and Alexa:
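A sketch of what a phoneV2 request body could look like, based on the property table above (all IDs and input values are illustrative, and context.request is left empty here for brevity):

```json
{
  "context": {
    "releaseId": "rel-1234",
    "userContextId": "ucx-5678",
    "platform": "phoneV2",
    "conversationId": "conv-9012",
    "request": {}
  },
  "input": {
    "lastUtterance": "my order number is 4 7 1 1"
  }
}
```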
Parloa anticipates a 200 HTTP status response in JSON format.
Parloa expects to receive a response body structured as follows:
choice
Yes
string
The name of the branch that should be followed after the Service
block.
output
Yes
object
responseOverwrites
No
object
An object which can contain key-value pairs where the key is the platform name and the value is an object representing a partial response for the platform which will be merged with the response generated by Parloa. Please take into account that no deep-merging will be performed and high-level keys defined in the response will overwrite those set by Parloa. Possible platform keys:
- alexa
- phoneV1
- dialogflow
dynamicEntityOverwrites.alexa
No
UpdateDynamicEntities[]
dynamicEntityOverwrites.<dialogflowPlatform>
No
SessionEntityType[]
Possible Dialogflow platform keys:
- dialogflow
- phoneV1
- whatsapp
speechToTextHints
No
string[]
An array of words or phrases to be used as hints during speech recognition of phoneV2
releases.
An external service must respond within a given timeout with a successful status code, otherwise, Parloa will delegate the control to the error
branch. For more information, please refer to Parloa Errors.
Dialogflow: 4 seconds
PhoneV1: 4 seconds
PhoneV2: 9.5 seconds
Here you can find a sample response for the basic return of data, responseOverwrites
and for dynamicEntityOverwrites
:
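A sketch of such a response, combining data return, responseOverwrites, and speechToTextHints (all keys inside output and all hint values are illustrative; shouldEndSession is a standard field of the Alexa response format):

```json
{
  "choice": "success",
  "output": {
    "orderStatus": "shipped"
  },
  "responseOverwrites": {
    "alexa": {
      "shouldEndSession": true
    }
  },
  "speechToTextHints": ["Bergmannstraße", "Kreuzberg"]
}
```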
Validate and process IBAN information to correct common phone transcription errors and ensure IBAN validity, including SEPA compliance checks.
The IBAN Validation service is designed to validate and correct entered or transcribed IBANs, identifying and fixing common errors during phone communication, ensuring accurate IBAN details.
Beginners: Validate and correct IBAN details with ease.
No specialized technical knowledge is required.
Validates IBANs in real-time.
Identifies common input errors such as:
Missing country code.
Incorrect format.
Inaccurate length.
Invalid checksum.
Service is hosted exclusively in the EU.
This is a sample configuration for an invalid IBAN:
Output from the debugger:
This service returns and/or sets STT (Speech-to-Text) hints for all street names in a postal code area (Germany only). It is particularly useful for an address dialog where the postal code is first inquired, allowing the STT to be primed for all valid streets in that area. This prevents potential issues with transcribing certain street names.
Add the service to your dialog as usual. A sample input set is shown below:
The service expects a 5-digit ZIP code, currently only from Germany.
For example:
This field acts as a boolean. Enter any value to activate it. When activated, an extra output field is provided by the service, containing all the street names belonging to the ZIP code organized as separate strings in an array. This output object is used by the NLU to better match the caller's input.
This field acts as a boolean. Enter any value to activate it. When activated, a list of numbers from 1 to 19 will be added to the speechToTextHints
, helping the NLU model account for street numbers given by the caller.
For example:
Basic request of street names:
Output
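A sketch of such a request and response; the ZIP code and street names below are illustrative:

```json
{
  "input": {
    "zip": "10961",
    "stt": "true"
  }
}
```

```json
{
  "choice": "success",
  "output": {
    "streetNames": "Bergmannstraße,Gneisenaustraße,Zossener Straße"
  }
}
```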
success
: Indicates the email was sent successfully.
fail
: The email could not be sent.
mail_to
: Recipient(s) of the mail. Input must be a valid email address. For multiple recipients, separate each address with a comma (for example, user1@mail.com,user2@mail.com
).
mail_subject
: The subject line of the email.
mail_body
: The main content of the email.
mail_cc
: Specifies CC recipients, allowing you to include additional recipients in a manner that is visible to all other recipients.
error
: Provides detailed information regarding the error if the email fails to send.
URL: https://parloaservices.azurewebsites.net/api/SendMail?clientId=<CLIENT_ID>
Header: x-functions-key
: <AUTHCODE>
For authentication, replace <AUTHCODE>
and <CLIENTID>
with the values provided by your Parloa representative.
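A sketch of an input set for this service; all addresses, the subject, and the body text are placeholders:

```json
{
  "input": {
    "mail_to": "user1@mail.com,user2@mail.com",
    "mail_cc": "supervisor@mail.com",
    "mail_subject": "Your callback request",
    "mail_body": "A customer asked for a callback regarding order 4711."
  }
}
```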
By clicking on the in your service, you can add branches.
The error code is deemed retryable according to .
The unique conversation identifier of the current conversation between the user and your bot. You can use it to retrieve the corresponding .
Be aware that these properties are exclusive to a service call and should not be mistaken for .
An object containing key-value pairs that match the output defined in the .
A list of directives. They are expected in the Alexa specific format. If provided they will be written into the Parloa generated Alexa response.
The uploaded dynamic entities time out after 30 minutes on Alexa.
A list of . They are expected in the Dialogflow specific format. If provided they will be written into the Parloa generated Dialogflow response.
success
: A valid IBAN that passes SEPA tests if enabled.
incomplete
: Appears to be an IBAN but is incomplete based on country-specific format requirements.
no_country
: Missing two-letter country code in the input.
not_sepa
: A valid IBAN outside the SEPA area when SEPA test is enabled.
fail
: All other cases where a valid IBAN is not identified.
You can omit any branches that are not relevant to your specific error handling needs.
utterance
: The raw input from the caller. Access this in Parloa via a storage block with intent.rawPlatformRequestBody.text
(phoneV2 only) and pass it to the service.
assertSEPA
: Set to any non-empty value to perform a SEPA area check on a valid IBAN.
iban
: The correctly formatted IBAN when validation is successful.
ssml
: The IBAN in SSML format, with characters separated by short pauses for clarity.
error
: Detailed error message when the IBAN validation fails.
Possible Errors (source):
WrongBBANFormat
WrongBBANLength
WrongIBANChecksum
WrongAccountBankBranchChecksum
URL: https://parloaservices.azurewebsites.net/api/ValidateIBAN?clientId=<CLIENTID>
Header: x-functions-key: <AUTHCODE>
Replace <AUTHCODE>
and <CLIENTID>
with credentials provided by your Parloa contact.
This is a sample configuration for a valid IBAN:
Output from the debugger:
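As a sketch, a valid-IBAN run could look like the following. DE89 3704 0044 0532 0130 00 is the widely used example IBAN with a valid checksum; the exact debugger output may differ:

```json
{
  "input": {
    "utterance": "DE89 3704 0044 0532 0130 00",
    "assertSEPA": "true"
  }
}
```

```json
{
  "choice": "success",
  "output": {
    "iban": "DE89370400440532013000"
  }
}
```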
success
: Indicates the service call was completed successfully.
zip
: Enter the postal code/ZIP code to retrieve street names.
stt
: To set STT hints, input any non-empty value. This activates STT hints for street names.
includeNumbers
: Option to append numbers 1-19 to the STT hints, enhancing recognition accuracy.
streetNames
: Returns a comma-separated list of street names for the specified area.
Note: If the 'stt' input is provided, the output will include a special speechToTextHints
value, offering tailored STT support.
URL: https://parloaservices.azurewebsites.net/api/StreetNames?clientId=<CLIENT_ID>
Header: x-functions-key
: <AUTHCODE>
For authentication, replace <AUTHCODE>
and <CLIENT_ID>
with the credentials provided by your Parloa representative.
Managing User Interactions
Manage unexpected user inputs to ensure smooth and engaging conversations.
Continue Listening
Enables your system to stay attentive to user input.