Tutorial: Elevating museum experiences with personalized digital guides
A step-by-step guide to building an example solution that delivers hyper-personalized content in real time.
This tutorial explores an example solution powered by Virtue's framework that enhances end-user experiences by delivering content meticulously tailored to the interests, personality, and mood of users in real time.
During interactions with users, the solution delivers dynamically-curated content designed especially for them to increase their engagement and satisfaction.
Business case
Cultural centers such as museums can elevate the experiences they offer to visitors by using personalized digital guides. These intelligent guides can intuitively adjust narratives and vary content in real time, providing visitors with exclusive personalized experiences that cater to their preferences and respond appropriately in accordance with their emotional state.
This, in turn, can enable these centers to improve the experiences they offer and boost loyalty and customer satisfaction.
Challenges
Setting up a digital guide presents the following challenges:
Conventional digital guides are unable to predict users' profiles and real-time mood to define optimum flows and enhance experience delivery.
They are also unable to customize and curate content to deliver high personalization.
As a solution designer, you have limited observability over user engagement and data to dynamically improve user experience.
Designing and implementing a Virtue-powered solution
Leveraging Affective Computing by Virtue's framework when implementing your solution for this business case is an excellent choice.
You can take advantage of features such as psychological profiling and continuous user calibration to increase customer retention and provide deeply personalized experiences.
Overview
An Affective Computing by Virtue-powered solution that aims at personalizing end-user experiences typically consists of the following phases:
Data collection - Diagnostics and calibration
Diagnostics is the process in which EDAA™, the underlying framework of Affective Computing by Virtue, analyzes and establishes a preliminary (baseline) psychological profile for each user.
Calibration is the process of validating and adjusting each user's previously-established psychological profile based on their current psychological state.
For detailed information, see Diagnostics and calibration.
Both these processes are important parts of data collection, which is the first phase of implementing the solution. Data collection provides your solution with baselines, context, and boundaries.
To collect this initial data, you can either run a real experience or input synthetic data from an external data set.
Interactions between users and the solution
In the next phase of implementing the solution, EDAA™ interacts with end users based on the preconfigured logic blueprints.
For example:
On entering the museum, each user is greeted with:
A personalized audio message that addresses them by name (for example, "Hello, John!")
Welcoming music tailored to their individual preferences
Additionally, throughout the experience, the personalized digital guide has interactive conversations with each user and recommends museum installations that they would enjoy based on their psychological profile and real-time mood.
Result analysis - Visualizing data
In the final phase of implementing the solution, you can analyze user engagement levels in each interaction session to gain data insights and visibility into the solution's performance, which in turn can help you do the following:
Enhance learning and exploration and encourage deeper engagement with the subject matter.
Dynamically guide visitors through personalized museum journeys based on their individual interests, behaviors, and real-time crowd dynamics.
Optimize visitor flow and satisfaction.
Implementing the solution
The process of implementing the solution consists of the following steps:
Step 1: Create the project, which is used for both data collection and running the experience
Step 2: Parameterize the project
Step 3: Run the experience
Step 4: Analyze the results
Working with our APIs
As an internal user responsible for setting up and managing an Affective Computing by Virtue project, you must use the API key and access token that enable you to integrate your front end with Orchestra and work with our APIs.
For more information, see Authenticating your data source to use our APIs.
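As a minimal illustration, the following Python sketch shows one way a front end might wrap calls to these APIs. The server URL, header names, and credential values are assumptions for this example; use the exact authentication scheme described in Authenticating your data source to use our APIs.
import requests

# Minimal sketch, assuming a server URL and header-based credentials.
# The header names below are illustrative assumptions; follow the exact scheme
# described in "Authenticating your data source to use our APIs".
SERVER_URL = "https://your-orchestra-server.example.com"
API_KEY = "your-api-key"
ACCESS_TOKEN = "your-access-token"

def call_api(path: str, payload: dict) -> dict:
    """POST a JSON payload to an API endpoint and return the parsed response."""
    response = requests.post(
        f"{SERVER_URL}{path}",
        json=payload,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",  # assumed header name
            "api-key": API_KEY,                          # assumed header name
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
For example, the Project/Create request shown in Step 1 below could be sent with call_api("/api/services/app/V2/Project/Create", request_body).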
Step 1: Create the project
You can create a project by following the procedure described in Creating and managing projects. You can use this project for both data collection and running the experience.
Example: Creating a project
The following example illustrates how to create a project using the Project/Create API:
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project.
Request sample
{
"projectName": "Museum Digital Guide",
"project_Domain_Id": 4, // 4 -> Digital user experience
"project_Function_Id": 10, // 10 -> Music personalization by emotions
"project_Purpose": "General", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser": false,
"productCalibrationStatus": true, // Runs the project in product calibration mode.
"interactionSetups": [
{
"interaction_setup_id": 5,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3, // Configuring the interaction setup for Speech
"interaction_mode": "user_calibration"
},{
"interaction_setup_id": 8, // Configuring the interaction setup for Time to answer
"interaction_mode": "user_calibration"
}
]
}
Response sample
{
"result": {
"projectName": "Museum Digital Guide",
"duplicated_Project_Id": null,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 10,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": true,
"isRecurringUser": false,
"project_Purpose": "General",
"relatesTo_Project_Id": null,
"project_Status": "draft",
"id": 1234
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 2: Parameterize the project
After creating the project, you can parameterize it to customize the behavior of Affective Computing by Virtue according to the requirements of your solution.
To learn more about parameterization, see Parameterizing a project.
During this step, you can do the following:
Parameterize data inputs
You must declare (define) data inputs for each type of (raw or synthetic) external data you want to utilize in your solution. You can do this at the same time as creating the project.
For this solution, you can configure the following data inputs:
Real human user
Physiological
Speech, through a microphone
User motion
Declaration of rooms within the museum
Conversation initiation or trigger and answers (Time to answer, number of words, pauses between words, duration of the answer, meaning, and more)
Facial recognition, scanning a QR code, and other interactions with an application's front end, depending on how the interactive experiences in the museum are designed
For more information, see Understanding data inputs.
Example: Configuring data inputs
The following example illustrates how to declare data inputs using the interactionSetups parameter of the Project/Create API.
This parameter enables you to declare inputs and their respective operational modes.
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project. Also see Viewing available data inputs (interaction setups).
Request sample
{
"projectName": "Museum Digital Guide",
"project_Domain_Id": 4, // 4 -> Digital user experience
"project_Function_Id": 10, // 10 -> Music personalization by emotions
"project_Purpose": "General", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser": false,
"productCalibrationStatus": true, // Runs the project in product calibration mode.
"interactionSetups": [
{
"interaction_setup_id": 5,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3, // Configuring the interaction setup for Speech
"interaction_mode": "user_calibration"
},{
"interaction_setup_id": 8, // Configuring the interaction setup for Time to answer
"interaction_mode": "user_calibration"
}
]
}
Response sample
{
"result": {
"projectName": "Museum Digital Guide",
"duplicated_Project_Id": null,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 10,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": true,
"isRecurringUser": false,
"project_Purpose": "General",
"relatesTo_Project_Id": null,
"project_Status": "draft",
"id": 1234
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize actions and attributes
Affective Computing by Virtue supports the following categories of actions:
Content
Delivering any media, such as images, video, or sound.
Interactions
Delivering statements or asking questions.
Triggered actions
Delivering an action as a response to specific events or conditions.
For this solution, you can create all three categories of actions for different contexts:
You can create content actions to deliver content related to the museum experience. For example, in a digital art museum, you can display images, animation, or videos on a screen and play background music in a loop.
You can create interaction actions for conversations between the digital guide and the user or sharing audio descriptions about an exhibit with the user.
You can create triggered actions to deliver personalized content. For example, when a user enters a room, the solution can display a customized greeting on a screen ("Welcome to the Victorian Room, John!").
Attributes personalize interactions between your solution and end-users by shaping and classifying actions. They can be considered as "folders" that group related or similar actions.
For example, an attribute called victorian_room can contain the actions that must be delivered in a specific room called the Victorian Room, such as snippets of information about the Victorian exhibit to be delivered as statements.
Similar to actions, attributes are categorized as content attributes, interaction attributes, and triggered action attributes. You must group your actions under attributes of the appropriate category.
For example, continuing with the Victorian Room, you can create the following attributes and actions:
victorian_room_content (content attribute): video1, violin_music
victorian_room_interactions (interaction attribute): question1, introduction_statement
victorian_room_ta (triggered action attribute): personalized_greeting, statement_ta
Because your solution relies on EDAA™ to trigger questions to the participants, it is crucial to design the questions around the experience.
For example, if you want to understand whether the user is interested in learning more about the Victorian era (based on which they can be redirected to the Victorian Room), you should design your questions accordingly.
For more information, see:
Example: Creating an action
The following example illustrates how to create a triggered action using the FeedingData/Create API. In this example, the action is triggered when a user enters the Victorian Room:
API
POST {{Server_URL}}/api/services/app/v2/FeedingData/Create
To learn more about the API, see Add an action.
Request sample
{
"projectId": {{projectID}}, // The project ID of the project
"identity": "user_enters_victorian_room",
"feeding_Value": "user_enters_victorian_room",
"feeding_Action_Category_ID": 3, // 3 => Triggered action (The action category)
"feeding_Action_Type_ID": 18, // 18 => Value (The action type)
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true
}
Response sample
{
"result": {
"projectId": 742,
"identity": "user_enters_victorian_room",
"feeding_Value": "user_enters_victorian_room",
"feeding_Action_Type_ID": 18,
"feeding_Action_Category_ID": 3,
"isImported": false,
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true,
"id": 127544
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating an attribute
The following example illustrates how to create a triggered action attribute (to group triggered actions) using the FeedingTriggeredActionAttribute/Create API. In this example, the attribute groups the actions triggered when a user enters the Victorian Room:
API
POST {{Server_URL}}/api/services/app/FeedingTriggeredActionAttribute/Create
To learn more about the API, see Create a triggered action attribute.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the project
"name": "victorian_room_entry",
"action_Type_ID": 18, // 18 => Value (The action type)
"feedingDataIds": [
127544,127545,127546 // The triggered actions that are grouped under the attribute
]
}
Response sample
{
"result": {
"id": 530,
"name": "victorian_room_entry",
"isDeleted": false,
"isImported": false,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"action_Type_ID": 18,
"tenantName": null,
"projectName": null,
"project_Domain_Name": null,
"project_Function_Name": null,
"feedingDatasIds": null
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
To create a content attribute, you can use the FeedingContentAttribute/Create API. (To learn more about the API, see Create a content attribute.)
To create an interaction attribute, you can use the FeedingInteractionAutomation/Create API. (To learn more about the API, see Create an interaction attribute.)
Parameterize interaction channels
You must declare an interaction channel for each type of interaction you want to set up between your solution and end-users.
Affective Computing by Virtue supports the following types of interaction channels:
Input channel
These channels enable EDAA™ to receive information to change the state of something. For example, if the user can select options to indicate their preferences using an app, you must declare the app's UI as an input channel. (The personalized digital guide could then customize the user's journey through the exhibits on display in the museum based on their responses.)
Output channel
These channels enable EDAA™ to channel an action through something.
For example, if the personalized messages for users must be displayed on a screen, you must declare the screen as the output channel.
For more information, see:
Example 1: Creating an input channel
The following example illustrates how to create an input interaction channel using the InteractionChannel/Create API:
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the project
"interaction_Channel_Types_Id": 1, // 1 => Input (Channel type)
"interaction_Input_Types_Id": 3, // 3 => QR (Input type)
"identifier": "victorian_room_ic",
"value": "victorian_room_ic",
"active": true,
"interaction_Input_Category_Id": 472 // The triggered action attribute category that groups actions triggered when a user enters a room
}
Response sample
{
"result": {
"tenantId": 6,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"interaction_Input_Types_Id": null,
"tenantName": null,
"projectName": null,
"identifier": "victorian_room_ic",
"value": null,
"active": false,
"interaction_Input_Category_Id": null,
"interaction_Input_Category_Name": null,
"triggered_Action_Name": null,
"triggered_Action_Id": null,
"isActive": true,
"destination_Entity_Name": null,
"destination_Entity_Object_Name": null,
"destination_Entity_Types_Id": null,
"destination_Entity_Types_Name": null,
"destination_Entity_Id": null,
"destination_Entity_Object_Id": null,
"interaction_Channel_Types_Id": 1,
"id": 2480
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example 2: Creating an output channel
The following example illustrates how to create an output interaction channel using the InteractionChannel/Create API:
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the project
"interaction_Channel_Types_Id": 2, // 2 => Output (Channel type)
"identifier": "victorian_room_oc",
"value": "display_greeting",
"active": true,
"Triggered_Action_Attribute_Id": 530, // The triggered action attribute category that groups the required triggered actions
}Response sample
{
"result": {
"tenantId": 6,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"interaction_Input_Types_Id": null,
"tenantName": null,
"projectName": null,
"identifier": "victorian_room_oc",
"value": null,
"active": false,
"interaction_Input_Category_Id": null,
"interaction_Input_Category_Name": null,
"triggered_Action_Name": null,
"triggered_Action_Id": null,
"isActive": true,
"destination_Entity_Name": null,
"destination_Entity_Object_Name": null,
"destination_Entity_Types_Id": 2,
"destination_Entity_Types_Name": null,
"interaction_Channel_Types_Id": 2,
"id": 2481
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize logics
Logics define (or modify) your solution's behavior and enable you to personalize interactions.
Your solution can deliver an action only if a corresponding logic exists. Therefore, parameterizing logics is a critical step.
A logic has the following components:
Activator
The recipient of the logic based on psychological profile (except in the case of diagnostics logics).
Condition
All events that can trigger the logic.
Action
The resulting action that needs to be delivered. It can be either one specific action or any action grouped under an attribute.
Operators
Logical operators (AND and OR) that define the flow of the logic and enable you to introduce multiple conditions and rules of interdependence between conditions and actions.
For more information, see:
For this solution, you can create the following logics:
Diagnostics
Used for the diagnostics process.
Activator
All new users
Condition
Whenever a new user is detected by EDAA™ (and is interacting with the solution for the first time)
Action
A set of questions to help EDAA™ establish the preliminary psychological profile of the user:
Five initial questions are pre-defined by EDAA™.
(Optional) You can create three additional questions to establish the user's persona type (to activate specific actions by leveraging attribute names).
Interactions in Q&A format
Defines how the personalized digital guide interacts with users.
Activator
All profiles
Condition
A user leaves a room after viewing an exhibit.
Action
The attribute that contains the questions.
Delivering personalized content
Defines how personalized content is delivered to users at specific stages of the experience
Activator
Specific profile type
Condition
A user is detected through facial recognition or because they scanned a QR code, which provides EDAA™ with their location.
Action
The attribute that contains the personalized content.
Example: Creating a logic blueprint
The following example describes how to create a logic blueprint using the logics/CreateLogic API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogic
To learn more about the API, see Create a logic blueprint.
Request sample
{
"logicName": "Movement_victorian_room",
"bluePrinting_logics_type_id": 2, // 2 => User calibration (logic type)
"projectId": {{data_collection_projectID}}, // The project ID of the project
"activator": 1, // 1 => Profile (Logic activator)
"activator_Type": 22, // 2 => All profiles (Logic activator type)
"anchored": true
}
Response sample
{
"result": {
"logicId": 48165
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating a logic condition
The following example describes how to create and configure the condition that triggers a logic using the logics/CreateLogicCondition API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicCondition
To learn more about the API, see Add a logic condition to a logic blueprint.
Request sample
{
"logicId": 48165,
"logicConditionList": [
{
"condition_id": 4, // 4 => Environmental (condition)
"condition_type_id": 39, // 39 => Camera ID (condition type)
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2480 // The input of a specific camera detecting a user
}
]
}, {
"condition_id": 2, // 2 => User motion (condition)
"condition_type_id": 6, // 6 => User detected (condition type)
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2482 // The input of detecting a user
}
]
}, {
"condition_id": 7, // 7 => User Geotargeting (condition)
"condition_type_id": 44, // 44 => QR (condition type)
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2482 // The input of a user scanning a specific QR code
}
]
}
]
}
Response sample
{
"result": {
"condition_Ids": [
53036, // The condition ID for camera facial recognition
53037, // The condition ID for user detection
53038 // The condition ID for scanning a QR code
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating a logic action
The following example describes how to configure the action delivered when a logic is triggered using the logics/CreateLogicAction API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicAction
To learn more about the API, see Add a logic action to a logic blueprint.
Request sample
{
"logicId": 48165,
"logicActionList": [
{
"execution_order": 0,
"feeding_content_Direction_Id": 3, // 3 => Output as interaction channel
"action_type_Id": 4, // 4 => Triggered action
"interaction_Channels_Id": 2481, // The interaction channel for actions that occur when a user enters the room
"anchored": true
}
]
}
Response sample
{
"result": {
"actionIds": [
22903
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Mapping a logic action with a logic condition
The following example describes how to map a logic action with a logic condition using the logics/CreateLogicActionMapping API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicActionMapping
To learn more about the API, see Map conditions to actions for a logic blueprint.
Request sample
{
"logicId": 48165, // The logic
"conditionActionList": [
{
"conditionId": 53036, // The condition ID for camera facial recognition
"actionId": 22903, // The interaction channel that contains triggered actions of possible actions when the user enters the room
"logical_Operator": "And" // As one condition is linked to one action, this doesn't have any effect
}
]
}
Response sample
{
"result": {
"conditionActionMappingId": [
17375
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 3: Run the experience
After designing and testing your solution's workflow and other aspects, you can launch it in the real-world environment (in this case, the museum) and run the experience.
Declaring end-users
Users who interact directly with the solution as end-users and participate in interactions are called external users.
External users could be the same or a completely different set of users from internal users (administrative users who design and parameterize the solution).
When running the solution, you must declare your end-users (external users).
Doing this generates their user ID, which is a unique ID that enables identifying them, tracking their activity, and monitoring the impact of the solution experience on them.
You can declare end-users using the ExternalUser/Create API. To learn more about the API, see Create a user ID for a new user.
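As a rough sketch, the snippet below illustrates declaring an end-user through the ExternalUser/Create API. The endpoint path follows the pattern of the other APIs in this tutorial, and the request fields (projectId, userName) are illustrative assumptions; see Create a user ID for a new user for the actual request schema, and reuse the authentication scheme from the sketch in Working with our APIs.
import requests

# Minimal sketch: declare an end-user (external user) and capture the generated user ID.
# The endpoint path and request fields are assumptions based on the patterns in this
# tutorial; see "Create a user ID for a new user" for the documented schema.
SERVER_URL = "https://your-orchestra-server.example.com"   # assumption
HEADERS = {"Authorization": "Bearer your-access-token"}     # assumption; see the authentication sketch

response = requests.post(
    f"{SERVER_URL}/api/services/app/ExternalUser/Create",
    json={
        "projectId": 1234,   # the project created in Step 1
        "userName": "John",  # illustrative assumption
    },
    headers=HEADERS,
    timeout=30,
)
response.raise_for_status()
external_user_id = response.json()["result"]["id"]  # assumed location of the generated user ID
print(f"Declared end-user with ID: {external_user_id}")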
Running the solution in product calibration mode
Product calibration mode is the operational mode in which EDAA™ can establish the solution’s reality (context) and understand the product's motivations.
This mode enables Affective Computing by Virtue to validate and align the solution’s end goal with the ideal outcome of the provided experience.
Before launching your solution by releasing it to production and allowing Affective Computing by Virtue to switch to action mode, we recommend that you do the following:
Run your solution for a week in product calibration mode. You can use the Project/ActivateProductCalibration API to enable this mode.
After EDAA™ establishes the product calibration baseline, turn off this mode. You can use the Project/DeactivateProductCalibration API to disable product calibration mode. (A sketch of both calls follows this list.)
For more information, see Running a solution in product calibration mode.
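The following sketch illustrates toggling product calibration mode around the one-week calibration run. The endpoint paths and the projectId field are assumptions inferred from the API names and the patterns in this tutorial; see Running a solution in product calibration mode for the documented calls.
import requests

# Minimal sketch: enable product calibration mode before the calibration run, then
# disable it once EDAA establishes the product calibration baseline.
# Paths and request fields are assumptions inferred from the API names above.
SERVER_URL = "https://your-orchestra-server.example.com"   # assumption
HEADERS = {"Authorization": "Bearer your-access-token"}     # assumption; see the authentication sketch
PROJECT_ID = 1234                                           # the project created in Step 1

# Enable product calibration mode.
requests.post(
    f"{SERVER_URL}/api/services/app/Project/ActivateProductCalibration",
    json={"projectId": PROJECT_ID},
    headers=HEADERS,
    timeout=30,
).raise_for_status()

# ... run the experience in product calibration mode for about a week ...

# Disable product calibration mode so the solution can switch to action mode.
requests.post(
    f"{SERVER_URL}/api/services/app/Project/DeactivateProductCalibration",
    json={"projectId": PROJECT_ID},
    headers=HEADERS,
    timeout=30,
).raise_for_status()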
Running interactions
From Affective Computing by Virtue's point of view, running an experience simply means enabling the interactions between the solution (in this case, the personalized digital guide) and users to execute (run).
To enable running the experience, for each type of interaction it includes, you must do the following:
Initialize interaction sessions using the Interaction/InitializeSession API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/InitializeSession
To learn more about the API, see Initialize an interaction session.
Request sample
{
"externalUserID":"50c91e36-3f69-4245-ad20-53db39d780c9", // unique identifier of the user
"projectID": 493, // The project ID of the project
"foreign_identity": "Determining whether the user would be interested in the Victorian exhibit",
"language": "en",
"client_ip": "10.1.192.128"
}
Response sample
{
"result": {
"interaction_Session": 1269064,
"isValidated": false,
"force_user_calibration": false
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Interact using the Interaction/Interact API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/Interact
To learn more about the API, see Perform an interaction.
Request sample
{
"interaction_Session": 35006, // session ID of the initialized interaction session
"beats_Per_Minute":75,
"time_Taken_To_Response": "1",
"interact_type": "external_signal",
"interact_value": "victorian_room_ic", // Interaction channel that provides the input as an external signal
"mode": "action"
}
Response sample
{
"result": {
"sound": null,
"statement": "",
"question": "",
"content": [],
"music": "",
"action": null,
"interaction_channel_id": 530, // Interaction channel ID of the input channel configured as the interact value
"triggered_action": "victorian_room_ta", // Triggered action delivered as part of the interaction
"last_stage": false,
"repeat_stage": false,
"audio_speed": "medium",
"change_mode": null,
"status": "success",
"errorcode": "",
"errorMessage": ""
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
End interaction sessions using the Interaction/end_interaction API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/end_interaction
To learn more about the API, see End an interaction session.
Request sample
{
"interaction_Session": 35006
}
Response sample
{
"result": true,
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 4: Analyze the results
You can analyze the level of engagement in each session to validate your solution's performance. You can do this using the Reporting/GetEngagementBySession API. See the following example:
API
GET {{Server_URL}}/api/services/app/Reporting/GetEngagementBySession
To learn more about the API, see Viewing insights from project data.
Request sample
{
"sessionId": 238944,
"projectID": {{projectID}}, // The project ID of the project
"foreign_identity": "Conversation between digital guide and user",
"language": "en",
"client_ip": "10.1.192.128"
}
Response sample
{
"result": [
{
"sessionId": 238944,
"engagement": 0.0,
"stage": 0,
"action": "",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 1,
"action": "Hello , how are you ?",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 1.0,
"stage": 4,
"action": "What era of English history do you prefer?",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 5,
"action": "We have an excellent interactive exhibit about the Tudor era, how do you feel about that?",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 6,
"action": "Yes, we also have a collection of paintings in the Victorian Room.",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 7,
"action": "The application will guide you to the Victorian Room.",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 38.0,
"stage": 8,
"action": "Unfortunately, that exhibit is closed right now. Can I direct you to the Victorian room?",
"entityId": 16812,
"object_status": []
}
],
"targetUrl": null,
"success": true,Additionally, based on the requirements of your solution, you can use an external tool to consume the results.