Tutorial: Elevating museum experiences with personalized digital guides
A step-by-step guide to building an example solution that delivers hyper-personalized content in real time.
This tutorial explores an example solution powered by Virtue's framework that enhances end user experiences by delivering content that is meticulously tailored to the interests, personality, and mood of users in real time.
During interactions with users, the solution delivers dynamically curated content designed especially for them to increase their engagement and satisfaction.
This tutorial walks you through the low-code approach (advanced integration) to building this example solution using our public APIs.
However, you can also perform some steps, such as creating the project and parameterizing some project components, using our no-code SaaS GUI Portal. To learn more about using the Portal, see Using the Portal.
Business case
Cultural centers such as museums can elevate the experiences they offer to visitors by using personalized digital guides. These intelligent guides can intuitively adjust narratives and vary content in real time, providing visitors with exclusive personalized experiences that cater to their preferences and respond appropriately to their emotional state.
This, in turn, can enable these centers to improve the experiences they offer and boost customer loyalty and satisfaction.
Challenges
Setting up a digital guide has the following challenges:
Conventional digital guides are unable to predict users' profiles and real-time mood to define optimum flows and enhance experience delivery.
They are also unable to customize and curate content to deliver high personalization.
As a solution designer, you have limited observability over user engagement and data to dynamically improve user experience.
Designing and implementing a Virtue-powered solution
Leveraging Affective Computing by Virtue's framework when implementing your solution for this business case is an excellent choice.
You can take advantage of features such as psychological profiling and continuous user calibration to increase customer retention and provide deeply personalized experiences.
Overview
An Affective Computing by Virtue-powered solution that aims at personalizing end-user experiences typically consists of three phases: data collection (diagnostics and calibration), interactions between users and the solution, and result analysis (visualizing data).
We recommend researching and understanding your solution's purpose and goals and the specifics of its real-world implementation, including procuring and setting up all necessary devices and connecting any external tools, before starting to implement your Affective Computing by Virtue-powered solution.
To learn more, see How to set up a solution.
This tutorial focuses on the concepts and steps related to setting up and parameterizing your project after you have all of these things in place.
Data collection - Diagnostics and calibration
Diagnostics is the process in which EDAA™, the underlying framework of Affective Computing by Virtue, analyzes and establishes a preliminary (baseline) psychological profile for each user.
Calibration is the process of validating and adjusting each user's previously established psychological profile based on their current psychological state.
For detailed information, see Diagnostics and calibration.
Both these processes are important parts of data collection, which is the first phase of implementing the solution. Data collection provides your solution with baselines, context, and boundaries.
To collect this initial data, you can either run a real experience or input synthetic data from an external data set.
Interactions between users and the solution
In the next phase of implementing the solution, EDAA™ interacts with end users based on the preconfigured logic blueprints.
For example:
On entering the museum, each user is greeted with:
A personalized audio message that addresses them by name (for example, "Hello, John!")
Welcoming music tailored to their individual preferences
Additionally, throughout the experience, the personalized digital guide has interactive conversations with each user and recommends museum installations that they would enjoy based on their psychological profile and real-time mood.
Result analysis - Visualizing data
In the final phase of implementing the solution, you can analyze user engagement levels in each interaction session to gain data insights and visibility into the solution's performance, which in turn can help you do the following:
Enhance learning and exploration and encourage deeper engagement with the subject matter.
Dynamically guide visitors through personalized museum journeys based on their individual interests, behaviors, and real-time crowd dynamics.
Optimize visitor flow and satisfaction.
Implementing the solution
The process of implementing the solution consists of the following steps:
Step 1: Create the project, which is used for both data collection and running the experience
Step 2: Parameterize the project
Step 3: Run the experience
Step 4: Analyze the results
Working with our APIs
As an internal user responsible for setting up and managing an Affective Computing by Virtue project, you must use an API key and access token to integrate your front end with Orchestra and work with our APIs.
For more information, see Authenticating your data source to use our APIs.
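As a minimal sketch of how these credentials might travel with each request, the helper below attaches them as HTTP headers. The server URL, the header names ("Authorization" and "ApiKey"), and the helper itself are illustrative assumptions, not part of the documented API; follow the scheme described in Authenticating your data source to use our APIs.

```python
import json
import urllib.request

SERVER_URL = "https://example.com"  # stands in for {{Server_URL}}

def build_request(path: str, payload: dict, api_key: str,
                  access_token: str) -> urllib.request.Request:
    """Build an authenticated JSON POST request (header names assumed)."""
    req = urllib.request.Request(
        SERVER_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Bearer {access_token}")
    req.add_header("ApiKey", api_key)
    return req
```

You can reuse a helper like this for every POST endpoint shown in this tutorial.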
Step 1: Create the project
You can create a project by following the procedure described in Creating and managing projects. You can use this project for both data collection and running the experience.
Example: Creating a project
The following example illustrates how to create a project using the Project/Create API:
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project.
Request sample
Response sample
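As a sketch, a request like the one above might be assembled as follows. The body fields (name, description) are illustrative assumptions rather than the documented schema; see Create a project for the actual request sample.

```python
import json

SERVER_URL = "https://example.com"  # stands in for {{Server_URL}}
url = f"{SERVER_URL}/api/services/app/V2/Project/Create"

# Hypothetical request body: these field names are assumptions
# for illustration only.
payload = {
    "name": "museum-digital-guide",
    "description": "Personalized digital guide for museum visitors",
}
body = json.dumps(payload).encode("utf-8")
```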
Step 2: Parameterize the project
After creating the project, you can parameterize it to customize the behavior of Affective Computing by Virtue according to the requirements of your solution.
To learn more about parameterization, see Parameterizing a project.
During this step, you can do the following:
Parameterize data inputs
You must declare (define) data inputs for each type of external data (raw or synthetic) you want to use in your solution. You can do this at the same time as creating the project.
For this solution, you can configure the following data inputs:
Real human user
Physiological
Speech, through a microphone
User motion
Declaration of rooms within the museum
Conversation initiation or trigger and answers (time to answer, number of words, pauses between words, duration of the answer, meaning, and more)
Facial recognition, scanning a QR code, and other interactions with the front end of an application, depending on how the interactive experiences in the museum are designed
For more information, see Understanding data inputs.
Example: Configuring data inputs
The following example illustrates how to declare data inputs using the interactionsSetups parameter of the Project/Create API.
This parameter enables you to declare inputs and their respective operational modes.
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project. Also see Viewing available data inputs (interaction setups).
Request sample
Response sample
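A sketch of such a request body follows. interactionsSetups is the documented parameter; the individual input names and operational-mode values inside it are illustrative assumptions, so check Viewing available data inputs (interaction setups) for the real identifiers.

```python
# Hypothetical Project/Create body declaring data inputs through the
# documented interactionsSetups parameter. The entries below are
# illustrative assumptions only.
payload = {
    "name": "museum-digital-guide",
    "interactionsSetups": [
        {"input": "speech", "operationalMode": "microphone"},
        {"input": "user_motion", "operationalMode": "sensor"},
        {"input": "physiological", "operationalMode": "wearable"},
    ],
}
```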
Parameterize actions and attributes
Affective Computing by Virtue supports the following categories of actions:
Content
Delivering any media, such as images, video, or sound.
Interactions
Delivering statements or asking questions.
Triggered actions
Delivering an action as a response to specific events or conditions.
For this solution, you can create all three categories of actions for different contexts:
You can create content actions to deliver content related to the museum experience. For example, in a digital art museum, you can display images, animation, or videos on a screen and play background music in a loop.
You can create interaction actions for conversations between the digital guide and the user or sharing audio descriptions about an exhibit with the user.
You can create triggered actions to deliver personalized content. For example, when a user enters a room, the solution can display a customized greeting on a screen ("Welcome to the Victorian Room, John!").
Attributes personalize interactions between your solution and end users by shaping and classifying actions. They can be considered as "folders" that group related or similar actions.
For example, an attribute called victorian_room can contain the actions that must be delivered in a specific room called the Victorian Room, such as snippets of information about the Victorian exhibit to be delivered as statements.
Similar to actions, attributes are also categorized as content attributes, interaction attributes, and triggered action attributes. You must group your actions under appropriate attributes of the appropriate category.
For example, considering the example of the Victorian Room, you can create the following attributes and actions:
victorian_room_content, a content attribute grouping the actions video1 and violin_music
victorian_room_interactions, an interaction attribute grouping the actions question1 and introduction_statement
victorian_room_ta, a triggered action attribute grouping the actions personalized_greeting and statement_ta
Because your solution relies on EDAA™ to trigger questions to the participants, it is crucial to design the questions around the experience.
For example, if you want to understand whether a user is interested in learning more about the Victorian era (so that they can be redirected to the Victorian Room), you should design your questions accordingly.
For more information, see:
Example: Creating an action
The following example illustrates how to create a triggered action using the FeedingData/Create API. In this example, the action is triggered when a user enters the Victorian Room:
You can use the feeding_Action_Category_ID parameter to configure the action category. In this example, its value is configured as 3, which denotes triggered actions.
API
POST {{Server_URL}}/api/services/app/v2/FeedingData/Create
To learn more about the API, see Add an action.
Request sample
Response sample
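A sketch of the request body follows. feeding_Action_Category_ID (with the documented value 3 for triggered actions) is the only documented parameter here; the remaining fields are illustrative assumptions, so see Add an action for the actual schema.

```python
TRIGGERED_ACTION_CATEGORY = 3  # documented value denoting triggered actions

# Hypothetical FeedingData/Create body; only feeding_Action_Category_ID
# is documented, the other fields are assumptions.
payload = {
    "feeding_Action_Category_ID": TRIGGERED_ACTION_CATEGORY,
    "name": "personalized_greeting",
    "content": "Welcome to the Victorian Room, John!",
}
```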
Example: Creating an attribute
The following example illustrates how to create a triggered action attribute (to group triggered actions) using the FeedingTriggeredActionAttribute/Create API. In this example, the attribute groups the actions triggered when a user enters the Victorian Room:
API
POST {{Server_URL}}/api/services/app/FeedingTriggeredActionAttribute/Create
To learn more about the API, see Create a triggered action attribute.
Request sample
Response sample
To create a content attribute, you can use the FeedingContentAttribute/Create API. (To learn more about the API, see Create a content attribute.)
To create an interaction attribute, you can use the FeedingInteractionAutomation/Create API. (To learn more about the API, see Create an interaction attribute.)
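To summarize the three attribute-creation endpoints, a small lookup like the one below can help. The triggered action attribute path appears in this tutorial; the content and interaction paths assume the same /api/services/app prefix, and the helper itself is an illustrative convenience, not part of the API.

```python
# Assumed mapping from attribute category to Create endpoint path.
ATTRIBUTE_CREATE_PATHS = {
    "content": "/api/services/app/FeedingContentAttribute/Create",
    "interaction": "/api/services/app/FeedingInteractionAutomation/Create",
    "triggered_action": "/api/services/app/FeedingTriggeredActionAttribute/Create",
}

def attribute_create_path(category: str) -> str:
    """Return the Create endpoint path for an attribute category."""
    return ATTRIBUTE_CREATE_PATHS[category]
```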
Parameterize interaction channels
You must declare an interaction channel for each type of interaction you want to set up between your solution and end users.
Affective Computing by Virtue supports the following types of interaction channels:
Input channel
These channels enable EDAA™ to receive information to change the state of something. For example, if the user can select options to indicate their preferences using an app, you must declare the app's UI as an input channel. (The personalized digital guide could then customize the user's journey through the exhibits on display in the museum based on their responses.)
Output channel
These channels enable EDAA™ to channel an action through something.
For example, if the personalized messages for users must be displayed on a screen, you must declare the screen as the output channel.
For more information, see:
Example 1: Creating an input channel
The following example illustrates how to create an input interaction channel using the InteractionChannel/Create API:
You can use the interaction_Channel_Types_Id parameter to configure the channel type, which determines the direction of data flow in the channel.
In this example, its value is configured as 1, which denotes an input interaction channel.
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
Response sample
Example 2: Creating an output channel
The following example illustrates how to create an output interaction channel using the InteractionChannel/Create API:
You can use the interaction_Channel_Types_Id parameter to configure the channel type, which determines the direction of data flow in the channel.
In this example, its value is configured as 2, which denotes an output interaction channel.
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
Response sample
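The two examples above can be sketched as one payload builder. interaction_Channel_Types_Id (1 for input, 2 for output) is the documented parameter; the name field and the helper are illustrative assumptions.

```python
INPUT_CHANNEL = 1   # documented value: input interaction channel
OUTPUT_CHANNEL = 2  # documented value: output interaction channel

def channel_payload(name: str, channel_type: int) -> dict:
    """Build a hypothetical InteractionChannel/Create request body."""
    return {"name": name, "interaction_Channel_Types_Id": channel_type}

# An app UI users answer through, and a screen that displays messages.
app_ui = channel_payload("visitor_app_ui", INPUT_CHANNEL)
room_screen = channel_payload("victorian_room_screen", OUTPUT_CHANNEL)
```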
Parameterize logics
Logics define (or modify) your solution's behavior and enable you to personalize interactions.
Your solution can deliver an action only if a corresponding logic exists. Therefore, parameterizing logics is a critical step.
A logic has the following components:
Activator
The recipient of the logic based on psychological profile (except in the case of diagnostics logics).
Condition
All events that can trigger the logic.
Action
The resulting action that needs to be delivered. It can either be one specific action or any action grouped under an attribute.
Operators
Logical operators (AND and OR) that define the flow of the logic and enable you to introduce multiple conditions and rules of interdependence between conditions and actions.
You can anchor actions to ensure that EDAA™ doesn’t change (generatively evolve) them and doesn’t generate any actions inside the parent attribute.
You can anchor logics to ensure that EDAA™ doesn’t generate new logics based on it and always uses it as-is. However, anchoring logics reduces personalization.
For more information, see:
For this solution, you can create the following logics:
Diagnostics
Used for the diagnostics process.
Interactions in Q&A format
Defines how the personalized digital guide interacts with users.
Delivering personalized content
Defines how personalized content is delivered to users at specific stages of the experience.
Example: Creating a logic blueprint
The following example describes how to create a logic blueprint using the logics/CreateLogic API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogic
To learn more about the API, see Create a logic blueprint.
Request sample
Response sample
Example: Creating a logic condition
The following example describes how to create and configure the condition that triggers a logic using the logics/CreateLogicCondition API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicCondition
To learn more about the API, see Add a logic condition to a logic blueprint.
Request sample
Response sample
Example: Creating a logic action
The following example describes how to configure the action delivered when a logic is triggered using the logics/CreateLogicAction API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicAction
To learn more about the API, see Add a logic action to a logic blueprint.
Request sample
Response sample
Example: Mapping a logic action with a logic condition
The following example describes how to map a logic action with a logic condition using the logics/CreateLogicActionMapping API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicActionMapping
To learn more about the API, see Map conditions to actions for a logic blueprint.
Request sample
Response sample
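The four examples above form one sequence. The sketch below lists the calls in order; the endpoint paths come from this tutorial, while the id field names (logicId, conditionId, actionId) are illustrative assumptions about how the ids returned by one call feed into the next.

```python
# Hypothetical four-call sequence for assembling one logic blueprint.
def logic_blueprint_requests(logic_id: int, condition_id: int,
                             action_id: int) -> list:
    """Return the (path, body) pairs, in order; body fields are assumed."""
    return [
        ("/api/services/app/logics/CreateLogic", {}),
        ("/api/services/app/logics/CreateLogicCondition", {"logicId": logic_id}),
        ("/api/services/app/logics/CreateLogicAction", {"logicId": logic_id}),
        ("/api/services/app/logics/CreateLogicActionMapping",
         {"conditionId": condition_id, "actionId": action_id}),
    ]

calls = logic_blueprint_requests(10, 20, 30)
```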
Step 3: Run the experience
After designing and testing your solution's workflow and other aspects, you can launch it in the real-world environment (in this case, the museum) and run the experience.
Declaring end-users
Users who interact directly with the solution as end-users and participate in interactions are called external users.
External users could be the same as internal users (administrative users who design and parameterize the solution) or a completely different set of users.
When running the solution, you must declare your end-users (external users).
Doing this generates their user ID, which is a unique ID that enables identifying them, tracking their activity, and monitoring the impact of the solution experience on them.
You can declare end-users using the ExternalUser/Create API. To learn more about the API, see Create a user ID for a new user.
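As a sketch, a declaration request might look like the following. The path prefix is assumed to match the other endpoints in this tutorial, and the body field is an illustrative assumption; see Create a user ID for a new user for the documented schema.

```python
# Hypothetical ExternalUser/Create call; the response would contain
# the generated user ID for the declared end user.
path = "/api/services/app/ExternalUser/Create"  # prefix assumed
payload = {"name": "John"}  # illustrative field
```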
Running the solution in product calibration mode
Product calibration mode is the operational mode in which EDAA™ can establish the solution’s reality (context) and understand the product's motivations.
This mode enables Affective Computing by Virtue to validate and align the solution’s end goal with the ideal outcome of the provided experience.
Before launching your solution by releasing it to production and allowing Affective Computing by Virtue to switch to action mode, we recommend that you do the following:
Run your solution for a week in product calibration mode. You can use the Project/ActivateProductCalibration API to enable this mode.
When running in product calibration mode, your solution does not provide personalization, as this initial run aims at understanding the solution's context.
After EDAA™ establishes the product calibration baseline, turn off this mode. You can use the Project/DeactivateProductCalibration API to disable it.
For more information, see Running a solution in product calibration mode.
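The on/off pair above can be sketched as one small helper. The endpoint names come from this tutorial; the /api/services/app prefix is assumed to match the other Project endpoints.

```python
# Hypothetical helper choosing between the activate and deactivate
# endpoints for product calibration mode.
def product_calibration_path(enable: bool) -> str:
    """Return the path that turns product calibration mode on or off."""
    name = "ActivateProductCalibration" if enable else "DeactivateProductCalibration"
    return f"/api/services/app/Project/{name}"
```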
Running interactions
From Affective Computing by Virtue's point of view, running an experience simply means enabling the interactions between the solution (in this case, the personalized digital guide) and users to execute (run).
To enable running the experience, for each type of interaction it includes, you must do the following:
Initialize interaction sessions using the Interaction/InitializeSession API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/InitializeSession
To learn more about the API, see Initialize an interaction session.
Request sample
Response sample
Interact using the Interaction/Interact API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/Interact
To learn more about the API, see Perform an interaction.
Request sample
Response sample
End interaction sessions using the Interaction/end_interaction API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/end_interaction
To learn more about the API, see End an interaction session.
Request sample
Response sample
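The three-step lifecycle above can be sketched as an ordered sequence. The paths come from the examples; the sessionId field and body shape are illustrative assumptions.

```python
# Hypothetical lifecycle of one interaction session, in call order.
def interaction_session_calls(session_id: str):
    """Yield the (path, body) pairs for one session; bodies are assumed."""
    yield ("/api/app/v2/Interaction/InitializeSession", {"sessionId": session_id})
    yield ("/api/app/v2/Interaction/Interact", {"sessionId": session_id})
    yield ("/api/app/v2/Interaction/end_interaction", {"sessionId": session_id})

steps = list(interaction_session_calls("session-001"))
```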
Step 4: Analyze the results
You can analyze the level of engagement in each session to validate your solution's performance. You can do this using the Reporting/GetEngagementBySession API. See the following example:
API
GET {{Server_URL}}/api/services/app/Reporting/GetEngagementBySession
To learn more about the API, see Viewing insights from project data.
Request sample
Response sample
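Since this is a GET endpoint, parameters travel in the query string. In the sketch below, sessionId is an illustrative parameter name (see Viewing insights from project data for the documented parameters); the server URL stands in for {{Server_URL}}.

```python
from urllib.parse import urlencode

SERVER_URL = "https://example.com"  # stands in for {{Server_URL}}

# Hypothetical query parameters; sessionId is an assumed name.
params = {"sessionId": "session-001"}
url = (f"{SERVER_URL}/api/services/app/Reporting/GetEngagementBySession"
       f"?{urlencode(params)}")
```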
Additionally, based on the requirements of your solution, you can use an external tool to consume the results.